Datasets:
de-francophones
committed on
Commit
•
389a56c
1 Parent(s): 97b793d
06d5a8dd885b5dc8b9cc82a7031bbcca74d04acb6c9f128f1df75a73ab72e81d
- en/2574.html.txt +111 -0
- en/2575.html.txt +111 -0
- en/2576.html.txt +0 -0
- en/2577.html.txt +0 -0
- en/2578.html.txt +194 -0
- en/2579.html.txt +122 -0
- en/258.html.txt +112 -0
- en/2580.html.txt +0 -0
- en/2581.html.txt +0 -0
- en/2582.html.txt +0 -0
- en/2583.html.txt +147 -0
- en/2584.html.txt +219 -0
- en/2585.html.txt +147 -0
- en/2586.html.txt +147 -0
- en/2587.html.txt +239 -0
- en/2588.html.txt +49 -0
- en/2589.html.txt +224 -0
- en/259.html.txt +102 -0
- en/2590.html.txt +227 -0
- en/2591.html.txt +105 -0
- en/2592.html.txt +99 -0
- en/2593.html.txt +248 -0
- en/2594.html.txt +57 -0
- en/2595.html.txt +57 -0
- en/2596.html.txt +57 -0
- en/2597.html.txt +82 -0
- en/2598.html.txt +93 -0
- en/2599.html.txt +0 -0
- en/26.html.txt +145 -0
- en/260.html.txt +142 -0
- en/2600.html.txt +0 -0
- en/2601.html.txt +0 -0
- en/2602.html.txt +0 -0
- en/2603.html.txt +123 -0
- en/2604.html.txt +23 -0
- en/2605.html.txt +23 -0
- en/2606.html.txt +31 -0
- en/2607.html.txt +31 -0
- en/2608.html.txt +31 -0
- en/2609.html.txt +27 -0
- en/261.html.txt +142 -0
- en/2610.html.txt +27 -0
- en/2611.html.txt +218 -0
- en/2612.html.txt +218 -0
- en/2613.html.txt +243 -0
- en/2614.html.txt +243 -0
- en/2615.html.txt +265 -0
- en/2616.html.txt +282 -0
- en/2617.html.txt +155 -0
- en/2618.html.txt +155 -0
en/2574.html.txt
ADDED
@@ -0,0 +1,111 @@
The barn swallow (Hirundo rustica) is the most widespread species of swallow in the world.[2] It is a distinctive passerine bird with blue upperparts and a long, deeply forked tail. It is found in Europe, Asia, Africa and the Americas. In Anglophone Europe it is just called the swallow; in Northern Europe it is the only common species called a "swallow" rather than a "martin".[3]
There are six subspecies of barn swallow, which breed across the Northern Hemisphere. Four are strongly migratory, and their wintering grounds cover much of the Southern Hemisphere as far south as central Argentina, the Cape Province of South Africa, and northern Australia. Its huge range means that the barn swallow is not endangered, although there may be local population declines due to specific threats.
The barn swallow is a bird of open country that normally uses man-made structures to breed and consequently has spread with human expansion. It builds a cup nest from mud pellets in barns or similar structures and feeds on insects caught in flight.[4] This species lives in close association with humans, and its insect-eating habits mean that it is tolerated by humans; this acceptance was reinforced in the past by superstitions regarding the bird and its nest. There are frequent cultural references to the barn swallow in literary and religious works due to both its living in close proximity to humans and its annual migration.[5] The barn swallow is the national bird of Austria and Estonia.
The adult male barn swallow of the nominate subspecies H. r. rustica is 17–19 cm (6.7–7.5 in) long including 2–7 cm (0.79–2.76 in) of elongated outer tail feathers. It has a wingspan of 32–34.5 cm (12.6–13.6 in) and weighs 16–22 g (0.56–0.78 oz). It has steel blue upperparts and a rufous forehead, chin and throat, which are separated from the off-white underparts by a broad dark blue breast band. The outer tail feathers are elongated, giving the distinctive deeply forked "swallow tail". There is a line of white spots across the outer end of the upper tail.[4] The female is similar in appearance to the male, but the tail streamers are shorter, the blue of the upperparts and breast band is less glossy, and the underparts paler. The juvenile is browner and has a paler rufous face and whiter underparts. It also lacks the long tail streamers of the adult.[2]
The song of the male barn swallow is a cheerful warble, often ending with su-seer with the second note higher than the first but falling in pitch. Calls include witt or witt-witt and a loud splee-plink when excited (or trying to chase intruders away from the nest).[4] The alarm calls include a sharp siflitt for predators like cats and a flitt-flitt for birds of prey like the hobby.[6] This species is fairly quiet on the wintering grounds.[7]
The distinctive combination of a red face and blue breast band renders the adult barn swallow easy to distinguish from the African Hirundo species and from the welcome swallow (Hirundo neoxena), with which its range overlaps in Australasia.[2] In Africa the short tail streamers of the juvenile barn swallow invite confusion with the juvenile red-chested swallow (Hirundo lucida), but the latter has a narrower breast band and more white in the tail.[8]
The barn swallow was described by Carl Linnaeus in his 1758 10th edition of Systema Naturae as Hirundo rustica, characterised as "H. rectricibus, exceptis duabus intermediis, macula alba notatîs".[9] Hirundo is the Latin word for "swallow"; rusticus means "of the country".[10] This species is the only one of that genus to have a range extending into the Americas, with the majority of Hirundo species being native to Africa. This genus of blue-backed swallows is sometimes called the "barn swallows".[2][3]
The Oxford English Dictionary dates the English common name "barn swallow" to 1851,[11] though an earlier instance of the collocation in an English-language context is in Gilbert White's popular book The Natural History of Selborne, originally published in 1789:
The swallow, though called the chimney-swallow, by no means builds altogether in chimnies [sic], but often within barns and out-houses against the rafters ... In Sweden she builds in barns, and is called ladusvala, the barn-swallow.[12]
This suggests that the English name may be a calque on the Swedish term.
There are few taxonomic problems within the genus, but the red-chested swallow—a resident of West Africa, the Congo basin, and Ethiopia—was formerly treated as a subspecies of barn swallow. The red-chested swallow is slightly smaller than its migratory relative, has a narrower blue breast-band, and (in the adult) has shorter tail streamers. In flight, it looks paler underneath than barn swallow.[8]
Six subspecies of barn swallow are generally recognised. In eastern Asia, a number of additional or alternative forms have been proposed, including saturata by Robert Ridgway in 1883,[13] kamtschatica by Benedykt Dybowski in 1883,[14] ambigua by Erwin Stresemann[15] and mandschurica by Wilhelm Meise in 1934.[13] Given the uncertainties over the validity of these forms,[14][16] this article follows the treatment of Turner and Rose.[2]
The short wings, red belly and incomplete breast band of H. r. tytleri are also found in H. r. erythrogaster, and DNA analyses show that barn swallows from North America colonised the Baikal region of Siberia, a dispersal direction opposite to that for most changes in distribution between North America and Eurasia.[26]
The preferred habitat of the barn swallow is open country with low vegetation, such as pasture, meadows and farmland, preferably with nearby water. This swallow avoids heavily wooded or precipitous areas and densely built-up locations. The presence of accessible open structures such as barns, stables, or culverts to provide nesting sites, and of exposed locations such as wires, roof ridges or bare branches for perching, is also important in the bird's selection of its breeding range.[4]
It breeds in the Northern Hemisphere from sea level to typically 2,700 m (8,900 ft),[27] but to 3,000 m (9,800 ft) in the Caucasus[4] and North America,[28] and it is absent only from deserts and the cold northernmost parts of the continents. Over much of its range, it avoids towns, and in Europe is replaced in urban areas by the house martin. However, in Honshū, Japan, the barn swallow is a more urban bird, with the red-rumped swallow (Cecropis daurica) replacing it as the rural species.[2]
In winter, the barn swallow is cosmopolitan in its choice of habitat, avoiding only dense forests and deserts.[29] It is most common in open, low vegetation habitats, such as savanna and ranch land, and in Venezuela, South Africa and Trinidad and Tobago it is described as being particularly attracted to burnt or harvested sugarcane fields and the waste from the cane.[7][30][31] In the absence of suitable roost sites, they may sometimes roost on wires where they are more exposed to predators.[32] Individual birds tend to return to the same wintering locality each year[33] and congregate from a large area to roost in reed beds.[30] These roosts can be extremely large; one in Nigeria had an estimated 1.5 million birds.[34] These roosts are thought to be a protection from predators, and the arrival of roosting birds is synchronised in order to overwhelm predators like African hobbies. The barn swallow has been recorded as breeding in the more temperate parts of its winter range, such as the mountains of Thailand and in central Argentina.[2][35]
Migration of barn swallows between Britain and South Africa was first established on 23 December 1912, when a bird that had been ringed by James Masefield at a nest in Staffordshire was found in Natal.[36] As would be expected for a long-distance migrant, this bird has occurred as a vagrant to such distant areas as Hawaii, Bermuda, Greenland, Tristan da Cunha, the Falkland Islands,[2] and even Antarctica.[37]
The barn swallow is similar in its habits to other aerial insectivores, including other swallow species and the unrelated swifts. It is not a particularly fast flier, with a flight speed estimated at about 11 metres per second (25 mph) and up to 20 metres per second (45 mph), and a wing-beat rate of approximately 5 beats per second, up to 7–9.[38][39]
The barn swallow typically feeds in open areas[40] 7–8 m (23–26 ft) above shallow water or the ground, often following animals, humans or farm machinery to catch disturbed insects, but it will occasionally pick prey items from the water surface, walls and plants.[4] In the breeding areas, large flies make up around 70% of the diet, with aphids also a significant component. However, in Europe, the barn swallow consumes fewer aphids than the house or sand martins.[4] On the wintering grounds, Hymenoptera, especially flying ants, are important food items.[2] When egg-laying, barn swallows hunt in pairs, but otherwise will often form large flocks.[2]
The amount of food a clutch will get depends on the size of the clutch, with larger clutches getting more food on average. The timing of a clutch also determines the food given; later broods get food that is smaller in size compared to earlier broods. This is because larger insects are too far away from the nest to be profitable in terms of energy expenditure.[41]
Isotope studies have shown that wintering populations may utilise different feeding habitats, with British breeders feeding mostly over grassland, whereas Swiss birds utilised woodland more.[42] Another study showed that a single population breeding in Denmark actually wintered in two separate areas.[43]
The barn swallow drinks by skimming low over lakes or rivers and scooping up water with its open mouth.[28] This bird bathes in a similar fashion, dipping into the water for an instant while in flight.[33]
Swallows gather in communal roosts after breeding, sometimes thousands strong. Reed beds are regularly favoured, with the birds swirling en masse before swooping low over the reeds.[6] Reed beds are an important source of food prior to and whilst on migration; although the barn swallow is a diurnal migrant that can feed on the wing whilst it travels low over ground or water, the reed beds enable fat deposits to be established or replenished.[44]
The male barn swallow returns to the breeding grounds before the females and selects a nest site, which is then advertised to females with a circling flight and song.[4] Plumage may be used to advertise: in some populations, such as the subspecies H. r. gutturalis, darker ventral plumage in males is associated with higher breeding success. In other populations,[45] the breeding success of the male is related to the length of the tail streamers, with longer streamers being more attractive to the female.[4][46] Males with longer tail feathers are generally longer-lived and more disease resistant, so females gain an indirect fitness benefit from this form of selection, since longer tail feathers indicate a genetically stronger individual which will produce offspring with enhanced vitality.[47] Males in northern Europe have longer tails than those further south; whereas in Spain the male's tail streamers are only 5% longer than the female's, in Finland the difference is 20%. In Denmark, the average male tail length increased by 9% between 1984 and 2004, but it is possible that climatic changes may lead in the future to shorter tails if summers become hot and dry.[48]
Males with long streamers also have larger white tail spots, and since feather-eating bird lice prefer white feathers, large white tail spots without parasite damage again demonstrate breeding quality; there is a positive association between spot size and the number of offspring produced each season.[49]
The breeding season of the barn swallow is variable: in the southern part of the range, it usually runs from February or March to early or mid-September, although some late second and third broods finish in October. In the northern part of the range, it usually starts in late May to early June and ends at the same time as the breeding season of the southernmost birds.[50]
Both sexes defend the nest, but the male is particularly aggressive and territorial.[2] Once established, pairs stay together to breed for life, but extra-pair copulation is common, making this species genetically polygamous, despite being socially monogamous.[51] Males guard females actively to avoid being cuckolded.[52] Males may use deceptive alarm calls to disrupt extra-pair copulation attempts toward their mates.[53]
As its name implies, the barn swallow typically nests inside accessible buildings such as barns and stables, or under bridges and wharves.[54] Before man-made sites became common, it nested on cliff faces or in caves, but this is now rare.[2] The neat cup-shaped nest is placed on a beam or against a suitable vertical projection. It is constructed by both sexes, although more often by the female, with mud pellets collected in their beaks, and is lined with grasses, feathers, algae[54] or other soft materials.[2] The nest-building ability of the male is also sexually selected: females lay more eggs, and at an earlier date, with males that are better at nest construction.[55] Barn swallows may nest colonially where sufficient high-quality nest sites are available, and within a colony, each pair defends a territory around the nest which, for the European subspecies, is 4 to 8 m2 (43 to 86 sq ft) in size. Colony size tends to be larger in North America.[28]
In North America at least, barn swallows frequently engage in a mutualist relationship with ospreys. Barn swallows will build their nest below an osprey nest, receiving protection from other birds of prey that are repelled by the exclusively fish-eating ospreys. The ospreys are alerted to the presence of these predators by the alarm calls of the swallows.[28]
There are normally two broods, with the original nest being reused for the second brood and being repaired and reused in subsequent years. The female lays two to seven, but typically four or five, reddish-spotted white eggs.[2] The clutch size is influenced by latitude, with clutch sizes of northern populations being higher on average than southern populations.[56] The eggs are 20 mm × 14 mm (0.79 in × 0.55 in) in size, and weigh 1.9 g (0.067 oz), of which 5% is shell. In Europe, the female does almost all the incubation, but in North America the male may incubate up to 25% of the time. The incubation period is normally 14–19 days, with another 18–23 days before the altricial chicks fledge. The fledged young stay with, and are fed by, the parents for about a week after leaving the nest. Occasionally, first-year birds from the first brood will assist in feeding the second brood.[2] Compared to those from early broods, juvenile barn swallows from late broods have been found to migrate at a younger age, fuel less efficiently during migration and have lower return rates the following year.[57]
The barn swallow will mob intruders such as cats or accipiters that venture too close to its nest, often flying very close to the threat.[47] Adult barn swallows have few predators, but some are taken by accipiters, falcons, and owls. Brood parasitism by cowbirds in North America or cuckoos in Eurasia is rare.[4][28]
Hatching success is 90% and the fledging survival rate is 70–90%. Average mortality is 70–80% in the first year and 40–70% for the adult. Although the record age is more than 11 years, most survive less than four years.[2] Barn swallow nestlings have prominent red gapes, a feature shown to induce feeding by parent birds. An experiment in manipulating brood size and immune system showed the vividness of the gape was positively correlated with T-cell–mediated immunocompetence, and that larger brood size and injection with an antigen led to a less vivid gape.[58]
The barn swallow has been recorded as hybridising with the cliff swallow (Petrochelidon pyrrhonota) and the cave swallow (P. fulva) in North America, and the house martin (Delichon urbicum) in Eurasia, the cross with the latter being one of the most common passerine hybrids.[47]
Eggs in the Muséum de Toulouse
A chick less than an hour after being born
Chicks and eggs in a nest with horse hair lining
Older chicks in nest
Juvenile being fed
Barn swallows (and other small passerines) often have characteristic feather holes on their wing and tail feathers. These holes were suggested as being caused by avian lice such as Machaerilaemus malleus and Myrsidea rustica, although other studies suggest that they are mainly caused by species of Brueelia. Several other species of lice have been described from barn swallow hosts, including Brueelia domestica and Philopterus microsomaticus.[59][60] The avian lice prefer to feed on white tail spots, and they are generally found more numerously on short-tailed males, indicating the function of unbroken white tail spots as a measure of quality.[61] In Texas, the swallow bug (Oeciacus vicarius), which is common on species such as the cliff swallow, is also known to infest barn swallows.[62]
Predatory bats such as the greater false vampire bat are known to prey on barn swallows.[63] Swallows at their communal roosts attract predators and several falcon species make use of these opportunities. Falcon species confirmed as predators include the peregrine falcon[64] and the African hobby.[34]
The barn swallow has an enormous range, with an estimated global extent of 51,700,000 km2 (20,000,000 sq mi) and a population of 190 million individuals. The species is evaluated as least concern on the 2007 IUCN Red List,[1] and has no special status under the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), which regulates international trade in specimens of wild animals and plants.[28]
This is a species that has greatly benefited historically from forest clearance, which has created the open habitats it prefers, and from human habitation, which has given it an abundance of safe man-made nest sites. There have been local declines due to the use of DDT in Israel in the 1950s, competition for nest sites with house sparrows in the US in the 19th century, and an ongoing gradual decline in numbers in parts of Europe and Asia due to agricultural intensification, which reduces the availability of insect food. However, there was an increase in the population in North America during the 20th century with the greater availability of nesting sites and subsequent range expansion, including the colonisation of northern Alberta.[2]
A specific threat to wintering birds from the European populations is the transformation by the South African government of a light aircraft runway near Durban into an international airport for the 2010 FIFA World Cup. The roughly 250 m (270 yd) square Mount Moreland reed bed is a night roost for more than three million barn swallows, which represent 1% of the global population and 8% of the European breeding population. The reed bed lies on the flight path of aircraft using the proposed La Mercy airport, and there were fears that it would be cleared because the birds could threaten aircraft safety.[65][66] However, following detailed evaluation, advanced radar technology will be installed to enable planes using the airport to be warned of bird movements and, if necessary, take appropriate measures to avoid the flocks.[30]
Climate change may affect the barn swallow; drought causes weight loss and slow feather regrowth, and the expansion of the Sahara will make it a more formidable obstacle for migrating European birds. Hot dry summers will reduce the availability of insect food for chicks. Conversely, warmer springs may lengthen the breeding season and result in more chicks, and the opportunity to use nest sites outside buildings in the north of the range might also lead to more offspring.[48]
The barn swallow is an attractive bird that feeds on flying insects and has therefore been tolerated by humans when it shares their buildings for nesting. As one of the earlier migrants, this conspicuous species is also seen as an early sign of summer's approach.[67]
In the Old World, the barn swallow appears to have used man-made structures and bridges since time immemorial. An early reference is in Virgil's Georgics (29 BC), "Ante garrula quam tignis nidum suspendat hirundo" (Before the twittering swallow hangs its nest from the rafters).[68]
Many cattle farmers believed that swallows spread Salmonella infections; however, a study in Sweden showed no evidence of the birds being reservoirs of the bacteria.[69]
Many literary references are based on the barn swallow's northward migration as a symbol of spring or summer. The proverb about the necessity for more than one piece of evidence goes back at least to Aristotle's Nicomachean Ethics: "For as one swallow or one day does not make a spring, so one day or a short time does not make a fortunate or happy man."[67]
The barn swallow symbolises the coming of spring and thus love in the Pervigilium Veneris, a late Latin poem. In his poem "The Waste Land", T. S. Eliot quoted the line "Quando fiam uti chelidon [ut tacere desinam]?" ("When will I be like the swallow, so that I can stop being silent?") This refers to the myth of Philomela in which she turns into a nightingale, and her sister Procne into a swallow.[70] On the other hand, an image of the assembly of swallows for their southward migration concludes John Keats's ode "To Autumn".
The swallow is cited in several of William Shakespeare's plays for the swiftness of its flight, with "True hope is swift, and flies with swallow's wings" from Act 5 of Richard III, and "I have horse will follow where the game Makes way, and run like swallows o'er the plain." from the second act of Titus Andronicus. Shakespeare references the annual migration of the species in The Winter's Tale, Act 4: "Daffodils, That come before the swallow dares, and take The winds of March with beauty".
A swallow is the main character in Oscar Wilde's story, The Happy Prince.
Gilbert White studied the barn swallow in detail in his pioneering work The Natural History of Selborne, but even this careful observer was uncertain whether it migrated or hibernated in winter.[12] Elsewhere, its long journeys have been well observed, and a swallow tattoo is popular amongst nautical men as a symbol of a safe return; the tradition was that a mariner had a tattoo of this fellow wanderer after sailing 5,000 nmi (9,300 km; 5,800 mi). A second swallow would be added after 10,000 nmi (19,000 km; 12,000 mi) at sea.[71] In the past, the tolerance for this beneficial insectivore was reinforced by superstitions regarding damage to the barn swallow's nest. Such an act might lead to cows giving bloody milk, or no milk at all, or to hens ceasing to lay.[5] This may be a factor in the longevity of swallows' nests. Survival, with suitable annual refurbishment, for 10–15 years is regular, and one nest was reported to have been occupied for 48 years.[5]
It is depicted as the martlet, merlette or merlot in heraldry, where it represents younger sons who have no lands. It is also depicted without feet, as this was a common belief at the time.[72] As a result of a campaign by ornithologists, the barn swallow has been the national bird of Estonia since 23 June 1960.[73][74]
Barn swallows are one of the most depicted birds on postage stamps around the world.[75][76][77]
en/2575.html.txt
ADDED
@@ -0,0 +1,111 @@
(Content identical to en/2574.html.txt above.)
en/2576.html.txt
ADDED
The diff for this file is too large to render.
See raw diff
en/2577.html.txt
ADDED
The diff for this file is too large to render.
See raw diff
en/2578.html.txt
ADDED
@@ -0,0 +1,194 @@
The history of Earth concerns the development of planet Earth from its formation to the present day.[1][2] Nearly all branches of natural science have contributed to understanding of the main events of Earth's past, characterized by constant geological change and biological evolution.
The geological time scale (GTS), as defined by international convention,[3] depicts the large spans of time from the beginning of the Earth to the present, and its divisions chronicle some definitive events of Earth history. (In the graphic: Ga means "billion years ago"; Ma, "million years ago".) Earth formed around 4.54 billion years ago, approximately one-third the age of the universe, by accretion from the solar nebula.[4][5][6] Volcanic outgassing probably created the primordial atmosphere and then the ocean, but the early atmosphere contained almost no oxygen. Much of the Earth was molten because of frequent collisions with other bodies which led to extreme volcanism. While the Earth was in its earliest stage (Early Earth), a giant impact collision with a planet-sized body named Theia is thought to have formed the Moon. Over time, the Earth cooled, causing the formation of a solid crust, and allowing liquid water on the surface.
The Hadean eon represents the time before a reliable (fossil) record of life; it began with the formation of the planet and ended 4.0 billion years ago. The following Archean and Proterozoic eons produced the beginnings of life on Earth and its earliest evolution. The succeeding eon is the Phanerozoic, divided into three eras: the Palaeozoic, an era of arthropods, fishes, and the first life on land; the Mesozoic, which spanned the rise, reign, and climactic extinction of the non-avian dinosaurs; and the Cenozoic, which saw the rise of mammals. Recognizable humans emerged at most 2 million years ago, a vanishingly small period on the geological scale.
The earliest undisputed evidence of life on Earth dates at least from 3.5 billion years ago,[7][8][9] during the Eoarchean Era, after a geological crust started to solidify following the earlier molten Hadean Eon. There are microbial mat fossils such as stromatolites found in 3.48 billion-year-old sandstone discovered in Western Australia.[10][11][12] Other early physical evidence of a biogenic substance is graphite in 3.7 billion-year-old metasedimentary rocks discovered in southwestern Greenland[13] as well as "remains of biotic life" found in 4.1 billion-year-old rocks in Western Australia.[14][15] According to one of the researchers, "If life arose relatively quickly on Earth … then it could be common in the universe."[14]
Photosynthetic organisms appeared between 3.2 and 2.4 billion years ago and began enriching the atmosphere with oxygen. Life remained mostly small and microscopic until about 580 million years ago, when complex multicellular life arose, developed over time, and culminated in the Cambrian Explosion about 541 million years ago. This sudden diversification of life forms produced most of the major phyla known today, and divided the Proterozoic Eon from the Cambrian Period of the Paleozoic Era. It is estimated that 99 percent of all species that ever lived on Earth, over five billion,[16] have gone extinct.[17][18] Estimates on the number of Earth's current species range from 10 million to 14 million,[19] of which about 1.2 million are documented, but over 86 percent have not been described.[20] However, it was recently claimed that 1 trillion species currently live on Earth, with only one-thousandth of one percent described.[21]
The Earth's crust has constantly changed since its formation, as has life since its first appearance. Species continue to evolve, taking on new forms, splitting into daughter species, or going extinct in the face of ever-changing physical environments. The process of plate tectonics continues to shape the Earth's continents and oceans and the life they harbor. Human activity is now a dominant force affecting global change, harming the biosphere, the Earth's surface, hydrosphere, and atmosphere with the loss of wild lands, over-exploitation of the oceans, production of greenhouse gases, degradation of the ozone layer, and general degradation of soil, air, and water quality.
In geochronology, time is generally measured in mya (million years ago), each unit representing one million years before the present. The history of Earth is divided into four great eons, starting 4,540 mya with the formation of the planet. The boundaries between these divisions mark major changes in Earth's composition, climate, and life. Each eon is subsequently divided into eras, which in turn are divided into periods, which are further divided into epochs.
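To make the nesting concrete, here is a minimal sketch in Python. The one branch that is filled in uses division names that appear later in this article; the empty branches and the epochs_of helper are purely illustrative, and the structure is a sketch rather than a complete or authoritative time scale (a real one would also record numeric boundary ages for every division).

```python
# Illustrative fragment of the geologic time scale as a nested mapping:
# eon -> era -> period -> list of epochs.
geologic_time_scale = {
    "Phanerozoic": {
        "Cenozoic": {
            "Paleogene": ["Paleocene", "Eocene", "Oligocene"],
            "Neogene": ["Miocene", "Pliocene"],
            "Quaternary": ["Pleistocene", "Holocene"],
        },
        "Mesozoic": {},   # Triassic, Jurassic, Cretaceous ...
        "Paleozoic": {},  # Cambrian through Permian ...
    },
    "Proterozoic": {},
    "Archean": {},
    "Hadean": {},
}

def epochs_of(eon: str, era: str, period: str) -> list:
    """Return the epochs recorded for a given eon/era/period path."""
    return geologic_time_scale.get(eon, {}).get(era, {}).get(period, [])

print(epochs_of("Phanerozoic", "Cenozoic", "Paleogene"))
# -> ['Paleocene', 'Eocene', 'Oligocene']
```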
The history of the Earth can be organized chronologically according to the geologic time scale, which is split into intervals based on stratigraphic analysis.[2][22]
The following four timelines show the geologic time scale. The first shows the entire time from the formation of the Earth to the present, but this gives little space for the most recent eon. Therefore, the second timeline shows an expanded view of the most recent eon. In a similar way, the most recent era is expanded in the third timeline, and the most recent period is expanded in the fourth timeline.
The standard model for the formation of the Solar System (including the Earth) is the solar nebula hypothesis.[23] In this model, the Solar System formed from a large, rotating cloud of interstellar dust and gas called the solar nebula. It was composed of hydrogen and helium created shortly after the Big Bang 13.8 Ga (billion years ago) and heavier elements ejected by supernovae. About 4.5 Ga, the nebula began a contraction that may have been triggered by the shock wave from a nearby supernova.[24] A shock wave would have also made the nebula rotate. As the collapsing cloud spun faster, its angular momentum, gravity, and inertia flattened it into a protoplanetary disk perpendicular to its axis of rotation. Small perturbations due to collisions and the angular momentum of other large debris created the means by which kilometer-sized protoplanets began to form, orbiting the nebular center.[25]
The center of the nebula, not having much angular momentum, collapsed rapidly, the compression heating it until nuclear fusion of hydrogen into helium began. After more contraction, a T Tauri star ignited and evolved into the Sun. Meanwhile, in the outer part of the nebula gravity caused matter to condense around density perturbations and dust particles, and the rest of the protoplanetary disk began separating into rings. In a process known as runaway accretion, successively larger fragments of dust and debris clumped together to form planets.[25] Earth formed in this manner about 4.54 billion years ago (with an uncertainty of 1%)[26][27][4][28] and was largely completed within 10–20 million years.[29] The solar wind of the newly formed T Tauri star cleared out most of the material in the disk that had not already condensed into larger bodies. The same process is expected to produce accretion disks around virtually all newly forming stars in the universe, some of which yield planets.[30]
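As a quick check on the quoted error bar, a 1% uncertainty on an age of 4.54 billion years corresponds to roughly ±45 million years:

$$0.01 \times 4.54 \times 10^{9}\ \text{yr} \approx 4.5 \times 10^{7}\ \text{yr} = 45\ \text{Myr}.$$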
The proto-Earth grew by accretion until its interior was hot enough to melt the heavy, siderophile metals. Having higher densities than the silicates, these metals sank. This so-called iron catastrophe resulted in the separation of a primitive mantle and a (metallic) core only 10 million years after the Earth began to form, producing the layered structure of Earth and setting up the formation of Earth's magnetic field.[31] J.A. Jacobs[32] was the first to suggest that Earth's inner core—a solid center distinct from the liquid outer core—is freezing and growing out of the liquid outer core due to the gradual cooling of Earth's interior (about 100 degrees Celsius per billion years[33]).
The first eon in Earth's history, the Hadean, begins with the Earth's formation and is followed by the Archean eon at 3.8 Ga.[2]:145 The oldest rocks found on Earth date to about 4.0 Ga, and the oldest detrital zircon crystals in rocks to about 4.4 Ga,[34][35][36] soon after the formation of the Earth's crust and the Earth itself. The giant impact hypothesis for the Moon's formation states that shortly after formation of an initial crust, the proto-Earth was impacted by a smaller protoplanet, which ejected part of the mantle and crust into space and created the Moon.[37][38][39]
From crater counts on other celestial bodies, it is inferred that a period of intense meteorite impacts, called the Late Heavy Bombardment, began about 4.1 Ga, and concluded around 3.8 Ga, at the end of the Hadean.[40] In addition, volcanism was severe due to the large heat flow and geothermal gradient.[41] Nevertheless, detrital zircon crystals dated to 4.4 Ga show evidence of having undergone contact with liquid water, suggesting that the Earth already had oceans or seas at that time.[34]
By the beginning of the Archean, the Earth had cooled significantly. Present life forms could not have survived at Earth's surface, because the Archean atmosphere lacked oxygen and hence had no ozone layer to block ultraviolet light. Nevertheless, it is believed that primordial life began to evolve by the early Archean, with candidate fossils dated to around 3.5 Ga.[42] Some scientists even speculate that life could have begun during the early Hadean, as far back as 4.4 Ga, surviving the possible Late Heavy Bombardment period in hydrothermal vents below the Earth's surface.[43]
Earth's only natural satellite, the Moon, is larger relative to its planet than any other satellite in the Solar System.[nb 1] During the Apollo program, rocks from the Moon's surface were brought to Earth. Radiometric dating of these rocks shows that the Moon is 4.53 ± 0.01 billion years old,[46] formed at least 30 million years after the start of the Solar System.[47] New evidence suggests the Moon formed even later, 4.48 ± 0.02 Ga, or 70–110 million years after the start of the Solar System.[48]
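Radiometric ages like these rest on the standard decay-clock relation; the following is the generic formula, not the specific isotope systems used in the cited studies. If a parent isotope with half-life $t_{1/2}$ has $P$ atoms remaining in a sample and its decay has produced $D$ daughter atoms, the time since the rock closed to exchange is

$$t = \frac{1}{\lambda}\,\ln\!\left(1 + \frac{D}{P}\right), \qquad \lambda = \frac{\ln 2}{t_{1/2}}.$$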
Theories for the formation of the Moon must explain its late formation as well as the following facts. First, the Moon has a low density (3.3 times that of water, compared to 5.5 for the Earth[49]) and a small metallic core. Second, there is virtually no water or other volatiles on the Moon. Third, the Earth and Moon have the same oxygen isotopic signature (relative abundance of the oxygen isotopes). Of the theories proposed to account for these phenomena, one is widely accepted: The giant impact hypothesis proposes that the Moon originated after a body the size of Mars (sometimes named Theia[47]) struck the proto-Earth a glancing blow.[1]:256[50][51]
The collision released about 100 million times more energy than the more recent Chicxulub impact that is believed to have caused the extinction of the non-avian dinosaurs. It was enough to vaporize some of the Earth's outer layers and melt both bodies.[50][1]:256 A portion of the mantle material was ejected into orbit around the Earth. The giant impact hypothesis predicts that the Moon was depleted of metallic material,[52] explaining its abnormal composition.[53] The ejecta in orbit around the Earth could have condensed into a single body within a couple of weeks. Under the influence of its own gravity, the ejected material became a more spherical body: the Moon.[54]
Mantle convection, the process that drives plate tectonics, is a result of heat flow from the Earth's interior to the Earth's surface.[55]:2 It involves the creation of rigid tectonic plates at mid-oceanic ridges. These plates are destroyed by subduction into the mantle at subduction zones. During the early Archean (about 3.0 Ga) the mantle was much hotter than today, probably around 1,600 °C (2,910 °F),[56]:82 so convection in the mantle was faster. Although a process similar to present-day plate tectonics did occur, this would have gone faster too. It is likely that during the Hadean and Archean, subduction zones were more common, and therefore tectonic plates were smaller.[1]:258[57]
The initial crust, formed when the Earth's surface first solidified, totally disappeared from a combination of this fast Hadean plate tectonics and the intense impacts of the Late Heavy Bombardment. However, it is thought that it was basaltic in composition, like today's oceanic crust, because little crustal differentiation had yet taken place.[1]:258 The first larger pieces of continental crust, which is a product of differentiation of lighter elements during partial melting in the lower crust, appeared at the end of the Hadean, about 4.0 Ga. What is left of these first small continents are called cratons. These pieces of late Hadean and early Archean crust form the cores around which today's continents grew.[58]
The oldest rocks on Earth are found in the North American craton of Canada. They are tonalites from about 4.0 Ga. They show traces of metamorphism by high temperature, but also sedimentary grains that have been rounded by erosion during transport by water, showing that rivers and seas existed then.[59] Cratons consist primarily of two alternating types of terranes. The first are so-called greenstone belts, consisting of low-grade metamorphosed sedimentary rocks. These "greenstones" are similar to the sediments today found in oceanic trenches, above subduction zones. For this reason, greenstones are sometimes seen as evidence for subduction during the Archean. The second type is a complex of felsic magmatic rocks. These rocks are mostly tonalite, trondhjemite or granodiorite, types of rock similar in composition to granite (hence such terranes are called TTG-terranes). TTG-complexes are seen as the relicts of the first continental crust, formed by partial melting in basalt.[60]:Chapter 5
Earth is often described as having had three atmospheres. The first atmosphere, captured from the solar nebula, was composed of light (atmophile) elements, mostly hydrogen and helium. A combination of the solar wind and Earth's heat would have driven off this atmosphere, as a result of which the atmosphere is now depleted of these elements compared to cosmic abundances.[62] After the impact which created the Moon, the molten Earth released volatile gases; and later more gases were released by volcanoes, completing a second atmosphere rich in greenhouse gases but poor in oxygen.[1]:256 Finally, the third atmosphere, rich in oxygen, emerged when bacteria began to produce oxygen about 2.8 Ga.[63]:83–84, 116–117
In early models for the formation of the atmosphere and ocean, the second atmosphere was formed by outgassing of volatiles from the Earth's interior. Now it is considered likely that many of the volatiles were delivered during accretion by a process known as impact degassing in which incoming bodies vaporize on impact. The ocean and atmosphere would, therefore, have started to form even as the Earth formed.[64] The new atmosphere probably contained water vapor, carbon dioxide, nitrogen, and smaller amounts of other gases.[65]
Planetesimals at a distance of 1 astronomical unit (AU), the distance of the Earth from the Sun, probably did not contribute any water to the Earth because the solar nebula was too hot for ice to form and the hydration of rocks by water vapor would have taken too long.[64][66] The water must have been supplied by meteorites from the outer asteroid belt and some large planetary embryos from beyond 2.5 AU.[64][67] Comets may also have contributed. Though most comets are today in orbits farther away from the Sun than Neptune, computer simulations show that they were originally far more common in the inner parts of the Solar System.[59]:130–132
As the Earth cooled, clouds formed. Rain created the oceans. Recent evidence suggests the oceans may have begun forming as early as 4.4 Ga.[34] By the start of the Archean eon, they already covered much of the Earth. This early formation has been difficult to explain because of a problem known as the faint young Sun paradox. Stars are known to get brighter as they age, and at the time of its formation the Sun would have been emitting only about 70% of its current power; over the last 4.5 billion years its output has therefore risen by roughly 43% relative to that early value (an increase equal to 30% of today's luminosity).[68] Many models indicate that the Earth would have been covered in ice.[69][64] A likely solution is that there was enough carbon dioxide and methane to produce a greenhouse effect. The carbon dioxide would have been produced by volcanoes and the methane by early microbes. Another greenhouse gas, ammonia, would have been ejected by volcanoes but quickly destroyed by ultraviolet radiation.[63]:83
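The arithmetic behind those two percentages, writing $L_\odot$ for the present luminosity and $0.7\,L_\odot$ for the early Sun:

$$\frac{L_\odot - 0.7\,L_\odot}{0.7\,L_\odot} = \frac{0.3}{0.7} \approx 0.43, \qquad \frac{L_\odot - 0.7\,L_\odot}{L_\odot} = 0.30.$$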
One of the reasons for interest in the early atmosphere and ocean is that they form the conditions under which life first arose. There are many models, but little consensus, on how life emerged from non-living chemicals; chemical systems created in the laboratory fall well short of the minimum complexity for a living organism.[70][71]
The first step in the emergence of life may have been chemical reactions that produced many of the simpler organic compounds, including nucleobases and amino acids, that are the building blocks of life. An experiment in 1953 by Stanley Miller and Harold Urey showed that such molecules could form in an atmosphere of water, methane, ammonia and hydrogen with the aid of sparks to mimic the effect of lightning.[72] Although atmospheric composition was probably different from that used by Miller and Urey, later experiments with more realistic compositions also managed to synthesize organic molecules.[73] Computer simulations show that extraterrestrial organic molecules could have formed in the protoplanetary disk before the formation of the Earth.[74]
Additional complexity could have been reached from at least three possible starting points: self-replication, an organism's ability to produce offspring that are similar to itself; metabolism, its ability to feed and repair itself; and external cell membranes, which allow food to enter and waste products to leave, but exclude unwanted substances.[75]
Even the simplest members of the three modern domains of life use DNA to record their "recipes" and a complex array of RNA and protein molecules to "read" these instructions and use them for growth, maintenance, and self-replication.
The discovery that a kind of RNA molecule called a ribozyme can catalyze both its own replication and the construction of proteins led to the hypothesis that earlier life-forms were based entirely on RNA.[76] They could have formed an RNA world in which there were individuals but no species, as mutations and horizontal gene transfers would have meant that the offspring in each generation were quite likely to have different genomes from those that their parents started with.[77] RNA would later have been replaced by DNA, which is more stable and therefore can build longer genomes, expanding the range of capabilities a single organism can have.[78] Ribozymes remain as the main components of ribosomes, the "protein factories" of modern cells.[79]
Although short, self-replicating RNA molecules have been artificially produced in laboratories,[80] doubts have been raised about whether natural non-biological synthesis of RNA is possible.[81][82][83] The earliest ribozymes may have been formed of simpler nucleic acids such as PNA, TNA or GNA, which would have been replaced later by RNA.[84][85] Other pre-RNA replicators have been posited, including crystals[86]:150 and even quantum systems.[87]
In 2003 it was proposed that porous metal sulfide precipitates would assist RNA synthesis at about 100 °C (212 °F) and at ocean-bottom pressures near hydrothermal vents. In this hypothesis, the proto-cells would be confined in the pores of the metal substrate until the later development of lipid membranes.[88]
Another long-standing hypothesis is that the first life was composed of protein molecules. Amino acids, the building blocks of proteins, are easily synthesized in plausible prebiotic conditions, as are small peptides (polymers of amino acids) that make good catalysts.[89]:295–297 A series of experiments starting in 1997 showed that amino acids and peptides could form in the presence of carbon monoxide and hydrogen sulfide with iron sulfide and nickel sulfide as catalysts. Most of the steps in their assembly required temperatures of about 100 °C (212 °F) and moderate pressures, although one stage required 250 °C (482 °F) and a pressure equivalent to that found under 7 kilometers (4.3 mi) of rock. Hence, self-sustaining synthesis of proteins could have occurred near hydrothermal vents.[90]
A difficulty with the metabolism-first scenario is finding a way for organisms to evolve. Without the ability to replicate as individuals, aggregates of molecules would have "compositional genomes" (counts of molecular species in the aggregate) as the target of natural selection. However, a recent model shows that such a system is unable to evolve in response to natural selection.[91]
It has been suggested that double-walled "bubbles" of lipids like those that form the external membranes of cells may have been an essential first step.[92] Experiments that simulated the conditions of the early Earth have reported the formation of lipids, and these can spontaneously form liposomes, double-walled "bubbles", and then reproduce themselves. Although they are not intrinsically information-carriers as nucleic acids are, they would be subject to natural selection for longevity and reproduction. Nucleic acids such as RNA might then have formed more easily within the liposomes than they would have outside.[93]
Some clays, notably montmorillonite, have properties that make them plausible accelerators for the emergence of an RNA world: they grow by self-replication of their crystalline pattern, are subject to an analog of natural selection (as the clay "species" that grows fastest in a particular environment rapidly becomes dominant), and can catalyze the formation of RNA molecules.[94] Although this idea has not become the scientific consensus, it still has active supporters.[95]:150–158[86]
Research in 2003 reported that montmorillonite could also accelerate the conversion of fatty acids into "bubbles", and that the bubbles could encapsulate RNA attached to the clay. Bubbles can then grow by absorbing additional lipids and dividing. The formation of the earliest cells may have been aided by similar processes.[96]
A similar hypothesis presents self-replicating iron-rich clays as the progenitors of nucleotides, lipids and amino acids.[97]
It is believed that of this multiplicity of protocells, only one line survived. Current phylogenetic evidence suggests that the last universal ancestor (LUA) lived during the early Archean eon, perhaps 3.5 Ga or earlier.[98][99] This LUA cell is the ancestor of all life on Earth today. It was probably a prokaryote, possessing a cell membrane and probably ribosomes, but lacking a nucleus or membrane-bound organelles such as mitochondria or chloroplasts. Like modern cells, it used DNA as its genetic code, RNA for information transfer and protein synthesis, and enzymes to catalyze reactions. Some scientists believe that instead of a single organism being the last universal common ancestor, there were populations of organisms exchanging genes by lateral gene transfer.[98]
The Proterozoic eon lasted from 2.5 Ga to 542 Ma (million years ago).[2]:130 In this time span, cratons grew into continents with modern sizes. The change to an oxygen-rich atmosphere was a crucial development. Life developed from prokaryotes into eukaryotes and multicellular forms. The Proterozoic saw a couple of severe ice ages called snowball Earths. After the last Snowball Earth about 600 Ma, the evolution of life on Earth accelerated. About 580 Ma, the Ediacaran biota formed the prelude to the Cambrian Explosion.[citation needed]
The earliest cells absorbed energy and food from the surrounding environment. They used fermentation, the breakdown of more complex compounds into less complex compounds with less energy, and used the energy so liberated to grow and reproduce. Fermentation can only occur in an anaerobic (oxygen-free) environment. The evolution of photosynthesis made it possible for cells to derive energy from the Sun.[100]:377
Most of the life that covers the surface of the Earth depends directly or indirectly on photosynthesis. The most common form, oxygenic photosynthesis, turns carbon dioxide, water, and sunlight into food. It captures the energy of sunlight in energy-rich molecules such as ATP, which then provide the energy to make sugars. To supply the electrons in the circuit, hydrogen is stripped from water, leaving oxygen as a waste product.[101] Some organisms, including purple bacteria and green sulfur bacteria, use an anoxygenic form of photosynthesis that uses alternatives to hydrogen stripped from water as electron donors; examples are hydrogen sulfide, sulfur and iron. Such extremophile organisms are restricted to otherwise inhospitable environments such as hot springs and hydrothermal vents.[100]:379–382[102]
The simpler anoxygenic form arose about 3.8 Ga, not long after the appearance of life. The timing of oxygenic photosynthesis is more controversial; it had certainly appeared by about 2.4 Ga, but some researchers put it back as far as 3.2 Ga.[101] Oxygenic photosynthesis "probably increased global productivity by at least two or three orders of magnitude".[103][104] Among the oldest remnants of oxygen-producing lifeforms are fossil stromatolites.[103][104][61]
At first, the released oxygen was bound up with limestone, iron, and other minerals. The oxidized iron appears as red layers in geological strata called banded iron formations that formed in abundance during the Siderian period (between 2500 Ma and 2300 Ma).[2]:133 When most of the exposed readily reacting minerals were oxidized, oxygen finally began to accumulate in the atmosphere. Though each cell only produced a minute amount of oxygen, the combined metabolism of many cells over a vast time transformed Earth's atmosphere to its current state. This was Earth's third atmosphere.[105]:50–51[63]:83–84, 116–117
Some oxygen was stimulated by solar ultraviolet radiation to form ozone, which collected in a layer near the upper part of the atmosphere. The ozone layer absorbed, and still absorbs, a significant amount of the ultraviolet radiation that once had passed through the atmosphere. It allowed cells to colonize the surface of the ocean and eventually the land: without the ozone layer, ultraviolet radiation bombarding land and sea would have caused unsustainable levels of mutation in exposed cells.[106][59]:219–220
Photosynthesis had another major impact. Oxygen was toxic; much life on Earth probably died out as its levels rose in what is known as the oxygen catastrophe. Resistant forms survived and thrived, and some developed the ability to use oxygen to increase their metabolism and obtain more energy from the same food.[106]
The natural evolution of the Sun made it progressively more luminous during the Archean and Proterozoic eons; the Sun's luminosity increases 6% every billion years.[59]:165 As a result, the Earth began to receive more heat from the Sun in the Proterozoic eon. However, the Earth did not get warmer. Instead, the geological record suggests it cooled dramatically during the early Proterozoic. Glacial deposits found in South Africa date back to 2.2 Ga, at which time, based on paleomagnetic evidence, they must have been located near the equator. Thus, this glaciation, known as the Huronian glaciation, may have been global. Some scientists suggest this was so severe that the Earth was frozen over from the poles to the equator, a hypothesis called Snowball Earth.[107]
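As a back-of-the-envelope check (treating the quoted 6% per billion years as a constant compounding rate, which is only a rough approximation), this rate is consistent with the faint-young-Sun figures given earlier:

$$L(t) = L_0\,(1.06)^{t/\text{Gyr}}, \qquad (1.06)^{4.5} \approx 1.30,$$

implying an initial luminosity of about $L_\odot/1.30 \approx 0.77\,L_\odot$, the same order as the ~70% estimate for the newly formed Sun.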
The Huronian ice age might have been caused by the increased oxygen concentration in the atmosphere, which reduced the atmospheric concentration of methane (CH4). Methane is a strong greenhouse gas, but with oxygen it reacts to form CO2, a less effective greenhouse gas.[59]:172 When free oxygen became available in the atmosphere, the concentration of methane could have decreased dramatically, enough to counter the effect of the Sun's increasing energy output.[108]
However, the term Snowball Earth is more commonly used to describe later extreme ice ages during the Cryogenian period. There were four periods, each lasting about 10 million years, between 750 and 580 million years ago, when the Earth is thought to have been covered with ice apart from the highest mountains, and average temperatures were about −50 °C (−58 °F).[109] The snowball may have been partly due to the location of the supercontinent Rodinia straddling the Equator. Carbon dioxide dissolves in rainwater to form carbonic acid, which weathers rocks; the dissolved products are then washed out to sea, extracting the greenhouse gas from the atmosphere. When the continents are near the poles, the advance of ice covers the rocks, slowing the reduction in carbon dioxide, but in the Cryogenian the weathering of Rodinia was able to continue unchecked until the ice advanced to the tropics. The process may have finally been reversed by the emission of carbon dioxide from volcanoes or the destabilization of methane gas hydrates. According to the alternative Slushball Earth theory, even at the height of the ice ages there was still open water at the Equator.[110][111]
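The weathering feedback described here is conventionally summarized by the textbook carbonate-silicate (Urey) reactions; the following is that standard summary, not a scheme specific to this article's sources. With wollastonite (CaSiO3) standing in for silicate rock in general:

$$\mathrm{CO_2 + H_2O \rightleftharpoons H_2CO_3}$$
$$\mathrm{CaSiO_3 + 2\,CO_2 + 3\,H_2O \rightarrow Ca^{2+} + 2\,HCO_3^{-} + H_4SiO_4}$$
$$\mathrm{Ca^{2+} + 2\,HCO_3^{-} \rightarrow CaCO_3 + CO_2 + H_2O}$$

The net effect of weathering followed by carbonate burial is to transfer one CO2 molecule from the atmosphere into rock for each silicate unit weathered.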
Modern taxonomy classifies life into three domains. The time of their origin is uncertain. The Bacteria domain probably first split off from the other forms of life (sometimes called Neomura), but this supposition is controversial. Soon after this, by 2 Ga,[112] the Neomura split into the Archaea and the Eukarya. Eukaryotic cells (Eukarya) are larger and more complex than prokaryotic cells (Bacteria and Archaea), and the origin of that complexity is only now becoming known.[citation needed] The earliest fossils possessing features typical of fungi date to the Paleoproterozoic era, some 2.4 billion years ago; these multicellular benthic organisms had filamentous structures capable of anastomosis.[113]
Around this time, the first proto-mitochondrion was formed. A bacterial cell related to today's Rickettsia,[114] which had evolved to metabolize oxygen, entered a larger prokaryotic cell, which lacked that capability. Perhaps the large cell attempted to digest the smaller one but failed (possibly due to the evolution of prey defenses). The smaller cell may have tried to parasitize the larger one. In any case, the smaller cell survived inside the larger cell. Using oxygen, it metabolized the larger cell's waste products and derived more energy. Part of this excess energy was returned to the host. The smaller cell replicated inside the larger one. Soon, a stable symbiosis developed between the large cell and the smaller cells inside it. Over time, the host cell acquired some genes from the smaller cells, and the two kinds became dependent on each other: the larger cell could not survive without the energy produced by the smaller ones, and these, in turn, could not survive without the raw materials provided by the larger cell. The whole cell is now considered a single organism, and the smaller cells are classified as organelles called mitochondria.[115]
A similar event occurred with photosynthetic cyanobacteria[116] entering large heterotrophic cells and becoming chloroplasts.[105]:60–61[117]:536–539 Probably as a result of these changes, a line of cells capable of photosynthesis split off from the other eukaryotes more than 1 billion years ago. There were probably several such inclusion events. Besides the well-established endosymbiotic theory of the cellular origin of mitochondria and chloroplasts, there are theories that cells led to peroxisomes, spirochetes led to cilia and flagella, and that perhaps a DNA virus led to the cell nucleus,[118][119] though none of them are widely accepted.[120]
Archaeans, bacteria, and eukaryotes continued to diversify and to become more complex and better adapted to their environments. Each domain repeatedly split into multiple lineages, although little is known about the history of the archaea and bacteria. Around 1.1 Ga, the supercontinent Rodinia was assembling.[121][122] The plant, animal, and fungi lines had split, though they still existed as solitary cells. Some of these lived in colonies, and gradually a division of labor began to take place; for instance, cells on the periphery might have started to assume different roles from those in the interior. Although the division between a colony with specialized cells and a multicellular organism is not always clear, around 1 billion years ago,[123] the first multicellular plants emerged, probably green algae.[124] Possibly by around 900 Ma,[117]:488 true multicellularity had also evolved in animals.[citation needed]
At first, multicellular animal life probably resembled today's sponges, which have totipotent cells that allow a disrupted organism to reassemble itself.[117]:483–487 As the division of labor was completed in all lines of multicellular organisms, cells became more specialized and more dependent on each other; isolated cells would die.[citation needed]
Reconstructions of tectonic plate movement in the past 250 million years (the Cenozoic and Mesozoic eras) can be made reliably using fitting of continental margins, ocean floor magnetic anomalies and paleomagnetic poles. No ocean crust dates back further than that, so earlier reconstructions are more difficult. Paleomagnetic poles are supplemented by geologic evidence such as orogenic belts, which mark the edges of ancient plates, and past distributions of flora and fauna. The further back in time, the scarcer and harder to interpret the data get and the more uncertain the reconstructions.[125]:370
Throughout the history of the Earth, there have been times when continents collided and formed a supercontinent, which later broke up into new continents. About 1000 to 830 Ma, most continental mass was united in the supercontinent Rodinia.[125]:370[126] Rodinia may have been preceded by Early-Middle Proterozoic continents called Nuna and Columbia.[125]:374[127][128]
After the break-up of Rodinia about 800 Ma, the continents may have formed another short-lived supercontinent around 550 Ma. The hypothetical supercontinent is sometimes referred to as Pannotia or Vendia.[129]:321–322 The evidence for it is a phase of continental collision known as the Pan-African orogeny, which joined the continental masses of current-day Africa, South America, Antarctica and Australia. The existence of Pannotia depends on the timing of the rifting between Gondwana (which included most of the landmass now in the Southern Hemisphere, as well as the Arabian Peninsula and the Indian subcontinent) and Laurentia (roughly equivalent to current-day North America).[125]:374 It is at least certain that by the end of the Proterozoic eon, most of the continental mass lay united in a position around the south pole.[130]
The end of the Proterozoic saw at least two Snowball Earths, so severe that the surface of the oceans may have been completely frozen. This happened about 716.5 and 635 Ma, in the Cryogenian period.[131] The intensity and mechanism of both glaciations are still under investigation and harder to explain than the early Proterozoic Snowball Earth.[132]
Most paleoclimatologists think the cold episodes were linked to the formation of the supercontinent Rodinia.[133] Because Rodinia was centered on the equator, rates of chemical weathering increased and carbon dioxide (CO2) was taken from the atmosphere. Because CO2 is an important greenhouse gas, climates cooled globally.[citation needed]
In the same way, during the Snowball Earths most of the continental surface was covered with permafrost, which decreased chemical weathering again, leading to the end of the glaciations. An alternative hypothesis is that enough carbon dioxide escaped through volcanic outgassing that the resulting greenhouse effect raised global temperatures.[133] Increased volcanic activity resulted from the break-up of Rodinia at about the same time.[citation needed]
The Cryogenian period was followed by the Ediacaran period, which was characterized by a rapid development of new multicellular lifeforms.[134] Whether there is a connection between the end of the severe ice ages and the increase in diversity of life is not clear, but it does not seem coincidental. The new forms of life, called Ediacara biota, were larger and more diverse than ever. Though the taxonomy of most Ediacaran life forms is unclear, some were ancestors of groups of modern life.[135] Important developments were the origin of muscular and neural cells. None of the Ediacaran fossils had hard body parts like skeletons. These first appear after the boundary between the Proterozoic and Phanerozoic eons or Ediacaran and Cambrian periods.[citation needed]
The Phanerozoic is the current eon on Earth, which started approximately 542 million years ago. It consists of three eras: the Paleozoic, Mesozoic, and Cenozoic,[22] and is the time when multi-cellular life greatly diversified into almost all the organisms known today.[136]
The Paleozoic ("old life") era was the first and longest era of the Phanerozoic eon, lasting from 542 to 251 Ma.[22] During the Paleozoic, many modern groups of life came into existence. Life colonized the land, first plants, then animals. Two major extinctions occurred. The continents formed by the break-up of Pannotia and Rodinia at the end of the Proterozoic slowly moved together again, forming the supercontinent Pangaea in the late Paleozoic.[citation needed]
The Mesozoic ("middle life") era lasted from 251 Ma to 66 Ma.[22] It is subdivided into the Triassic, Jurassic, and Cretaceous periods. The era began with the Permian–Triassic extinction event, the most severe extinction event in the fossil record; 95% of the species on Earth died out.[137] It ended with the Cretaceous–Paleogene extinction event that wiped out the non-avian dinosaurs.[citation needed]
The Cenozoic ("new life") era began at 66 Ma,[22] and is subdivided into the Paleogene, Neogene, and Quaternary periods. These three periods are further split into seven epochs: the Paleogene comprises the Paleocene, Eocene, and Oligocene; the Neogene the Miocene and Pliocene; and the Quaternary the Pleistocene and Holocene.[138] Mammals, birds, amphibians, crocodilians, turtles, and lepidosaurs survived the Cretaceous–Paleogene extinction event that killed off the non-avian dinosaurs and many other forms of life, and this is the era during which they diversified into their modern forms.[citation needed]
At the end of the Proterozoic, the supercontinent Pannotia had broken apart into the smaller continents Laurentia, Baltica, Siberia and Gondwana.[139] During periods when continents move apart, more oceanic crust is formed by volcanic activity. Because young volcanic crust is relatively hotter and less dense than old oceanic crust, the ocean floors rise during such periods. This causes the sea level to rise. Therefore, in the first half of the Paleozoic, large areas of the continents were below sea level.[citation needed]
Early Paleozoic climates were warmer than today, but the end of the Ordovician saw a short ice age during which glaciers covered the south pole, where the huge continent Gondwana was situated. Traces of glaciation from this period are only found on former Gondwana. During the Late Ordovician ice age, a few mass extinctions took place, in which many brachiopods, trilobites, Bryozoa and corals disappeared. These marine species could probably not contend with the decreasing temperature of the sea water.[140]
The continents Laurentia and Baltica collided between 450 and 400 Ma, during the Caledonian Orogeny, to form Laurussia (also known as Euramerica).[141] Traces of the mountain belt this collision caused can be found in Scandinavia, Scotland, and the northern Appalachians. In the Devonian period (416–359 Ma),[22] Gondwana and Siberia began to move towards Laurussia. The collision of Siberia with Laurussia caused the Uralian Orogeny, while the collision of Gondwana with Laurussia is called the Variscan or Hercynian Orogeny in Europe and the Alleghenian Orogeny in North America. The latter phase took place during the Carboniferous period (359–299 Ma)[22] and resulted in the formation of the last supercontinent, Pangaea.[60]
By 180 Ma, Pangaea broke up into Laurasia and Gondwana.[citation needed]
The rate of the evolution of life as recorded by fossils accelerated in the Cambrian period (542–488 Ma).[22] The sudden emergence of many new species, phyla, and forms in this period is called the Cambrian Explosion. This burst of biological innovation was unprecedented before and since that time.[59]:229 Whereas the Ediacaran life forms still appear primitive and are not easy to place in any modern group, at the end of the Cambrian most modern phyla were already present. The development of hard body parts such as shells, skeletons or exoskeletons in animals like molluscs, echinoderms, crinoids and arthropods (a well-known group of arthropods from the lower Paleozoic are the trilobites) made the preservation and fossilization of such life forms easier than those of their Proterozoic ancestors. For this reason, much more is known about life in and after the Cambrian than about that of older periods. Some of these Cambrian groups appear complex but are seemingly quite different from modern life; examples are Anomalocaris and Haikouichthys. More recently, however, these seem to have found a place in modern classification.[citation needed]
During the Cambrian, the first vertebrate animals, among them the first fishes, appeared.[117]:357 A creature that could have been the ancestor of the fishes, or was at least closely related to that ancestor, was Pikaia. It had a primitive notochord, a structure that could have developed into a vertebral column later. The first fishes with jaws (Gnathostomata) appeared during the next geological period, the Ordovician. The colonization of new niches resulted in massive body sizes. In this way, fishes of increasing size evolved during the early Paleozoic, such as the titanic placoderm Dunkleosteus, which could grow 7 meters (23 ft) long.[citation needed]
The diversity of life forms did not increase greatly because of a series of mass extinctions that define widespread biostratigraphic units called biomeres.[142] After each extinction pulse, the continental shelf regions were repopulated by similar life forms that may have been evolving slowly elsewhere.[143] By the late Cambrian, the trilobites had reached their greatest diversity and dominated nearly all fossil assemblages.[144]:34
Oxygen accumulation from photosynthesis resulted in the formation of an ozone layer that absorbed much of the Sun's ultraviolet radiation, meaning unicellular organisms that reached land were less likely to die, and prokaryotes began to multiply and become better adapted to survival out of the water. Prokaryote lineages[145] had probably colonized the land as early as 2.6 Ga[146] even before the origin of the eukaryotes. For a long time, the land remained barren of multicellular organisms. The supercontinent Pannotia formed around 600 Ma and then broke apart a short 50 million years later.[147] Fish, the earliest vertebrates, evolved in the oceans around 530 Ma.[117]:354 A major extinction event occurred near the end of the Cambrian period,[148] which ended 488 Ma.[149]
Several hundred million years ago, plants (probably resembling algae) and fungi started growing at the edges of the water, and then out of it.[150]:138–140 The oldest fossils of land fungi and plants date to 480–460 Ma, though molecular evidence suggests the fungi may have colonized the land as early as 1000 Ma and the plants 700 Ma.[151] Initially remaining close to the water's edge, mutations and variations resulted in further colonization of this new environment. The timing of the first animals to leave the oceans is not precisely known: the oldest clear evidence is of arthropods on land around 450 Ma,[152] perhaps thriving and becoming better adapted due to the vast food source provided by the terrestrial plants. There is also unconfirmed evidence that arthropods may have appeared on land as early as 530 Ma.[153]
At the end of the Ordovician period, 443 Ma,[22] additional extinction events occurred, perhaps due to a concurrent ice age.[140] Around 380 to 375 Ma, the first tetrapods evolved from fish.[154] Fins evolved to become limbs that the first tetrapods used to lift their heads out of the water to breathe air. This would let them live in oxygen-poor water, or pursue small prey in shallow water.[154] They may have later ventured on land for brief periods. Eventually, some of them became so well adapted to terrestrial life that they spent their adult lives on land, although they hatched in the water and returned to lay their eggs. This was the origin of the amphibians. About 365 Ma, another period of extinction occurred, perhaps as a result of global cooling.[155] Plants evolved seeds, which dramatically accelerated their spread on land, around this time (by approximately 360 Ma).[156][157]
About 20 million years later (340 Ma[117]:293–296), the amniotic egg evolved, which could be laid on land, giving a survival advantage to tetrapod embryos. This resulted in the divergence of amniotes from amphibians. Another 30 million years (310 Ma[117]:254–256) saw the divergence of the synapsids (including mammals) from the sauropsids (including birds and reptiles). Other groups of organisms continued to evolve, and lines diverged—in fish, insects, bacteria, and so on—but less is known of the details.[citation needed]
After yet another extinction event, the most severe in the fossil record (251–250 Ma), dinosaurs split off from their reptilian ancestors around 230 Ma.[158] The Triassic–Jurassic extinction event at 200 Ma spared many of the dinosaurs,[22][159] and they soon became dominant among the vertebrates. Though some mammalian lines began to separate during this period, existing mammals were probably small animals resembling shrews.[117]:169
The boundary between avian and non-avian dinosaurs is not clear, but Archaeopteryx, traditionally considered one of the first birds, lived around 150 Ma.[160]
The earliest evidence of angiosperms evolving flowers dates to the Cretaceous period, some 20 million years later (132 Ma).[161]
The first of five great mass extinctions was the Ordovician-Silurian extinction. Its possible cause was the intense glaciation of Gondwana, which eventually led to a snowball Earth. Some 60% of marine invertebrates and 25% of all families became extinct.[citation needed]
The second mass extinction was the Late Devonian extinction, probably caused by the evolution of trees, which could have led to the depletion of greenhouse gases (like CO2) or the eutrophication of water. 70% of all species became extinct.[citation needed]
The third mass extinction was the Permian-Triassic event, or the Great Dying, possibly caused by some combination of the Siberian Traps volcanic event, an asteroid impact, methane hydrate gasification, sea level fluctuations, and a major anoxic event. Either the proposed Wilkes Land crater[162] in Antarctica or the Bedout structure off the northwest coast of Australia may indicate an impact connection with the Permian-Triassic extinction. But it remains uncertain whether these or other proposed Permian-Triassic boundary craters are real impact craters, or even contemporaneous with the Permian-Triassic extinction event. This was by far the deadliest extinction ever, with about 57% of all families and 83% of all genera killed.[163][164]
The fourth mass extinction was the Triassic-Jurassic extinction event in which almost all synapsids and archosaurs became extinct, probably due to new competition from dinosaurs.[citation needed]
The fifth and most recent mass extinction was the K-T extinction. About 66 Ma, a 10-kilometer (6.2 mi) asteroid struck Earth just off the Yucatán Peninsula—then the southwestern tip of Laurasia—where the Chicxulub crater is today. This ejected vast quantities of particulate matter and vapor into the air that occluded sunlight, inhibiting photosynthesis. 75% of all life, including the non-avian dinosaurs, became extinct,[165] marking the end of the Cretaceous period and Mesozoic era.[citation needed]
The first true mammals evolved in the shadows of dinosaurs and other large archosaurs that filled the world by the late Triassic. The first mammals were very small, and were probably nocturnal to escape predation. Mammal diversification truly began only after the Cretaceous-Paleogene extinction event.[166] By the early Paleocene, the Earth had recovered from the extinction, and mammalian diversity increased. Creatures like Ambulocetus took to the oceans to eventually evolve into whales,[167] whereas some creatures, like primates, took to the trees.[168] This all changed during the mid to late Eocene, when the circum-Antarctic current formed between Antarctica and Australia and disrupted weather patterns on a global scale. Grassless savannas began to dominate much of the landscape, mammals such as Andrewsarchus rose up to become the largest known terrestrial predatory mammal ever,[169] and early whales like Basilosaurus took control of the seas.[citation needed]
The evolution of grass brought a remarkable change to the Earth's landscape, and the new open spaces created pushed mammals to get bigger and bigger. Grass started to expand in the Miocene, the epoch in which many modern-day mammals first appeared. Giant ungulates like Paraceratherium and Deinotherium evolved to rule the grasslands. The evolution of grass also brought primates down from the trees, helping set human evolution in motion. The first big cats evolved during this time as well.[170] The Tethys Sea was closed off by the collision of Africa and Europe.[171]
The formation of Panama was perhaps the most important geological event to occur in the last 60 million years. Atlantic and Pacific currents were closed off from each other, leading to the formation of the Gulf Stream, which warmed Europe. The land bridge allowed the isolated creatures of South America to migrate over to North America, and vice versa.[172] Various species migrated south, leading to the presence in South America of llamas, the spectacled bear, kinkajous and jaguars.[citation needed]
About 2.6 million years ago, the Pleistocene epoch began, featuring dramatic climatic changes due to the ice ages. The ice ages led to the evolution of modern humans in Africa and their subsequent expansion. The dominant megafauna fed on grasslands that, by now, had taken over much of the subtropical world. The large amounts of water locked up in ice caused various bodies of water, such as the North Sea and the Bering Strait, to shrink and sometimes disappear. Many believe that a huge migration took place along Beringia; this is why camels and horses (both of which evolved in, and later became extinct in, North America) are found in the Old World today, and why humans reached the Americas. The end of the last ice age coincided with the expansion of humans and a massive die-off of ice-age megafauna; this extinction is sometimes nicknamed "the Sixth Extinction".
A small African ape living around 6 Ma was the last animal whose descendants would include both modern humans and their closest relatives, the chimpanzees.[117]:100–101 Only two branches of its family tree have surviving descendants. Very soon after the split, for reasons that are still unclear, apes in one branch developed the ability to walk upright.[117]:95–99 Brain size increased rapidly, and by 2 Ma, the first animals classified in the genus Homo had appeared.[150]:300 Of course, the line between different species or even genera is somewhat arbitrary as organisms continuously change over generations. Around the same time, the other branch split into the ancestors of the common chimpanzee and the ancestors of the bonobo as evolution continued simultaneously in all life forms.[117]:100–101
The ability to control fire probably began in Homo erectus (or Homo ergaster), probably at least 790,000 years ago[173] but perhaps as early as 1.5 Ma.[117]:67 The use and discovery of controlled fire may even predate Homo erectus. Fire was possibly used by the early Lower Paleolithic (Oldowan) hominid Homo habilis or robust australopithecines such as Paranthropus.[174]
It is more difficult to establish the origin of language; it is unclear whether Homo erectus could speak or whether that capability arose only with Homo sapiens.[117]:67 As brain size increased, babies were born earlier, before their heads grew too large to pass through the pelvis. As a result, they exhibited more plasticity, and thus possessed an increased capacity to learn and required a longer period of dependence. Social skills became more complex, language became more sophisticated, and tools became more elaborate. This contributed to further cooperation and intellectual development.[176]:7 Modern humans (Homo sapiens) are believed to have originated around 200,000 years ago or earlier in Africa; the oldest fossils date back to around 160,000 years ago.[177]
The first humans to show signs of spirituality are the Neanderthals (usually classified as a separate species with no surviving descendants); they buried their dead, often with no sign of food or tools.[178]:17 However, evidence of more sophisticated beliefs, such as the early Cro-Magnon cave paintings (probably with magical or religious significance)[178]:17–19 did not appear until 32,000 years ago.[179] Cro-Magnons also left behind stone figurines such as Venus of Willendorf, probably also signifying religious belief.[178]:17–19 By 11,000 years ago, Homo sapiens had reached the southern tip of South America, the last of the uninhabited continents (except for Antarctica, which remained undiscovered until 1820 AD).[180] Tool use and communication continued to improve, and interpersonal relationships became more intricate.[citation needed]
Throughout more than 90% of its history, Homo sapiens lived in small bands as nomadic hunter-gatherers.[176]:8 As language became more complex, the ability to remember and communicate information resulted, according to a theory proposed by Richard Dawkins, in a new replicator: the meme.[181] Ideas could be exchanged quickly and passed down the generations. Cultural evolution quickly outpaced biological evolution, and history proper began. Between 8500 and 7000 BC, humans in the Fertile Crescent in the Middle East began the systematic husbandry of plants and animals: agriculture.[182] This spread to neighboring regions, and developed independently elsewhere, until most Homo sapiens lived sedentary lives in permanent settlements as farmers. Not all societies abandoned nomadism, especially those in isolated areas of the globe poor in domesticable plant species, such as Australia.[183] However, among those civilizations that did adopt agriculture, the relative stability and increased productivity provided by farming allowed the population to expand.[citation needed]
Agriculture had a major impact; humans began to affect the environment as never before. Surplus food allowed a priestly or governing class to arise, followed by increasing division of labor. This led to Earth's first civilization at Sumer in the Middle East, between 4000 and 3000 BC.[176]:15 Additional civilizations quickly arose in ancient Egypt, at the Indus River valley and in China. The invention of writing enabled complex societies to arise: record-keeping and libraries served as a storehouse of knowledge and increased the cultural transmission of information. Humans no longer had to spend all their time working for survival, enabling the first specialized occupations (e.g. craftsmen, merchants, priests, etc.). Curiosity and education drove the pursuit of knowledge and wisdom, and various disciplines, including science (in a primitive form), arose. This in turn led to the emergence of increasingly larger and more complex civilizations, such as the first empires, which at times traded with one another, or fought for territory and resources.
By around 500 BC, there were advanced civilizations in the Middle East, Iran, India, China, and Greece, at times expanding, at times entering into decline.[176]:3 In 221 BC, China became a single polity that would grow to spread its culture throughout East Asia, and it has remained the most populous nation in the world. The fundamentals of Western civilization were largely shaped in Ancient Greece, with the world's first democratic government and major advances in philosophy, science, and mathematics, and in Ancient Rome in law, government, and engineering.[184] The Roman Empire was Christianized by Emperor Constantine in the early 4th century and declined by the end of the 5th. From the 7th century onward, the Christianization of Europe continued. In 610, Islam was founded and quickly became the dominant religion in Western Asia. The House of Wisdom was established in Abbasid-era Baghdad, Iraq.[185] It is considered to have been a major intellectual center during the Islamic Golden Age, where Muslim scholars in Baghdad and Cairo flourished from the ninth to the thirteenth centuries until the Mongol sack of Baghdad in 1258 AD. In 1054 AD the Great Schism between the Roman Catholic Church and the Eastern Orthodox Church led to the prominent cultural differences between Western and Eastern Europe.[citation needed]
In the 14th century, the Renaissance began in Italy with advances in religion, art, and science.[176]:317–319 At that time the Christian Church as a political entity lost much of its power. In 1492, Christopher Columbus reached the Americas, initiating great changes to the New World. European civilization began to change beginning in 1500, leading to the scientific and industrial revolutions. Europe began to exert political and cultural dominance over human societies around the world, a time known as the Colonial era (also see Age of Discovery).[176]:295–299 In the 18th century a cultural movement known as the Age of Enlightenment further shaped the mentality of Europe and contributed to its secularization. From 1914 to 1918 and 1939 to 1945, nations around the world were embroiled in world wars. Established following World War I, the League of Nations was a first step in establishing international institutions to settle disputes peacefully. After failing to prevent World War II, mankind's bloodiest conflict, it was replaced by the United Nations. After the war, many new states were formed, declaring or being granted independence in a period of decolonization. The democratic capitalist United States and the socialist Soviet Union became the world's dominant superpowers for a time, and they held an ideological, often-violent rivalry known as the Cold War until the dissolution of the latter. In 1992, several European nations signed the treaty establishing the European Union. As transportation and communication improved, the economies and political affairs of nations around the world have become increasingly intertwined. This globalization has often produced both conflict and cooperation.[citation needed]
Change has continued at a rapid pace from the mid-1940s to today. Technological developments include nuclear weapons, computers, genetic engineering, and nanotechnology. Economic globalization, spurred by advances in communication and transportation technology, has influenced everyday life in many parts of the world. Cultural and institutional forms such as democracy, capitalism, and environmentalism have increased in influence. Major concerns and problems such as disease, war, poverty, violent radicalism, and, more recently, human-caused climate change have risen as the world population increases.[citation needed]
In 1957, the Soviet Union launched the first artificial satellite into orbit and, soon afterward, Yuri Gagarin became the first human in space. Neil Armstrong, an American, was the first to set foot on another astronomical object, the Moon. Unmanned probes have been sent to all the known planets in the Solar System, with some (such as the two Voyager spacecraft) having left the Solar System. Five space agencies, representing over fifteen countries,[186] have worked together to build the International Space Station. Aboard it, there has been a continuous human presence in space since 2000.[187] The World Wide Web became a part of everyday life in the 1990s and has since become an indispensable source of information in the developed world.[citation needed]
en/2579.html.txt
ADDED
A car (or automobile) is a wheeled motor vehicle used for transportation. Most definitions of cars say that they run primarily on roads, seat one to eight people, have four tires, and mainly transport people rather than goods.[2][3]
Cars came into global use during the 20th century, and developed economies depend on them. The year 1886 is regarded as the birth year of the modern car when German inventor Karl Benz patented his Benz Patent-Motorwagen. Cars became widely available in the early 20th century. One of the first cars accessible to the masses was the 1908 Model T, an American car manufactured by the Ford Motor Company. Cars were rapidly adopted in the US, where they replaced animal-drawn carriages and carts, but took much longer to be accepted in Western Europe and other parts of the world.[citation needed]
Cars have controls for driving, parking, passenger comfort, and a variety of lights. Over the decades, additional features and controls have been added to vehicles, making them progressively more complex, but also more reliable and easier to operate.[citation needed] These include rear-reversing cameras, air conditioning, navigation systems, and in-car entertainment. Most cars in use in the 2010s are propelled by an internal combustion engine, fueled by the combustion of fossil fuels. Electric cars, which were invented early in the history of the car, became commercially available in the 2000s and are predicted to cost less to buy than gasoline cars before 2025.[4][5] The transition from fossil fuels to electric cars features prominently in most climate change mitigation scenarios.[6]
There are costs and benefits to car use. The costs to the individual include acquiring the vehicle, interest payments (if the car is financed), repairs and maintenance, fuel, depreciation, driving time, parking fees, taxes, and insurance.[7] The costs to society include maintaining roads, land use, road congestion, air pollution, public health, healthcare, and disposing of the vehicle at the end of its life. Traffic collisions are the largest cause of injury-related deaths worldwide.[8]
The personal benefits include on-demand transportation, mobility, independence, and convenience.[9] The societal benefits include economic benefits, such as job and wealth creation from the automotive industry, transportation provision, societal well-being from leisure and travel opportunities, and revenue generation from the taxes. People's ability to move flexibly from place to place has far-reaching implications for the nature of societies.[10] There are around 1 billion cars in use worldwide. The numbers are increasing rapidly, especially in China, India and other newly industrialized countries.[11]
The English word car is believed to originate from Latin carrus/carrum "wheeled vehicle" or (via Old North French) Middle English carre "two-wheeled cart," both of which in turn derive from Gaulish karros "chariot."[12][13] It originally referred to any wheeled horse-drawn vehicle, such as a cart, carriage, or wagon.[14][15]
"Motor car," attested from 1895, is the usual formal term in British English.[3] "Autocar," a variant likewise attested from 1895 and literally meaning "self-propelled car," is now considered archaic.[16] "Horseless carriage" is attested from 1895.[17]
"Automobile," a classical compound derived from Ancient Greek autós (αὐτός) "self" and Latin mobilis "movable," entered English from French and was first adopted by the Automobile Club of Great Britain in 1897.[18] It fell out of favour in Britain and is now used chiefly in North America,[19] where the abbreviated form "auto" commonly appears as an adjective in compound formations like "auto industry" and "auto mechanic".[20][21] Both forms are still used in everyday Dutch (auto/automobiel) and German (Auto/Automobil).[citation needed]
The first working steam-powered vehicle was designed — and quite possibly built — by Ferdinand Verbiest, a Flemish member of a Jesuit mission in China around 1672. It was a 65-cm-long scale-model toy for the Chinese Emperor that was unable to carry a driver or a passenger.[9][22][23] It is not known with certainty if Verbiest's model was successfully built or run.[23]
Nicolas-Joseph Cugnot is widely credited with building the first full-scale, self-propelled mechanical vehicle or car in about 1769; he created a steam-powered tricycle.[24] He also constructed two steam tractors for the French Army, one of which is preserved in the French National Conservatory of Arts and Crafts.[25] His inventions were, however, handicapped by problems with water supply and maintaining steam pressure.[25] In 1801, Richard Trevithick built and demonstrated his Puffing Devil road locomotive, believed by many to be the first demonstration of a steam-powered road vehicle. It was unable to maintain sufficient steam pressure for long periods and was of little practical use.
The development of external combustion engines is detailed as part of the history of the car but is often treated separately from the development of true cars. A variety of steam-powered road vehicles were used during the first part of the 19th century, including steam cars, steam buses, phaetons, and steam rollers. Sentiment against them led to the Locomotive Acts of 1865.
In 1807, Nicéphore Niépce and his brother Claude created what was probably the world's first internal combustion engine (which they called a Pyréolophore), but they chose to install it in a boat on the river Saone in France.[26] Coincidentally, in 1807 the Swiss inventor François Isaac de Rivaz designed his own 'de Rivaz internal combustion engine' and used it to develop the world's first vehicle to be powered by such an engine. The Niépces' Pyréolophore was fuelled by a mixture of Lycopodium powder (dried spores of the Lycopodium plant), finely crushed coal dust and resin that were mixed with oil, whereas de Rivaz used a mixture of hydrogen and oxygen.[26] Neither design was very successful, as was the case with others, such as Samuel Brown, Samuel Morey, and Etienne Lenoir with his hippomobile, who each produced vehicles (usually adapted carriages or carts) powered by internal combustion engines.[1]
In November 1881, French inventor Gustave Trouvé demonstrated the first working (three-wheeled) car powered by electricity at the International Exposition of Electricity, Paris.[27] Although several other German engineers (including Gottlieb Daimler, Wilhelm Maybach, and Siegfried Marcus) were working on the problem at about the same time, Karl Benz generally is acknowledged as the inventor of the modern car.[1]
In 1879, Benz was granted a patent for his first engine, which had been designed in 1878. Many of his other inventions made the use of the internal combustion engine feasible for powering a vehicle. His first Motorwagen was built in 1885 in Mannheim, Germany. He was awarded the patent for its invention as of his application on 29 January 1886 (under the auspices of his major company, Benz & Cie., which was founded in 1883). Benz began promotion of the vehicle on 3 July 1886, and about 25 Benz vehicles were sold between 1888 and 1893, when his first four-wheeler was introduced along with a cheaper model. These were also powered by four-stroke engines of his own design. Emile Roger of France, already producing Benz engines under license, now added the Benz car to his line of products. Because France was more open to the early cars, initially more were built and sold in France through Roger than Benz sold in Germany. In August 1888, Bertha Benz, the wife of Karl Benz, undertook the first road trip by car, to prove the road-worthiness of her husband's invention.
In 1896, Benz designed and patented the first internal-combustion flat engine, called the boxermotor. During the last years of the nineteenth century, Benz was the largest car company in the world, with 572 units produced in 1899, and, because of its size, Benz & Cie. became a joint-stock company. The first motor car in central Europe, and one of the first factory-made cars in the world, was produced by the Czech company Nesselsdorfer Wagenbau (later renamed Tatra) in 1897: the Präsident automobil.
Daimler and Maybach founded Daimler Motoren Gesellschaft (DMG) in Cannstatt in 1890, and sold their first car in 1892 under the brand name Daimler. It was a horse-drawn stagecoach built by another manufacturer, which they retrofitted with an engine of their design. By 1895 about 30 vehicles had been built by Daimler and Maybach, either at the Daimler works or in the Hotel Hermann, where they set up shop after disputes with their backers. Benz, Maybach and the Daimler team seem to have been unaware of each other's early work. They never worked together; by the time of the merger of the two companies, Daimler and Maybach were no longer part of DMG. Daimler died in 1900 and later that year, Maybach designed an engine named Daimler-Mercedes that was placed in a specially ordered model built to specifications set by Emil Jellinek. This was a production of a small number of vehicles for Jellinek to race and market in his country. Two years later, in 1902, a new model DMG car was produced and the model was named Mercedes after the Maybach engine, which generated 35 hp. Maybach quit DMG shortly thereafter and opened a business of his own. Rights to the Daimler brand name were sold to other manufacturers.
Karl Benz proposed co-operation between DMG and Benz & Cie. when economic conditions began to deteriorate in Germany following the First World War, but the directors of DMG initially refused to consider it. Negotiations between the two companies resumed several years later when these conditions worsened, and in 1924 they signed an Agreement of Mutual Interest, valid until the year 2000. Both enterprises standardized design, production, purchasing, and sales, and they advertised or marketed their car models jointly, while keeping their respective brands. On 28 June 1926, Benz & Cie. and DMG finally merged as the Daimler-Benz company, baptizing all of its cars Mercedes-Benz, as a brand honoring the most important model of the DMG cars, the Maybach design later referred to as the 1902 Mercedes-35 hp, along with the Benz name. Karl Benz remained a member of the board of directors of Daimler-Benz until his death in 1929, and at times his two sons also participated in the management of the company.
In 1890, Émile Levassor and Armand Peugeot of France began producing vehicles with Daimler engines, and so laid the foundation of the automotive industry in France. In 1891, Auguste Doriot and his Peugeot colleague Louis Rigoulot completed the longest trip by a gasoline-powered vehicle when their self-designed and built Daimler powered Peugeot Type 3 completed 2,100 km (1,300 miles) from Valentigney to Paris and Brest and back again. They were attached to the first Paris–Brest–Paris bicycle race, but finished 6 days after the winning cyclist, Charles Terront.
The first design for an American car with a gasoline internal combustion engine was made in 1877 by George Selden of Rochester, New York. Selden applied for a patent for a car in 1879, but the patent application expired because the vehicle was never built. After a delay of sixteen years and a series of attachments to his application, on 5 November 1895, Selden was granted a United States patent (U.S. Patent 549,160) for a two-stroke car engine, which hindered, more than encouraged, development of cars in the United States. His patent was challenged by Henry Ford and others, and overturned in 1911.
In 1893, the first running, gasoline-powered American car was built and road-tested by the Duryea brothers of Springfield, Massachusetts. The first public run of the Duryea Motor Wagon took place on 21 September 1893, on Taylor Street in Metro Center Springfield.[28][29] The Studebaker Automobile Company, subsidiary of a long-established wagon and coach manufacturer, started to build cars in 1897[30]:p.66 and commenced sales of electric vehicles in 1902 and gasoline vehicles in 1904.[31]
In Britain, there had been several attempts to build steam cars with varying degrees of success, with Thomas Rickett even attempting a production run in 1860.[32] Santler from Malvern is recognized by the Veteran Car Club of Great Britain as having made the first gasoline-powered car in the country in 1894,[33] followed by Frederick William Lanchester in 1895, but these were both one-offs.[33] The first production vehicles in Great Britain came from the Daimler Company, a company founded by Harry J. Lawson in 1896, after purchasing the right to use the name of the engines. Lawson's company made its first car in 1897, and they bore the name Daimler.[33]
In 1892, German engineer Rudolf Diesel was granted a patent for a "New Rational Combustion Engine". In 1897, he built the first diesel engine.[1] Steam-, electric-, and gasoline-powered vehicles competed for decades, with gasoline internal combustion engines achieving dominance in the 1910s. Although various pistonless rotary engine designs have attempted to compete with the conventional piston and crankshaft design, only Mazda's version of the Wankel engine has had more than very limited success.
All in all, it is estimated that over 100,000 patents created the modern automobile and motorcycle.[34]
Large-scale, production-line manufacturing of affordable cars was started by Ransom Olds in 1901 at his Oldsmobile factory in Lansing, Michigan and based upon stationary assembly line techniques pioneered by Marc Isambard Brunel at the Portsmouth Block Mills, England, in 1802. The assembly line style of mass production and interchangeable parts had been pioneered in the U.S. by Thomas Blanchard in 1821, at the Springfield Armory in Springfield, Massachusetts.[35] This concept was greatly expanded by Henry Ford, beginning in 1913 with the world's first moving assembly line for cars at the Highland Park Ford Plant.
As a result, Ford's cars came off the line at fifteen-minute intervals, much faster than previous methods, increasing productivity eightfold while using less manpower (from 12.5 man-hours to 1 hour 33 minutes).[36] It was so successful that paint became a bottleneck. Only Japan black would dry fast enough, forcing the company to drop the variety of colors available before 1913, until fast-drying Duco lacquer was developed in 1926. This is the source of Ford's apocryphal remark, "any color as long as it's black".[36] In 1914, an assembly line worker could buy a Model T with four months' pay.[36]
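As a quick arithmetic check, the two assembly times quoted above reproduce the eightfold figure on their own; a minimal Python sketch using only the numbers in the text:

hours_before = 12.5          # man-hours per car before the moving assembly line
hours_after = 1 + 33 / 60    # 1 hour 33 minutes per car afterwards

speedup = hours_before / hours_after
print(f"{hours_before} h -> {hours_after:.2f} h per car: roughly {speedup:.1f}x productivity")
# prints: 12.5 h -> 1.55 h per car: roughly 8.1x productivity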
Ford's complex safety procedures—especially assigning each worker to a specific location instead of allowing them to roam about—dramatically reduced the rate of injury.[citation needed] The combination of high wages and high efficiency is called "Fordism" and was copied by most major industries. The efficiency gains from the assembly line also coincided with the economic rise of the United States. The assembly line forced workers to work at a certain pace with very repetitive motions, which led to more output per worker, while other countries were still using less productive methods.
Within the automotive industry, the success of the method was overwhelming, and it quickly spread worldwide, with the founding of Ford France and Ford Britain in 1911, Ford Denmark in 1923, and Ford Germany in 1925; in 1921, Citroen was the first native European manufacturer to adopt the production method. Soon, companies had to have assembly lines or risk going broke; by 1930, 250 companies which did not had disappeared.[36]
Development of automotive technology was rapid, due in part to the hundreds of small manufacturers competing to gain the world's attention. Key developments included electric ignition and the electric self-starter (both by Charles Kettering, for the Cadillac Motor Company in 1910–1911), independent suspension, and four-wheel brakes.
Since the 1920s, nearly all cars have been mass-produced to meet market needs, so marketing plans often have heavily influenced car design. It was Alfred P. Sloan who established the idea of different makes of cars produced by one company, called the General Motors Companion Make Program, so that buyers could "move up" as their fortunes improved.
Reflecting the rapid pace of change, makes shared parts with one another, so that larger production volumes resulted in lower costs for each price range. For example, in the 1930s, LaSalles, sold by Cadillac, used cheaper mechanical parts made by Oldsmobile; in the 1950s, Chevrolet shared hood, doors, roof, and windows with Pontiac; by the 1990s, corporate powertrains and shared platforms (with interchangeable brakes, suspension, and other parts) were common. Even so, only major makers could afford the high costs, and even companies with decades of production, such as Apperson, Cole, Dorris, Haynes, or Premier, could not keep up: of some two hundred American car makers in existence in 1920, only 43 survived to 1930, and, with the Great Depression, by 1940 only 17 of those were left.[36]
In Europe, much the same would happen. Morris set up its production line at Cowley in 1924 and soon outsold Ford, while beginning in 1923 to follow Ford's practice of vertical integration, buying Hotchkiss (engines), Wrigley (gearboxes), and Osberton (radiators), for instance, as well as competitors such as Wolseley: in 1925, Morris had 41% of total British car production. Most British small-car assemblers, from Abbey to Xtra, had gone under. Citroen did the same in France, coming to cars in 1919; between them and cheap cars built in reply, such as Renault's 10CV and Peugeot's 5CV, 550,000 cars were produced in France in 1925, and Mors, Hurtu, and others could not compete.[36] Germany's first mass-manufactured car, the Opel 4PS Laubfrosch (Tree Frog), came off the line at Rüsselsheim in 1924, soon making Opel the top car builder in Germany, with 37.5% of the market.[36]
In Japan, car production was very limited before World War II. Only a handful of companies were producing vehicles in limited numbers, and these were small, three-wheeled vehicles for commercial use, like those of Daihatsu, or the result of partnering with European companies, like Isuzu building the Wolseley A-9 in 1922. Mitsubishi was also partnered with Fiat and built the Mitsubishi Model A based on a Fiat vehicle. Toyota, Nissan, Suzuki, Mazda, and Honda began as companies producing non-automotive products before the war, switching to car production during the 1950s. Kiichiro Toyoda's decision to take Toyoda Loom Works into automobile manufacturing would create what would eventually become Toyota Motor Corporation, the largest automobile manufacturer in the world. Subaru, meanwhile, was formed from a conglomerate of six companies that banded together as Fuji Heavy Industries, as a result of having been broken up under keiretsu legislation.
According to the European Environment Agency, the transport sector is a major contributor to air pollution, noise pollution and climate change.[37]
Most cars in use in the 2010s run on gasoline burnt in an internal combustion engine (ICE). The International Organization of Motor Vehicle Manufacturers says that, in countries that mandate low sulfur gasoline, gasoline-fuelled cars built to late 2010s standards (such as Euro-6) emit very little local air pollution.[38][39] Some cities ban older gasoline-fuelled cars, and some countries plan to ban sales in future. However, some environmental groups say this phase-out of fossil fuel vehicles must be brought forward to limit climate change. Production of gasoline-fuelled cars peaked in 2017.[40][41]
Other hydrocarbon fossil fuels also burnt by deflagration (rather than detonation) in ICE cars include diesel, Autogas and CNG. Removal of fossil fuel subsidies,[42][43] concerns about oil dependence, tightening environmental laws and restrictions on greenhouse gas emissions are propelling work on alternative power systems for cars. This includes hybrid vehicles, plug-in electric vehicles and hydrogen vehicles. 2.1 million light electric vehicles (of all types, but mainly cars) were sold in 2018, over half in China: an increase of 64% on the previous year, giving a global total of 5.4 million on the road.[44] Vehicles running on alternative fuels, such as ethanol flexible-fuel vehicles and natural gas vehicles,[clarification needed] are also gaining popularity in some countries.[citation needed] Cars for racing or speed records have sometimes employed jet or rocket engines, but these are impractical for common use.
Oil consumption has increased rapidly in the 20th and 21st centuries because there are more cars; the 1985–2003 oil glut even fuelled the sales of low-economy vehicles in OECD countries. The BRIC countries are adding to this consumption.
Cars are equipped with controls used for driving, passenger comfort and safety, normally operated by a combination of the use of feet and hands, and occasionally by voice on 21st century cars. These controls include a steering wheel, pedals for operating the brakes and controlling the car's speed (and, in a manual transmission car, a clutch pedal), a shift lever or stick for changing gears, and a number of buttons and dials for turning on lights, ventilation and other functions. Modern cars' controls are now standardized, such as the location for the accelerator and brake, but this was not always the case. Controls are evolving in response to new technologies, for example the electric car and the integration of mobile communications.
Some of the original controls are no longer required. For example, all cars once had controls for the choke valve, clutch, ignition timing, and a crank instead of an electric starter. However, new controls have also been added to vehicles, making them more complex. These include air conditioning, navigation systems, and in-car entertainment. Another trend is the replacement of physical knobs and switches for secondary controls with touchscreen controls such as BMW's iDrive and Ford's MyFord Touch. Another change is that, while early cars' pedals were physically linked to the brake mechanism and throttle, in the 2010s cars have increasingly replaced these physical linkages with electronic controls.
Cars are typically fitted with multiple types of lights. These include headlights, which are used to illuminate the way ahead and make the car visible to other users, so that the vehicle can be used at night; in some jurisdictions, daytime running lights; red brake lights to indicate when the brakes are applied; amber turn signal lights to indicate the turn intentions of the driver; white-colored reverse lights to illuminate the area behind the car (and indicate that the driver will be or is reversing); and on some vehicles, additional lights (e.g., side marker lights) to increase the visibility of the car. Interior lights on the ceiling of the car are usually fitted for the driver and passengers. Some vehicles also have a trunk light and, more rarely, an engine compartment light.
During the late 20th and early 21st century, cars increased in weight due to batteries,[46] modern steel safety cages, anti-lock brakes, airbags, and "more-powerful—if more-efficient—engines"[47] and, as of 2019[update], typically weigh between 1 and 3 tonnes.[48] Heavier cars are safer for the driver from a crash perspective, but more dangerous for other vehicles and road users.[47] The weight of a car influences fuel consumption and performance, with more weight resulting in increased fuel consumption and decreased performance. The Smart Fortwo, a small city car, weighs 750–795 kg (1,655–1,755 lb). Heavier cars include full-size cars, SUVs and extended-length SUVs like the Suburban.
According to research conducted by Julian Allwood of the University of Cambridge, global energy use could be greatly reduced by using lighter cars, and an average weight of 500 kg (1,100 lb) has been said to be well achievable.[49][better source needed] In some competitions, such as the Shell Eco Marathon, average car weights of 45 kg (99 lb) have also been achieved.[50] These cars are only single-seaters (still falling within the definition of a car, although 4-seater cars are more common), but they nevertheless demonstrate the amount by which car weights could still be reduced, and the consequent lower fuel use (i.e. up to a fuel use of 2560 km/l).[51]
Most cars are designed to carry multiple occupants, often with four or five seats. Cars with five seats typically seat two passengers in the front and three in the rear. Full-size cars and large sport utility vehicles can often carry six, seven, or more occupants depending on the arrangement of the seats. On the other hand, sports cars are most often designed with only two seats. The differing needs for passenger capacity and their luggage or cargo space has resulted in the availability of a large variety of body styles to meet individual consumer requirements that include, among others, the sedan/saloon, hatchback, station wagon/estate, and minivan.
Traffic collisions are the largest cause of injury-related deaths worldwide.[8] Mary Ward became one of the first documented car fatalities in 1869 in Parsonstown, Ireland,[52] and Henry Bliss one of the United States' first pedestrian car casualties in 1899 in New York City.[53]
There are now standard tests for safety in new cars, such as the EuroNCAP and the US NCAP tests,[54] and insurance-industry-backed tests by the Insurance Institute for Highway Safety (IIHS).[55]
The costs of car usage, which may include the cost of acquiring the vehicle, repairs and auto maintenance, fuel, depreciation, driving time, parking fees, taxes, and insurance,[7] are weighed against the cost of the alternatives, and the value of the benefits – perceived and real – of vehicle usage. The benefits may include on-demand transportation, mobility, independence and convenience.[9] During the 1920s, cars had another benefit: "[c]ouples finally had a way to head off on unchaperoned dates, plus they had a private space to snuggle up close at the end of the night."[57]
Similarly, the costs to society of car use may include maintaining roads, land use, air pollution, road congestion, public health, health care, and disposing of the vehicle at the end of its life, and can be balanced against the value of the benefits to society that car use generates. Societal benefits may include economic benefits, such as job and wealth creation from car production and maintenance, transportation provision, societal wellbeing derived from leisure and travel opportunities, and revenue generation from taxation. The ability of humans to move flexibly from place to place has far-reaching implications for the nature of societies.[10]
Cars are a major cause of urban air pollution,[58] with all types of cars producing dust from brakes, tyres and road wear.[59] As of 2018[update], the average diesel car has a worse effect on air quality than the average gasoline car,[60] but both gasoline and diesel cars pollute more than electric cars.[61] While there are different ways to power cars, most rely on gasoline or diesel, and they consume almost a quarter of world oil production as of 2019[update].[40] In 2018, passenger road vehicles emitted 3.6 gigatonnes of carbon dioxide.[62] As of 2019[update], due to greenhouse gases emitted during battery production, electric cars must be driven tens of thousands of kilometers before their lifecycle carbon emissions are less than those of fossil fuel cars,[63] but this is expected to improve in future due to longer-lasting[64] batteries being produced in larger factories[65] and lower-carbon electricity. Many governments are using fiscal policies, such as road tax, to discourage the purchase and use of more polluting cars,[66] and many cities are doing the same with low-emission zones.[67] Fuel taxes may act as an incentive for the production of more efficient, hence less polluting, car designs (e.g. hybrid vehicles) and the development of alternative fuels. High fuel taxes or cultural change may provide a strong incentive for consumers to purchase lighter, smaller, more fuel-efficient cars, or to not drive.[67]
The lifetime of a car built in the 2020s is expected to be about 16 years, or about 2 million kilometres (1.2 million miles) if driven a lot.[68] According to the International Energy Agency, fuel economy improved 0.7% in 2017, but an annual improvement of 3.7% is needed to meet the Global Fuel Economy Initiative 2030 target.[69] The increase in sales of SUVs is bad for fuel economy.[40] Many cities in Europe have banned older fossil fuel cars, and all fossil fuel vehicles will be banned in Amsterdam from 2030.[70] Many Chinese cities limit licensing of fossil fuel cars,[71] and many countries plan to stop selling them between 2025 and 2050.[72]
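To illustrate why the gap between these two rates matters, a minimal Python sketch compounding each over the 13 years from 2017 to the 2030 target date (the horizon is an assumption for illustration, not a figure from the cited source):

years = 2030 - 2017          # assumed 13-year horizon, for illustration only

observed = 1.007 ** years    # 0.7% per year, the reported 2017 improvement
required = 1.037 ** years    # 3.7% per year, the stated requirement

print(f"0.7%/yr over {years} years: {observed:.2f}x improvement")   # ~1.09x
print(f"3.7%/yr over {years} years: {required:.2f}x improvement")   # ~1.60x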
The manufacture of vehicles is resource intensive, and many manufacturers now report on the environmental performance of their factories, including energy usage, waste and water consumption.[73] Manufacturing each kWh of battery emits a similar amount of carbon as burning through one full tank of gasoline.[74] The growth in popularity of the car allowed cities to sprawl, encouraging more travel by car, which results in inactivity and obesity and can in turn lead to increased risk of a variety of diseases.[75]
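A rough, order-of-magnitude sketch of the battery-versus-tank comparison above; the ~2.3 kg of CO2 per litre of gasoline burnt is a standard combustion factor, while the 50-litre tank and the 60–110 kg CO2e per kWh battery range are illustrative assumptions (published estimates vary widely):

CO2_PER_LITRE = 2.3    # kg CO2 per litre of gasoline burnt (standard combustion factor)
TANK_LITRES = 50       # assumed size of a typical full tank

tank_kg = CO2_PER_LITRE * TANK_LITRES          # ~115 kg CO2 per full tank
battery_low, battery_high = 60, 110            # assumed kg CO2e per kWh of battery capacity

print(f"One full tank burnt: ~{tank_kg:.0f} kg CO2")
print(f"One kWh of battery made: ~{battery_low}-{battery_high} kg CO2e (assumed range)")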
Animals and plants are often negatively affected by cars via habitat destruction and pollution. Over the lifetime of the average car, the "loss of habitat potential" may be over 50,000 m2 (540,000 sq ft) based on primary production correlations.[76] Many animals are also killed on roads by cars every year; this is referred to as roadkill. More recent road developments include significant environmental mitigation in their designs, such as green bridges (designed to allow wildlife crossings) and the creation of wildlife corridors.
Growth in the popularity of vehicles and commuting has led to traffic congestion. Moscow, Istanbul, Bogotá, Mexico City and São Paulo were the world's most congested cities in 2018, according to INRIX, a data analytics company.[77]
Although intensive development of conventional battery electric vehicles is continuing into the 2020s,[78] other car propulsion technologies that are under development include wheel hub motors,[79] wireless charging,[80] hydrogen cars,[81] and hydrogen/electric hybrids.[82] Research into alternative forms of power includes using ammonia instead of hydrogen in fuel cells.[83]
New materials[84] which may replace steel car bodies include duralumin, fiberglass, carbon fiber, biocomposites, and carbon nanotubes. Telematics technology is allowing more and more people to share cars, on a pay-as-you-go basis, through car share and carpool schemes. Communication is also evolving due to connected car systems.[85]
Fully autonomous vehicles, also known as driverless cars, already exist in prototype (such as the Google driverless car), but have a long way to go before they are in general use.
There have been several projects aiming to develop a car on the principles of open design, an approach to designing in which the plans for the machinery and systems are publicly shared, often without monetary compensation. The projects include OScar, Riversimple (through 40fires.org)[86] and c,mm,n.[87] None of the projects has achieved significant success in developing a complete car from both a hardware and a software perspective, and no mass-production-ready open-source design had been introduced as of late 2009. Some car hacking through on-board diagnostics (OBD) has been done so far.[88]
Car-share arrangements and carpooling are also increasingly popular in the US and Europe.[89] For example, in the US, some car-sharing services experienced double-digit growth in revenue and membership between 2006 and 2007. Services like car sharing offer residents a way to "share" a vehicle rather than own a car in already congested neighborhoods.[90]
The automotive industry designs, develops, manufactures, markets, and sells the world's motor vehicles, more than three-quarters of which are cars. In 2018 there were 70 million cars manufactured worldwide,[91] down 2 million from the previous year.[92]
The automotive industry in China produces by far the most (24 million in 2018), followed by Japan (8 million), Germany (5 million) and India (4 million).[91] The largest market is China, followed by the USA.
Around the world there are about a billion cars on the road;[93] they burn over a trillion liters of gasoline and diesel fuel yearly, consuming about 50 EJ (nearly 14,000 terawatt-hours) of energy.[94] The numbers of cars are increasing rapidly in China and India.[11] In the opinion of some, urban transport systems based around the car have proved unsustainable, consuming excessive energy, affecting the health of populations, and delivering a declining level of service despite increasing investment. Many of these negative impacts fall disproportionately on those social groups who are also least likely to own and drive cars.[95][96] The sustainable transport movement focuses on solutions to these problems. The car industry is also facing increasing competition from the public transport sector, as some people re-evaluate their private vehicle usage.
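As a consistency check on these figures, a minimal Python sketch using standard approximate volumetric energy densities; the 1.4-trillion-litre total and its gasoline/diesel split are assumptions for illustration:

MJ_PER_LITRE = {"gasoline": 34.2, "diesel": 38.6}   # approximate energy densities
litres = {"gasoline": 0.8e12, "diesel": 0.6e12}     # assumed split; >1 trillion litres total

total_j = sum(litres[f] * MJ_PER_LITRE[f] * 1e6 for f in litres)
print(f"~{total_j / 1e18:.0f} EJ, i.e. ~{total_j / 3.6e15:,.0f} TWh")
# prints: ~51 EJ, i.e. ~14,033 TWh -- consistent with the figures in the text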
Established alternatives for some aspects of car use include public transport such as buses, trolleybuses, trains, subways, tramways, light rail, cycling, and walking. Bicycle sharing systems have been established in China and many European cities, including Copenhagen and Amsterdam. Similar programs have been developed in large US cities.[98][99] Additional individual modes of transport, such as personal rapid transit could serve as an alternative to cars if they prove to be socially accepted.[100]
The term motorcar was formerly also used in the context of electrified rail systems to denote a car which functions as a small locomotive but also provides space for passengers and baggage. These locomotive cars were often used on suburban routes by both interurban and intercity railroad systems.[101]
en/258.html.txt
ADDED
Anna Pavlovna Pavlova (English: /ˈpævləvə, pɑːvˈloʊvə, pæv-/ PAV-lə-və, pahv-LOH-və, pav-,[1] Russian: Анна Павловна Павлова [ˈanːə ˈpavləvnə ˈpavləvə]), born Anna Matveyevna Pavlova (Russian: Анна Матвеевна Павлова [ˈanːə mɐtˈvʲe(j)ɪvnə ˈpavləvə]; February 12 [O.S. January 31] 1881 – January 23, 1931), was a Russian prima ballerina of the late 19th and the early 20th centuries. She was a principal artist of the Imperial Russian Ballet and the Ballets Russes of Sergei Diaghilev. Pavlova is most recognized for her creation of the role of The Dying Swan and, with her own company, became the first ballerina to tour around the world, including South America, India and Australia.[2]
Anna Matveyevna Pavlova was born in the Preobrazhensky Regiment hospital in Saint Petersburg, where her father, Matvey Pavlovich Pavlov, served.[3] Some sources say that her parents married just before her birth; others, that they married years later. Her mother, Lyubov Feodorovna Pavlova, came from peasants and for some time worked as a laundress at the house of the Russian-Jewish banker Lazar Polyakov. When Anna rose to fame, Polyakov's son Vladimir claimed that she was his father's illegitimate daughter; others speculated that Matvey Pavlov came from Crimean Karaites (there is even a monument in one of Yevpatoria's kenesas dedicated to Pavlova), but neither legend has historical proof.[4][5] Anna Matveyevna changed her patronymic to Pavlovna when she began performing on stage.[6]
Pavlova was born prematurely, was regularly ill, and was soon sent to the village of Ligovo, where her grandmother looked after her.[5] Pavlova's passion for the art of ballet took off when her mother took her to a performance of Marius Petipa's original production of The Sleeping Beauty at the Imperial Maryinsky Theater. The lavish spectacle made an impression on Pavlova. When she was nine, her mother took her to audition for the renowned Imperial Ballet School. Because of her youth, and what was considered her "sickly" appearance, she was rejected, but in 1891, at age 10, she was accepted. She appeared for the first time on stage in Marius Petipa's Un conte de fées (A Fairy Tale), which the ballet master staged for the students of the school.
Young Pavlova's years of training were difficult. Classical ballet did not come easily to her. Her severely arched feet, thin ankles, and long limbs clashed with the small, compact body favoured for the ballerina of the time. Her fellow students taunted her with such nicknames as The broom and La petite sauvage. Undeterred, Pavlova trained to improve her technique. She would practice and practice after learning a step. She said, "No one can arrive from being talented alone. God gives talent, work transforms talent into genius."[7] She took extra lessons from the noted teachers of the day—Christian Johansson, Pavel Gerdt, Nikolai Legat—and from Enrico Cecchetti, considered the greatest ballet virtuoso of the time and founder of the Cecchetti method, a very influential ballet technique used to this day. In 1898, she entered the classe de perfection of Ekaterina Vazem, former Prima ballerina of the Saint Petersburg Imperial Theatres.
During her final year at the Imperial Ballet School, she performed many roles with the principal company. She graduated in 1899 at age 18, chosen to enter the Imperial Ballet a rank ahead of corps de ballet as a coryphée. She made her official début at the Mariinsky Theatre in Pavel Gerdt's Les Dryades prétendues (The False Dryads). Her performance drew praise from the critics, particularly the great critic and historian Nikolai Bezobrazov.
At the height of Petipa's strict academicism, the public was taken aback by Pavlova's style, a gift that paid little heed to academic rules: she frequently performed with bent knees, bad turnout, misplaced port de bras and incorrectly placed tours. Such a style, in many ways, harked back to the time of the romantic ballet and the great ballerinas of old.
Pavlova performed in various classical variations, pas de deux and pas de trois in such ballets as La Camargo, Le Roi Candaule, Marcobomba and The Sleeping Beauty. Her enthusiasm often led her astray: once during a performance as the River Thames in Petipa's The Pharaoh's Daughter her energetic double pique turns led her to lose her balance, and she ended up falling into the prompter's box. Her weak ankles led to difficulty while performing as the fairy Candide in Petipa's The Sleeping Beauty, leading the ballerina to revise the fairy's jumps en pointe, much to the surprise of the Ballet Master. She tried desperately to imitate the renowned Pierina Legnani, Prima ballerina assoluta of the Imperial Theaters. Once, during class, she attempted Legnani's famous fouettés, causing her teacher, Pavel Gerdt, to fly into a rage. He told her,
... leave acrobatics to others. It is positively more than I can bear to see the pressure such steps put on your delicate muscles and the severe arch of your foot. I beg you to never again try to imitate those who are physically stronger than you. You must realize that your daintiness and fragility are your greatest assets. You should always do the kind of dancing which brings out your own rare qualities instead of trying to win praise by mere acrobatic tricks.
Pavlova rose through the ranks quickly, becoming a favorite of the old maestro Petipa. It was from Petipa himself that Pavlova learned the title role in Paquita, Princess Aspicia in The Pharaoh's Daughter, Queen Nisia in Le Roi Candaule, and Giselle. She was named danseuse in 1902, première danseuse in 1905, and, finally, prima ballerina in 1906 after a resounding performance in Giselle. Petipa revised many grand pas for her, as well as many supplemental variations. She was much celebrated by the fanatical balletomanes of Tsarist Saint Petersburg, her legions of fans calling themselves the Pavlovatzi.
When the ballerina Mathilde Kschessinska was pregnant in 1901, she coached Pavlova in the role of Nikya in La Bayadère. Kschessinska, not wanting to be upstaged, was certain Pavlova would fail in the role, as she was considered technically inferior because of her small ankles and lithe legs. Instead, audiences became enchanted with Pavlova and her frail, ethereal look, which fitted the role perfectly, particularly in the scene The Kingdom of the Shades.
Pavlova is perhaps most renowned for creating the role of The Dying Swan, a solo choreographed for her by Michel Fokine. The ballet, created in 1905, is danced to Le cygne from The Carnival of the Animals by Camille Saint-Saëns.
Pavlova also choreographed several solos herself, one of which is The Dragonfly, a short ballet set to music by Fritz Kreisler. While performing the role, Pavlova wore a gossamer gown with large dragonfly wings fixed to the back.
Pavlova had a rivalry with Tamara Karsavina. In the film A Portrait of Giselle, Karsavina recalls a wardrobe malfunction: during one performance her shoulder straps fell, accidentally exposing her, and Pavlova reduced the embarrassed Karsavina to tears.
In the first years of the Ballets Russes, Pavlova worked briefly for Sergei Diaghilev. Originally, she was to dance the lead in Mikhail Fokine's The Firebird, but refused the part, as she could not come to terms with Igor Stravinsky's avant-garde score, and the role was given to Tamara Karsavina. All her life, Pavlova preferred the melodious "musique dansante" of the old maestros such as Cesare Pugni and Ludwig Minkus, and cared little for anything else which strayed from the salon-style ballet music of the 19th century.
After the first Paris season of Ballets Russes, Pavlova left it to form her own company. It performed throughout the world, with a repertory consisting primarily of abridgements of Petipa's works, and specially choreographed pieces for herself. Going independent was
"a very enterprising and daring act. She toured on her own... for twenty years until her death. She traveled everywhere in the world that travel was possible, and introduced the ballet to millions who had never seen any form of Western dancing."[8]
Pavlova also performed many ‘ethnic’ dances, some of which she learned from local teachers during her travels. In addition to the dances of her native Russia, she performed Mexican, Japanese, and East Indian dances. Supported by her interest, Uday Shankar, her partner in Krishna Radha (1923), went on to revive the long-neglected art of the dance in his native India.[9] She also toured China.
In 1916, she produced a 50-minute adaptation of The Sleeping Beauty in New York City.[9] Members of her company were largely English girls with Russianized names.[9] In 1918-1919, her company toured throughout South America, during which time Pavlova exerted an influence on the young American ballerina Ruth Page.[10][11][12]
In 1915, she appeared in the film The Dumb Girl of Portici, in which she played a mute girl betrayed by an aristocrat.[9]
After leaving Russia, Pavlova moved to London, England, settling, in 1912, at the Ivy House on North End Road, Golders Green, north of Hampstead Heath, where she lived for the rest of her life. The house had an ornamental lake where she fed her pet swans, and where now stands a statue of her by the Scots sculptor George Henry Paulin. The house was featured in the film Anna Pavlova. It is now the London Jewish Cultural Centre, but a blue plaque marks it as a site of significant historical interest being Pavlova's home.[13][14] While in London, Pavlova was influential in the development of British ballet, most notably inspiring the career of Alicia Markova. The Gate pub, located on the border of Arkley and Totteridge (London Borough of Barnet), has a story, framed on its walls, describing a visit by Pavlova and her dance company.
There are at least five memorials to Pavlova in London, England: a contemporary sculpture by Tom Merrifield of Pavlova as the Dragonfly in the grounds of Ivy House, a sculpture by the Scot George Henry Paulin in the middle of the Ivy House pond, a blue plaque on the front of Ivy House, a statuette sitting with the urn that holds her ashes in Golders Green Crematorium, and the gilded statue atop the Victoria Palace Theatre.[15][16] When the Victoria Palace Theatre opened in 1911, a gilded statue of Pavlova was installed above its cupola. This was taken down for its safety during World War II and was subsequently lost. In 2006, a replica of the original statue was restored in its place.[17]
In 1928, Anna Pavlova engaged the St. Petersburg conductor Efrem Kurtz to accompany her dancing, which he did until her death in 1931. During the last five years of her life, one of her soloists, Cleo Nordi, another St Petersburg ballerina, became her dedicated assistant; Nordi had left the Paris Opera Ballet in 1926 to join her company, and accompanied her on her second Australian tour to Adelaide, Brisbane and Sydney in 1929.[18] On the way back on board ship, Nordi married Pavlova's British musical director, Walford Hyden.[19] Nordi kept Pavlova's flame burning in London well into the 1970s, where she tutored hundreds of pupils, including many ballet stars.
Between 1912 and 1926, Pavlova made almost annual tours of the United States, traveling from coast to coast.
"A generation of dancers turned to the art because of her. She roused America as no one had done since Elssler. ... America became Pavlova-conscious and therefore ballet-conscious. Dance and passion, dance and drama were fused."[20]
Pavlova was introduced to audiences in the United States by Max Rabinoff during his time as managing director of the Boston Grand Opera Company from 1914 to 1917 and was featured there with her Russian Ballet Company during that period.[21]
In 1914, Pavlova performed in St. Louis, Missouri, after being engaged at the last minute by Hattie B. Gooding, who was responsible for a series of worthy musical attractions presented to the St. Louis public during the season of 1913–14. Gooding went to New York to arrange the attractions with the musical managers. Out of a long list, she selected those who represented the highest in their own special field and whom she felt sure St. Louisans would enjoy. The list began with Madame Louise Homer, prima donna contralto of the Metropolitan Grand Opera Co., followed by Josef Hoffman, pianist, and Anna Pavlova and the Russian ballet. For the last, the expenses were $5,500.00 ($142,278 in 2019 dollars) for two nights and the receipts $7,500.00 ($194,015 in 2019 dollars), netting a clear gain of $2,000.00 ($51,737 in 2019 dollars); her other evenings were proportionately successful financially. The advance sales were greater than in any other city in the United States. When Gooding engaged the Russian dancer at the last hour for two nights, the New York managers became dubious and anxiously rushed four special advance agents to assist her. On seeing the bookings for both nights, they quietly slipped back to New York, fully convinced of her ability to attract audiences in St. Louis, which had heretofore always been called "the worst show town" in the country.[22]
Victor Dandré, her manager and companion, asserted he was her husband in his biography of the dancer in 1932: Anna Pavlova: In Art & Life (Dandre 1932, author's foreword). He died on 5 February 1944 and was cremated at Golders Green Crematorium and his ashes placed below those of Anna.
Victor Dandré wrote of Pavlova's many charity dance performances and charitable efforts to support Russian orphans in post-World War I Paris
...who were in danger of finding themselves literally in the street. They were already suffering terrible privations and it seemed as though there would soon be no means whatever to carry on their education.[23]
Fifteen girls were adopted into a home Pavlova purchased near Paris at Saint-Cloud, overseen by the Comtesse de Guerne and supported by her performances and funds solicited by Pavlova, including many small donations from members of the Camp Fire Girls of America who made her an honorary member.[24]
During her life, she had many pets, including a Siamese cat, various dogs, and many kinds of birds, among them swans.[25] Dandré indicated she was a lifelong lover of animals, as evidenced by the photographic portraits she sat for, which often included an animal she loved. A formal studio portrait was made of her with Jack, her favorite swan.[25]
While travelling from Paris to The Hague, Pavlova became very ill, and worsened on her arrival in The Hague. She sent to Paris for her personal physician, Dr Zalewski, to attend her.[26] She was told that she had pneumonia and required an operation, and that she would never be able to dance again if she went ahead with it. She refused to have the surgery, saying "If I can't dance, then I'd rather be dead." She died of pleurisy in the bedroom next to the Japanese Salon of the Hotel Des Indes in The Hague, twenty days short of her 50th birthday.
Victor Dandré wrote that Anna Pavlova died a half hour past midnight on Friday, January 23, 1931, with her maid Marguerite Létienne, Dr Zalewski, and himself at her bedside. Her last words were, "Get my 'Swan' costume ready."[27] Victor and Marguerite dressed her body in her favourite beige lace dress and placed her in a coffin with a sprig of lilac. At 7 am, a Russian Orthodox priest arrived to say prayers over her body. At 7:30 am, her coffin was taken to the mortuary chapel attached to the Catholic hospital in The Hague.[26]
In accordance with old ballet tradition, on the day she was to have next performed, the show went on, as scheduled, with a single spotlight circling an empty stage where she would have been. Memorial services were held in the Russian Orthodox Church in London. Anna Pavlova was cremated, and her ashes placed in a columbarium at Golders Green Crematorium, where her urn was adorned with her ballet shoes (which have since been stolen).
Pavlova's ashes have been a source of much controversy, following attempts by Valentina Zhilenkova and Moscow mayor Yury Luzhkov to have them flown to Moscow for interment in the Novodevichy Cemetery. These attempts were based on claims that it was Pavlova's dying wish that her ashes be returned to Russia following the fall of the Soviet Union. These claims were later found to be false, as there is no evidence to suggest that this was her wish at all. The only documentary evidence that suggests that such a move would be possible is in the will of Pavlova's husband, who stipulated that, if Russian authorities agreed to such a move and treated her remains with proper reverence, then the crematorium caretakers should agree to it. Despite this clause, the will does not contain a formal request or plans for a posthumous journey to Russia.
The most recent attempt to move Pavlova's remains to Russia came in 2001. Golders Green Crematorium had made arrangements for them to be flown to Russia for interment on 14 March 2001, in a ceremony to be attended by various Russian dignitaries. This plan was later abandoned after Russian authorities withdrew permission for the move. It was later revealed that neither Pavlova's family nor the Russian Government had sanctioned the move and that they had agreed the remains should stay in London.[28][29]
Pavlova inspired the choreographer Frederick Ashton when, as a boy of 13, he saw her dance in the Municipal Theater in Lima, Peru.
The Pavlova dessert is believed to have been created in honour of the dancer in Wellington during her tour of New Zealand and Australia in the 1920s. The nationality of its creator has been a source of argument between the two nations for many years.
The Jarabe Tapatío, known in English as the 'Mexican Hat Dance', gained popularity outside of Mexico when Pavlova created a staged version in pointe shoes, for which she was showered with hats by her adoring Mexican audiences. Afterward, in 1924, the Jarabe Tapatío was proclaimed Mexico's national dance.
In 1980, Igor Carl Faberge licensed a collection of 8-inch full lead crystal wine glasses to commemorate the centenary of Pavlova's birth. The glasses were crafted in Japan under the supervision of The Franklin Mint. A frosted image of Anna Pavlova appears in the stem of each glass. Originally, each set contained 12 glasses.
Pavlova's life was depicted in the 1983 film Anna Pavlova.
Pavlova's dances inspired many artworks of the Irish painter John Lavery. The critic of The Observer wrote on 16 April 1911: 'Mr. Lavery's portrait of the Russian dancer Anna Pavlova, caught in a moment of graceful, weightless movement ... Her miraculous, feather-like flight, which seems to defy the law of gravitation'.[30][31]
A McDonnell Douglas MD-11 of the Dutch airline KLM, with the registration PH-KCH, carried her name. It was delivered on August 31, 1995.
Anna Pavlova appears as a character in Rosario Ferre's novel Flight of the Swan.
Anna Pavlova appears as a character in the fourth episode of the British series Mr Selfridge, played by real-life ballerina Natalia Kremen.
Anna Pavlova was portrayed by Maria Tallchief in the 1952 movie "Million Dollar Mermaid".
Her feet were extremely arched, so she strengthened her pointe shoe by adding a piece of hard leather on the soles for support and flattening the box of the shoe. At the time, many considered this "cheating", for a ballerina of the era was taught that she, not her shoes, must hold her weight en pointe. In Pavlova's case, this was extremely difficult, as the shape of her feet required her to balance her weight on her big toes. Her solution became, over time, the precursor of the modern pointe shoe, as pointe work became less painful and easier for curved feet. According to Margot Fonteyn's biography, Pavlova did not like the way her invention looked in photographs, so she would remove it or have the photographs altered so that it appeared she was using a normal pointe shoe.[32]
At the turn of the twentieth century, the Imperial Ballet began a project that notated much of its repertory in the Stepanov method of choreographic notation. Most of the notated choreographies were recorded while dancers were being taken through rehearsals. After the revolution of 1917, this collection of notation was taken out of Russia by the Imperial Ballet's former régisseur Nicholas Sergeyev, who utilized these documents to stage such works as The Nutcracker, The Sleeping Beauty and Swan Lake, as well as Marius Petipa's definitive versions of Giselle and of Coppélia for the Paris Opéra Ballet and the Vic-Wells Ballet of London (precursor of the Royal Ballet). The productions of these works formed the foundation on which all subsequent versions would be based to one extent or another. Eventually, these notations were acquired by Harvard University, and are now part of the cache of materials relating to the Imperial Ballet known as the Sergeyev Collection, which includes not only the notated ballets but also rehearsal scores as used by the company at the turn of the twentieth century.
The notations of Giselle and the full-length Paquita were recorded circa 1901–1902, while Marius Petipa himself took Anna Pavlova through rehearsals. Pavlova is also included in some of the other notated choreographies when she participated in performances as a soloist. Several of the violin or piano reductions used as rehearsal scores reflect the variations that Pavlova chose to dance in a particular performance, since, at that time, classical variations were often performed ad libitum, i.e. at the dancer's choice. One variation in particular, composed by Riccardo Drigo for Pavlova's performance in Petipa's ballet Le Roi candaule and featuring a solo harp, was performed by Pavlova in several ballets. This variation is still performed in modern times in the Mariinsky Ballet's staging of the Paquita grand pas classique.
Pavlova's repertoire includes the following roles:
"Anna Pavlova as a Bacchante", by Sir John Lavery
Stained glass window entitled "El Jarabe Tapatio"
The Butterfly (Costume Design by Léon Bakst for Anna Pavlova), Museum of Fine Arts, Boston
London, Victoria Palace Theatre, rooftop statue of Anna Pavlova
Pavlova in "La Fille mal gardée", 1912
Anna Pavlova in Paris, 1920s
Pavlova in "The Dying Swan"
en/2580.html.txt
ADDED
The diff for this file is too large to render.
See raw diff
en/2581.html.txt
ADDED
The diff for this file is too large to render.
See raw diff
en/2582.html.txt
ADDED
The diff for this file is too large to render.
See raw diff
en/2583.html.txt
ADDED
@@ -0,0 +1,147 @@
"Those who cannot remember the past are condemned to repeat it."
—George Santayana
History (from Greek ἱστορία, historia, meaning 'inquiry; knowledge acquired by investigation')[2] is the study of the past.[3][4] Events occurring before the invention of writing systems are considered prehistory. "History" is an umbrella term that relates to past events as well as the memory, discovery, collection, organization, presentation, and interpretation of information about these events. Scholars who focus on history are called historians. The historian's role is to place the past in context, using sources from moments and events, and filling in the gaps to the best of their ability.[5] Written documents are not the only sources historians use to develop their understanding of the past. They also use material objects, oral accounts, ecological markers, art, and artifacts as historical sources.
History also includes the academic discipline which uses narrative to describe, examine, question, and analyze a sequence of past events, and to investigate the patterns of cause and effect that are related to them.[6][7] Historians seek to understand and represent the past through narratives. They often debate which narrative best explains an event, as well as the significance of different causes and effects. Historians also debate the nature of history and its usefulness by discussing the study of the discipline as an end in itself and as a way of providing "perspective" on the problems of the present.[6][8][9][10]
Stories common to a particular culture, but not supported by external sources (such as the tales surrounding King Arthur), are usually classified as cultural heritage or legends.[11][12] History differs from myth in that it is supported by evidence. However, ancient influences have helped spawn variant interpretations of the nature of history which have evolved over the centuries and continue to change today. The modern study of history is wide-ranging, and includes the study of specific regions and the study of certain topical or thematic elements of historical investigation. History is often taught as part of primary and secondary education, and the academic study of history is a major discipline in university studies.
Herodotus, a 5th-century BC Greek historian, is often considered (within the Western tradition) to be the "father of history," or, by some, the "father of lies." Along with his contemporary Thucydides, he helped form the foundations for the modern study of human history. Their works continue to be read today, and the gap between the culture-focused Herodotus and the military-focused Thucydides remains a point of contention or approach in modern historical writing. In East Asia, a state chronicle, the Spring and Autumn Annals, was known to be compiled from as early as 722 BC, although only 2nd-century BC texts have survived.
The word history comes from the Ancient Greek ἱστορία[13] (historía), meaning 'inquiry', 'knowledge from inquiry', or 'judge'. It was in that sense that Aristotle used the word in his History of Animals.[14] The ancestor word ἵστωρ is attested early on in Homeric Hymns, Heraclitus, the Athenian ephebes' oath, and in Boiotic inscriptions (in a legal sense, either 'judge' or 'witness', or similar). The Greek word was borrowed into Classical Latin as historia, meaning "investigation, inquiry, research, account, description, written account of past events, writing of history, historical narrative, recorded knowledge of past events, story, narrative". History was borrowed from Latin (possibly via Old Irish or Old Welsh) into Old English as stær ('history, narrative, story'), but this word fell out of use in the late Old English period.[15] Meanwhile, as Latin became Old French (and Anglo-Norman), historia developed into forms such as istorie, estoire, and historie, with new developments in the meaning: "account of the events of a person's life (beginning of the 12th century), chronicle, account of events as relevant to a group of people or people in general (1155), dramatic or pictorial representation of historical events (c. 1240), body of knowledge relative to human evolution, science (c. 1265), narrative of real or imaginary events, story (c. 1462)".[15]
It was from Anglo-Norman that history was borrowed into Middle English, and this time the loan stuck. It appears in the 13th-century Ancrene Wisse, but seems to have become a common word in the late 14th century, with an early attestation appearing in John Gower's Confessio Amantis of the 1390s (VI.1383): "I finde in a bok compiled | To this matiere an old histoire, | The which comth nou to mi memoire". In Middle English, the meaning of history was "story" in general. The restriction to the meaning "the branch of knowledge that deals with past events; the formal record or study of past events, esp. human affairs" arose in the mid-15th century.[15] With the Renaissance, older senses of the word were revived, and it was in the Greek sense that Francis Bacon used the term in the late 16th century, when he wrote about "Natural History". For him, historia was "the knowledge of objects determined by space and time", that sort of knowledge provided by memory (while science was provided by reason, and poetry was provided by fantasy).[16]
In an expression of the linguistic synthetic vs. analytic/isolating dichotomy, English, like Chinese (史 vs. 诌), now designates separate words for human history and storytelling in general. In modern German, French, and most Germanic and Romance languages, which are solidly synthetic and highly inflected, the same word is still used to mean both 'history' and 'story'. Historian in the sense of a "researcher of history" is attested from 1531. In all European languages, the substantive history is still used to mean both "what happened with men" and "the scholarly study of the happened", the latter sense sometimes distinguished with a capital letter, or the word historiography.[14] The adjective historical is attested from 1661, and historic from 1669.[17]
Historians write in the context of their own time, and with due regard to the current dominant ideas of how to interpret the past, and sometimes write to provide lessons for their own society. In the words of Benedetto Croce, "All history is contemporary history". History is facilitated by the formation of a "true discourse of past" through the production of narrative and analysis of past events relating to the human race.[18] The modern discipline of history is dedicated to the institutional production of this discourse.
All events that are remembered and preserved in some authentic form constitute the historical record.[19] The task of historical discourse is to identify the sources which can most usefully contribute to the production of accurate accounts of the past. Therefore, the constitution of the historian's archive is a result of circumscribing a more general archive by invalidating the usage of certain texts and documents (by falsifying their claims to represent the "true past"). Part of the historian's role is to skillfully and objectively utilize the vast amount of sources from the past, most often found in the archives. The process of creating a narrative inevitably generates a silence as historians remember or emphasize different events of the past.[20]
The study of history has sometimes been classified as part of the humanities and at other times as part of the social sciences.[21] It can also be seen as a bridge between those two broad areas, incorporating methodologies from both. Some individual historians strongly support one or the other classification.[22] In the 20th century, French historian Fernand Braudel revolutionized the study of history, by using such outside disciplines as economics, anthropology, and geography in the study of global history.
Traditionally, historians have recorded events of the past, either in writing or by passing on an oral tradition, and have attempted to answer historical questions through the study of written documents and oral accounts. From the beginning, historians have also used such sources as monuments, inscriptions, and pictures. In general, the sources of historical knowledge can be separated into three categories: what is written, what is said, and what is physically preserved, and historians often consult all three.[23] But writing is the marker that separates history from what comes before.
Archaeology is a discipline that is especially helpful in dealing with buried sites and objects, which, once unearthed, contribute to the study of history. But archaeology rarely stands alone. It uses narrative sources to complement its discoveries. However, archaeology is constituted by a range of methodologies and approaches which are independent from history; that is to say, archaeology does not "fill the gaps" within textual sources. Indeed, "historical archaeology" is a specific branch of archaeology, often contrasting its conclusions against those of contemporary textual sources. For example, Mark Leone, the excavator and interpreter of historical Annapolis, Maryland, USA, has sought to understand the contradiction between textual documents and the material record, demonstrating the possession of slaves and the inequalities of wealth apparent via the study of the total historical environment, despite the ideology of "liberty" inherent in written documents at this time.
There are varieties of ways in which history can be organized, including chronologically, culturally, territorially, and thematically. These divisions are not mutually exclusive, and significant intersections are often present. It is possible for historians to concern themselves with both the very specific and the very general, although the modern trend has been toward specialization. The area called Big History resists this specialization, and searches for universal patterns or trends. History has often been studied with some practical or theoretical aim, but also may be studied out of simple intellectual curiosity.[24]
The history of the world is the memory of the past experience of Homo sapiens sapiens around the world, as that experience has been preserved, largely in written records. By "prehistory", historians mean the recovery of knowledge of the past in an area where no written records exist, or where the writing of a culture is not understood. By studying painting, drawings, carvings, and other artifacts, some information can be recovered even in the absence of a written record. Since the 20th century, the study of prehistory has been considered essential to avoid history's implicit exclusion of certain civilizations, such as those of Sub-Saharan Africa and pre-Columbian America. Historians in the West have been criticized for focusing disproportionately on the Western world.[25] In 1961, British historian E. H. Carr wrote:
The line of demarcation between prehistoric and historical times is crossed when people cease to live only in the present, and become consciously interested both in their past and in their future. History begins with the handing down of tradition; and tradition means the carrying of the habits and lessons of the past into the future. Records of the past begin to be kept for the benefit of future generations.[26]
This definition includes within the scope of history the strong interests of peoples, such as Indigenous Australians and New Zealand Māori, in the past, and the oral records maintained and transmitted to succeeding generations, even before their contact with European civilization.
Historiography has a number of related meanings. Firstly, it can refer to how history has been produced: the story of the development of methodology and practices (for example, the move from short-term biographical narrative towards long-term thematic analysis). Secondly, it can refer to what has been produced: a specific body of historical writing (for example, "medieval historiography during the 1960s" means "Works of medieval history written during the 1960s"). Thirdly, it may refer to why history is produced: the Philosophy of history. As a meta-level analysis of descriptions of the past, this third conception can relate to the first two in that the analysis usually focuses on the narratives, interpretations, world view, use of evidence, or method of presentation of other historians. Professional historians also debate the question of whether history can be taught as a single coherent narrative or a series of competing narratives.[27][28]
The following questions are used by historians in modern work.
When was the source, written or unwritten, produced (date)?
Where was it produced (localization)?
By whom was it produced (authorship)?
From what pre-existing material was it produced (analysis)?
In what original form was it produced (integrity)?
What is the evidential value of its contents (credibility)?
The first four are known as historical criticism; the fifth, textual criticism; and, together, external criticism. The sixth and final inquiry about a source is called internal criticism.
The historical method comprises the techniques and guidelines by which historians use primary sources and other evidence to research and then to write history.
Herodotus of Halicarnassus (484 BC – ca. 425 BC)[29] has generally been acclaimed as the "father of history". However, his contemporary Thucydides (c. 460 BC – c. 400 BC) is credited with having first approached history with a well-developed historical method in his work the History of the Peloponnesian War. Thucydides, unlike Herodotus, regarded history as being the product of the choices and actions of human beings rather than as the result of divine intervention, and looked at cause and effect (though Herodotus was not wholly committed to this idea himself).[29] In his historical method, Thucydides emphasized chronology, a nominally neutral point of view, and that the human world was the result of the actions of human beings. Greek historians also viewed history as cyclical, with events regularly recurring.[30]
There were historical traditions and sophisticated use of historical method in ancient and medieval China. The groundwork for professional historiography in East Asia was established by the Han dynasty court historian known as Sima Qian (145–90 BC), author of the Records of the Grand Historian (Shiji). For the quality of his written work, Sima Qian is posthumously known as the Father of Chinese historiography. Chinese historians of subsequent dynastic periods in China used his Shiji as the official format for historical texts, as well as for biographical literature.[citation needed]
Saint Augustine was influential in Christian and Western thought at the beginning of the medieval period. Through the Medieval and Renaissance periods, history was often studied through a sacred or religious perspective. Around 1800, German philosopher and historian Georg Wilhelm Friedrich Hegel brought philosophy and a more secular approach to historical study.[24]
In the preface to his book, the Muqaddimah (1377), the Arab historian and early sociologist, Ibn Khaldun, warned of seven mistakes that he thought that historians regularly committed. In this criticism, he approached the past as strange and in need of interpretation. The originality of Ibn Khaldun was to claim that the cultural difference of another age must govern the evaluation of relevant historical material, to distinguish the principles according to which it might be possible to attempt the evaluation, and lastly, to feel the need for experience, in addition to rational principles, in order to assess a culture of the past. Ibn Khaldun often criticized "idle superstition and uncritical acceptance of historical data." As a result, he introduced a scientific method to the study of history, and he often referred to it as his "new science".[31] His historical method also laid the groundwork for the observation of the role of state, communication, propaganda and systematic bias in history,[32] and he is thus considered to be the "father of historiography"[33][34] or the "father of the philosophy of history".[35]
In the West, historians developed modern methods of historiography in the 17th and 18th centuries, especially in France and Germany. In 1851, Herbert Spencer summarized these methods:
From the successive strata of our historical deposits, they [Historians] diligently gather all the highly colored fragments, pounce upon everything that is curious and sparkling and chuckle like children over their glittering acquisitions; meanwhile the rich veins of wisdom that ramify amidst this worthless debris, lie utterly neglected. Cumbrous volumes of rubbish are greedily accumulated, while those masses of rich ore, that should have been dug out, and from which golden truths might have been smelted, are left untaught and unsought[36]
By the "rich ore" Spencer meant a scientific theory of history. Meanwhile, Henry Thomas Buckle expressed a dream of history one day becoming a science:
In regard to nature, events apparently the most irregular and capricious have been explained and have been shown to be in accordance with certain fixed and universal laws. This has been done because men of ability and, above all, men of patient, untiring thought have studied events with the view of discovering their regularity, and if human events were subject to a similar treatment, we have every right to expect similar results.[37]
Contrary to Buckle's dream, the 19th-century historian with the greatest influence on methods was Leopold von Ranke in Germany. He limited history to “what really happened” and by this directed the field further away from science. For Ranke, historical data should be collected carefully, examined objectively and put together with critical rigor. But these procedures “are merely the prerequisites and preliminaries of science. The heart of science is searching out order and regularity in the data being examined and in formulating generalizations or laws about them.”[38]
As historians like Ranke and many who followed him have pursued it, history is not a science. If the historian tells us that, given the manner in which he practices his craft, it cannot be considered a science, we must take him at his word. If he is not doing science, then, whatever else he is doing, he is not doing science. The traditional historian is thus no scientist, and history, as conventionally practiced, is not a science.[39]
In the 20th century, academic historians moved away from epic nationalistic narratives, which often tended to glorify the nation or great men, toward more objective and complex analyses of social and intellectual forces. A major trend of historical methodology in the 20th century was a tendency to treat history more as a social science than as an art, which traditionally had been the case. Some of the leading advocates of history as a social science were a diverse collection of scholars which included Fernand Braudel, E. H. Carr, Fritz Fischer, Emmanuel Le Roy Ladurie, Hans-Ulrich Wehler, Bruce Trigger, Marc Bloch, Karl Dietrich Bracher, Peter Gay, Robert Fogel, Lucien Febvre and Lawrence Stone. Many of the advocates of history as a social science were or are noted for their multi-disciplinary approach. Braudel combined history with geography, Bracher history with political science, Fogel history with economics, Gay history with psychology, and Trigger history with archaeology, while Wehler, Bloch, Fischer, Stone, Febvre and Le Roy Ladurie have in varying and differing ways amalgamated history with sociology, geography, anthropology, and economics. Nevertheless, these multidisciplinary approaches failed to produce a theory of history. So far only one theory of history has come from the pen of a professional historian.[40] Whatever other theories of history we have, they were written by experts from other fields (for example, the Marxian theory of history). More recently, the field of digital history has begun to address ways of using computer technology to pose new questions to historical data and generate digital scholarship.
In sincere opposition to the claims of history as a social science, historians such as Hugh Trevor-Roper, John Lukacs, Donald Creighton, Gertrude Himmelfarb and Gerhard Ritter argued that the key to the historians' work was the power of the imagination, and hence contended that history should be understood as an art. French historians associated with the Annales School introduced quantitative history, using raw data to track the lives of typical individuals, and were prominent in the establishment of cultural history (cf. histoire des mentalités). Intellectual historians such as Herbert Butterfield, Ernst Nolte and George Mosse have argued for the significance of ideas in history. American historians, motivated by the civil rights era, focused on formerly overlooked ethnic, racial, and socio-economic groups. Another genre of social history to emerge in the post-WWII era was Alltagsgeschichte (History of Everyday Life). Scholars such as Martin Broszat, Ian Kershaw and Detlev Peukert sought to examine what everyday life was like for ordinary people in 20th-century Germany, especially in the Nazi period.
Marxist historians such as Eric Hobsbawm, E. P. Thompson, Rodney Hilton, Georges Lefebvre, Eugene Genovese, Isaac Deutscher, C. L. R. James, Timothy Mason, Herbert Aptheker, Arno J. Mayer and Christopher Hill have sought to validate Karl Marx's theories by analyzing history from a Marxist perspective. In response to the Marxist interpretation of history, historians such as François Furet, Richard Pipes, J. C. D. Clark, Roland Mousnier, Henry Ashby Turner and Robert Conquest have offered anti-Marxist interpretations of history. Feminist historians such as Joan Wallach Scott, Claudia Koonz, Natalie Zemon Davis, Sheila Rowbotham, Gisela Bock, Gerda Lerner, Elizabeth Fox-Genovese, and Lynn Hunt have argued for the importance of studying the experience of women in the past. In recent years, postmodernists have challenged the validity and need for the study of history on the basis that all history is based on the personal interpretation of sources. In his 1997 book In Defence of History, Richard J. Evans defended the worth of history. Another defence of history from post-modernist criticism was the Australian historian Keith Windschuttle's 1994 book, The Killing of History.
Today, most historians begin their research process in the archives, on either a physical or digital platform. They often propose an argument and use their research to support it. John H. Arnold proposed that history is an argument, which creates the possibility of creating change.[5] Digital information companies, such as Google, have sparked controversy over the role of internet censorship in information access.[41]
The Marxist theory of historical materialism theorises that society is fundamentally determined by the material conditions at any given time – in other words, the relationships which people have with each other in order to fulfill basic needs such as feeding, clothing and housing themselves and their families.[42] Overall, Marx and Engels claimed to have identified five successive stages of the development of these material conditions in Western Europe.[43] Marxist historiography was once orthodoxy in the Soviet Union, but since the collapse of communism there in 1991, Mikhail Krom says it has been reduced to the margins of scholarship.[44]
Many historians believe that the production of history is embedded with bias because events and known facts in history can be interpreted in a variety of ways. Constantin Fasolt suggested that history is linked to politics by the practice of silence itself.[45] “A second common view of the link between history and politics rests on the elementary observation that historians are often influenced by politics.”[45] According to Michel-Rolph Trouillot, the historical process is rooted in the archives, therefore silences, or parts of history that are forgotten, may be an intentional part of a narrative strategy that dictates how areas of history are remembered.[20] Historical omissions can occur in many ways and can have a profound effect on historical records. Information can also purposely be excluded or left out accidentally. Historians have coined multiple terms that describe the act of omitting historical information, including: “silencing,”[20] “selective memory,”[46] and erasures.[47] Gerda Lerner, a twentieth century historian who focused much of her work on historical omissions involving women and their accomplishments, explained the negative impact that these omissions had on minority groups.[46]
Environmental historian William Cronon proposed three ways to combat bias and ensure authentic and accurate narratives: narratives must not contradict known fact, they must make ecological sense (specifically for environmental history), and published work must be reviewed by the scholarly community and other historians to ensure accountability.[47]
These are approaches to history; not listed are histories of other fields, such as history of science, history of mathematics and history of philosophy.
Historical study often focuses on events and developments that occur in particular blocks of time. Historians give these periods of time names in order to allow "organising ideas and classificatory generalisations" to be used by historians.[48] The names given to a period can vary with geographical location, as can the dates of the beginning and end of a particular period. Centuries and decades are commonly used periods and the time they represent depends on the dating system used. Most periods are constructed retrospectively and so reflect value judgments made about the past. The way periods are constructed and the names given to them can affect the way they are viewed and studied.[49]
The field of history generally leaves prehistory to the archaeologists, who have entirely different sets of tools and theories. The usual method for periodisation of the distant prehistoric past in archaeology is to rely on changes in material culture and technology, such as the Stone Age, Bronze Age and Iron Age and their sub-divisions, also based on different styles of material remains. Here prehistory is divided into a series of "chapters" so that periods in history could unfold not only in a relative chronology but also in a narrative chronology.[50] This narrative content could be in the form of functional-economic interpretation. There are periodisations, however, that do not have this narrative aspect, relying largely on relative chronology and thus devoid of any specific meaning.
Despite the development over recent decades of the ability through radiocarbon dating and other scientific methods to give actual dates for many sites or artefacts, these long-established schemes seem likely to remain in use. In many cases neighbouring cultures with writing have left some history of cultures without it, which may be used. Periodisation, however, is not viewed as a perfect framework with one account explaining that "cultural changes do not conveniently start and stop (combinedly) at periodisation boundaries" and that different trajectories of change are also needed to be studied in their own right before they get intertwined with cultural phenomena.[51]
Particular geographical locations can form the basis of historical study, for example, continents, countries, and cities. Understanding why historic events took place is important. To do this, historians often turn to geography. According to Jules Michelet in his book Histoire de France (1833), "without geographical basis, the people, the makers of history, seem to be walking on air."[52] Weather patterns, the water supply, and the landscape of a place all affect the lives of the people who live there. For example, to explain why the ancient Egyptians developed a successful civilization, studying the geography of Egypt is essential. Egyptian civilization was built on the banks of the Nile River, which flooded each year, depositing soil on its banks. The rich soil could help farmers grow enough crops to feed the people in the cities. That meant not everyone had to farm, so some people could perform other jobs that helped develop the civilization. There is also the case of climate, which historians like Ellsworth Huntington and Allen Semple cited as a crucial influence on the course of history and racial temperament.[53]
Military history concerns warfare, strategies, battles, weapons, and the psychology of combat. The "new military history" since the 1970s has been concerned with soldiers more than generals, with psychology more than tactics, and with the broader impact of warfare on society and culture.[54]
The history of religion has been a main theme for both secular and religious historians for centuries, and continues to be taught in seminaries and academe. Leading journals include Church History, The Catholic Historical Review, and History of Religions. Topics range widely from political and cultural and artistic dimensions, to theology and liturgy.[55] This subject studies religions from all regions and areas of the world where humans have lived.[56]
Social history, sometimes called the new social history, is the field that includes history of ordinary people and their strategies and institutions for coping with life.[57] In its "golden age" it was a major growth field in the 1960s and 1970s among scholars, and still is well represented in history departments. In two decades from 1975 to 1995, the proportion of professors of history in American universities identifying with social history rose from 31% to 41%, while the proportion of political historians fell from 40% to 30%.[58] In the history departments of British universities in 2007, of the 5723 faculty members, 1644 (29%) identified themselves with social history while political history came next with 1425 (25%).[59]
The "old" social history before the 1960s was a hodgepodge of topics without a central theme, and it often included political movements, like Populism, that were "social" in the sense of being outside the elite system. Social history was contrasted with political history, intellectual history and the history of great men. English historian G. M. Trevelyan saw it as the bridging point between economic and political history, reflecting that, "Without social history, economic history is barren and political history unintelligible."[60] While the field has often been viewed negatively as history with the politics left out, it has also been defended as "history with the people put back in."[61]
The chief subfields of social history include:
Cultural history replaced social history as the dominant form in the 1980s and 1990s. It typically combines the approaches of anthropology and history to look at language, popular cultural traditions and cultural interpretations of historical experience. It examines the records and narrative descriptions of past knowledge, customs, and arts of a group of people. How peoples constructed their memory of the past is a major topic.
Cultural history includes the study of art in society as well as the study of images and human visual production (iconography).[62]
Diplomatic history focuses on the relationships between nations, primarily regarding diplomacy and the causes of wars. More recently it looks at the causes of peace and human rights. It typically presents the viewpoints of the foreign office, and long-term strategic values, as the driving force of continuity and change in history. This type of political history is the study of the conduct of international relations between states or across state boundaries over time. Historian Muriel Chamberlain notes that after the First World War, "diplomatic history replaced constitutional history as the flagship of historical investigation, at once the most important, most exact and most sophisticated of historical studies."[63] She adds that after 1945, the trend reversed, allowing social history to replace it.
Although economic history has been well established since the late 19th century, in recent years academic studies have shifted more and more toward economics departments and away from traditional history departments.[64] Business history deals with the history of individual business organizations, business methods, government regulation, labour relations, and impact on society. It also includes biographies of individual companies, executives, and entrepreneurs. It is related to economic history; business history is most often taught in business schools.[65]
Environmental history is a new field that emerged in the 1980s to look at the history of the environment, especially in the long run, and the impact of human activities upon it.[66] It is an offshoot of the environmental movement, which was kickstarted by Rachel Carson's Silent Spring in the 1960s.
World history is the study of major civilizations over the last 3000 years or so. World history is primarily a teaching field, rather than a research field. It gained popularity in the United States,[67] Japan[68] and other countries after the 1980s with the realization that students need a broader exposure to the world as globalization proceeds.
It has led to highly controversial interpretations by Oswald Spengler and Arnold J. Toynbee, among others.
The World History Association has published the Journal of World History quarterly since 1990.[69] The H-World discussion list[70] serves as a network of communication among practitioners of world history, with discussions among scholars, announcements, syllabi, bibliographies and book reviews.
A people's history is a type of historical work which attempts to account for historical events from the perspective of common people. A people's history is the history of the world that is the story of mass movements and of the outsiders. Individuals or groups not included in the past in other types of writing about history are the primary focus, which includes the disenfranchised, the oppressed, the poor, the nonconformists, and the otherwise forgotten people. The authors are typically on the left and have a socialist model in mind, as in the approach of the History Workshop movement in Britain in the 1960s.[71]
Intellectual history and the history of ideas emerged in the mid-20th century, with the focus on the intellectuals and their books on the one hand, and on the other the study of ideas as disembodied objects with a career of their own.[72][73]
Gender history is a subfield of History and Gender studies, which looks at the past from the perspective of gender. The outgrowth of gender history from women's history stemmed from many non-feminist historians dismissing the importance of women in history. According to Joan W. Scott, “Gender is a constitutive element of social relationships based on perceived differences between the sexes, and gender is a primary way of signifying relations of power,”[74] meaning that gender historians study the social effects of perceived differences between the sexes and how all genders utilize allotted power in societal and political structures. Despite being a relatively new field, gender history has had a significant effect on the general study of history. Gender history traditionally differs from women's history in its inclusion of all aspects of gender such as masculinity and femininity, and today's gender history extends to include people who identify outside of that binary.
Public history describes the broad range of activities undertaken by people with some training in the discipline of history who are generally working outside of specialized academic settings. Public history practice has quite deep roots in the areas of historic preservation, archival science, oral history, museum curatorship, and other related fields. The term itself began to be used in the U.S. and Canada in the late 1970s, and the field has become increasingly professionalized since that time. Some of the most common settings for public history are museums, historic homes and historic sites, parks, battlefields, archives, film and television companies, and all levels of government.[75]
LGBT history deals with the first recorded instances of same-sex love and sexuality in ancient civilizations, and involves the history of lesbian, gay, bisexual and transgender (LGBT) peoples and cultures around the world. A common feature of LGBTQ+ history is the focus on oral history and individual perspectives, in addition to traditional documents within the archives.
Professional and amateur historians discover, collect, organize, and present information about past events. They discover this information through archaeological evidence, written primary sources, verbal stories or oral histories, and other archival material. In lists of historians, historians can be grouped by order of the historical period in which they were writing, which is not necessarily the same as the period in which they specialized. Chroniclers and annalists, though they are not historians in the true sense, are also frequently included.
Since the 20th century, Western historians have disavowed the aspiration to provide the "judgement of history."[76] The goals of historical judgements or interpretations are separate from those of legal judgements, which need to be formulated quickly after the events and be final.[77] A related issue to that of the judgement of history is that of collective memory.
Pseudohistory is a term applied to texts which purport to be historical in nature but which depart from standard historiographical conventions in a way which undermines their conclusions.
It is closely related to deceptive historical revisionism. Works which draw controversial conclusions from new, speculative, or disputed historical evidence, particularly in the fields of national, political, military, and religious affairs, are often rejected as pseudohistory.
A major intellectual battle took place in Britain in the early twentieth century regarding the place of history teaching in the universities. At Oxford and Cambridge, scholarship was downplayed. Professor Charles Harding Firth, Oxford's Regius Professor of history in 1904, ridiculed the system as best suited to produce superficial journalists. The Oxford tutors, who had more votes than the professors, fought back in defence of their system, saying that it successfully produced Britain's outstanding statesmen, administrators, prelates, and diplomats, and that this mission was as valuable as training scholars. The tutors dominated the debate until after the Second World War. It forced aspiring young scholars to teach at outlying schools, such as Manchester University, where Thomas Frederick Tout was professionalizing the History undergraduate programme by introducing the study of original sources and requiring the writing of a thesis.[78][79]
In the United States, scholarship was concentrated at the major PhD-producing universities, while the large number of other colleges and universities focused on undergraduate teaching. A tendency in the 21st century was for the latter schools to increasingly demand scholarly productivity of their younger tenure-track faculty. Furthermore, universities have increasingly relied on inexpensive part-time adjuncts to do most of the classroom teaching.[80]
From the origins of national school systems in the 19th century, the teaching of history to promote national sentiment has been a high priority. In the United States after World War I, a strong movement emerged at the university level to teach courses in Western Civilization, so as to give students a common heritage with Europe. In the U.S. after 1980, attention increasingly moved toward teaching world history or requiring students to take courses in non-western cultures, to prepare students for life in a globalized economy.[81]
At the university level, historians debate the question of whether history belongs more to social science or to the humanities. Many view the field from both perspectives.
The teaching of history in French schools was influenced by the Nouvelle histoire as disseminated after the 1960s by Cahiers pédagogiques and Enseignement and other journals for teachers. Also influential was the Institut national de recherche et de documentation pédagogique (INRDP). Joseph Leif, the Inspector-general of teacher training, said pupils should learn about historians' approaches as well as facts and dates. Louis François, Dean of the History/Geography group in the Inspectorate of National Education, advised that teachers should provide historic documents and promote "active methods" which would give pupils "the immense happiness of discovery." Proponents said it was a reaction against the memorization of names and dates that characterized teaching and left the students bored. Traditionalists protested loudly that it was a postmodern innovation that threatened to leave the youth ignorant of French patriotism and national identity.[82]
In several countries history textbooks are tools to foster nationalism and patriotism, and give students the official narrative about national enemies.[83]
In many countries, history textbooks are sponsored by the national government and are written to put the national heritage in the most favourable light. For example, in Japan, mention of the Nanking Massacre has been removed from textbooks and the entire Second World War is given cursory treatment. Other countries have complained.[84] It was standard policy in communist countries to present only a rigid Marxist historiography.[85][86]
In the United States, textbooks published by the same company often differ in content from state to state.[87] An example of content that is represented differently in different regions of the country is the history of the Southern states, where slavery and the American Civil War are treated as controversial topics. McGraw-Hill Education, for example, was criticised for describing Africans brought to American plantations as "workers" instead of slaves in a textbook.[88]
Academic historians have often fought against the politicization of the textbooks, sometimes with success.[89][90]
In 21st-century Germany, the history curriculum is controlled by the 16 states, and is characterized not by superpatriotism but rather by an "almost pacifistic and deliberately unpatriotic undertone" and reflects "principles formulated by international organizations such as UNESCO or the Council of Europe, thus oriented towards human rights, democracy and peace." The result is that "German textbooks usually downplay national pride and ambitions and aim to develop an understanding of citizenship centered on democracy, progress, human rights, peace, tolerance and Europeanness."[91]
en/2584.html.txt
ADDED
@@ -0,0 +1,219 @@
Television (TV), sometimes shortened to tele or telly, is a telecommunication medium used for transmitting moving images in monochrome (black and white) or in color, and in two or three dimensions, together with sound. The term can refer to a television set, a television show, or the medium of television transmission. Television is a mass medium for advertising, entertainment, news, and sports.
Television became available in crude experimental forms in the late 1920s, but it would still be several years before the new technology would be marketed to consumers. After World War II, an improved form of black-and-white TV broadcasting became popular in the United States and Britain, and television sets became commonplace in homes, businesses, and institutions. During the 1950s, television was the primary medium for influencing public opinion.[1] In the mid-1960s, color broadcasting was introduced in the US and most other developed countries. The availability of multiple types of archival storage media such as Betamax and VHS tapes, high-capacity hard disk drives, DVDs, flash drives, high-definition Blu-ray Discs, and cloud digital video recorders has enabled viewers to watch pre-recorded material—such as movies—at home on their own time schedule. For many reasons, especially the convenience of remote retrieval, the storage of television and video programming now occurs on the cloud. At the end of the first decade of the 2000s, digital television transmissions greatly increased in popularity. Another development was the move from standard-definition television (SDTV) (576i, with 576 interlaced lines of resolution and 480i) to high-definition television (HDTV), which provides a resolution that is substantially higher. HDTV may be transmitted in various formats: 1080p, 1080i and 720p. Since 2010, with the invention of smart television, Internet television has increased the availability of television programs and movies via the Internet through streaming video services such as Netflix, Amazon Video, iPlayer and Hulu.
In 2013, 79% of the world's households owned a television set.[2] The replacement of early bulky, high-voltage cathode ray tube (CRT) screen displays with compact, energy-efficient, flat-panel alternative technologies such as LCDs (both fluorescent-backlit and LED), OLED displays, and plasma displays was a hardware revolution that began with computer monitors in the late 1990s. Most TV sets sold in the 2000s were flat-panel, mainly LEDs. Major manufacturers announced the discontinuation of CRT, DLP, plasma, and even fluorescent-backlit LCDs by the mid-2010s.[3][4] In the near future, LEDs are expected to be gradually replaced by OLEDs.[5] Also, major manufacturers have announced that they will increasingly produce smart TVs in the mid-2010s.[6][7][8] Smart TVs with integrated Internet and Web 2.0 functions became the dominant form of television by the late 2010s.[9]
Television signals were initially distributed only as terrestrial television using high-powered radio-frequency transmitters to broadcast the signal to individual television receivers. Alternatively, television signals are distributed by coaxial cable or optical fiber, satellite systems and, since the 2000s, via the Internet. Until the early 2000s, these were transmitted as analog signals, but a transition to digital television is expected to be completed worldwide by the late 2010s. A standard television set is composed of multiple internal electronic circuits, including a tuner for receiving and decoding broadcast signals. A visual display device which lacks a tuner is correctly called a video monitor rather than a television.
The word television comes from Ancient Greek τῆλε (tèle), meaning 'far', and Latin visio, meaning 'sight'. The first documented usage of the term dates back to 1900, when the Russian scientist Constantin Perskyi used it in a paper that he presented in French at the 1st International Congress of Electricity, which ran from 18 to 25 August 1900 during the International World Fair in Paris. The Anglicised version of the term is first attested in 1907, when it was still "...a theoretical system to transmit moving images over telegraph or telephone wires".[10] It was "...formed in English or borrowed from French télévision."[10] In the 19th century and early 20th century, other "...proposals for the name of a then-hypothetical technology for sending pictures over distance were telephote (1880) and televista (1904)."[10] The abbreviation "TV" is from 1948. The use of the term to mean "a television set" dates from 1941.[10] The use of the term to mean "television as a medium" dates from 1927.[10] The slang term "telly" is more common in the UK. The slang term "the tube" or the "boob tube" derives from the bulky cathode ray tube used on most TVs until the advent of flat-screen TVs. Another slang term for the TV is "idiot box".[11] Also, in the 1940s and throughout the 1950s, during the early rapid growth of television programming and television-set ownership in the United States, another slang term became widely used in that period and continues to be used today to distinguish productions originally created for broadcast on television from films developed for presentation in movie theaters.[12] The "small screen", as both a compound adjective and noun, became specific references to television, while the "big screen" was used to identify productions made for theatrical release.[12]
Facsimile transmission systems for still photographs pioneered methods of mechanical scanning of images in the early 19th century. Alexander Bain introduced the facsimile machine between 1843 and 1846. Frederick Bakewell demonstrated a working laboratory version in 1851.[citation needed] Willoughby Smith discovered the photoconductivity of the element selenium in 1873. As a 23-year-old German university student, Paul Julius Gottlieb Nipkow proposed and patented the Nipkow disk in 1884.[13] This was a spinning disk with a spiral pattern of holes in it, so each hole scanned a line of the image. Although he never built a working model of the system, variations of Nipkow's spinning-disk "image rasterizer" became exceedingly common.[14] Constantin Perskyi had coined the word television in a paper read to the International Electricity Congress at the International World Fair in Paris on 24 August 1900. Perskyi's paper reviewed the existing electromechanical technologies, mentioning the work of Nipkow and others.[15] However, it was not until 1907 that developments in amplification tube technology by Lee de Forest and Arthur Korn, among others, made the design practical.[16]
The first demonstration of the live transmission of images was by Georges Rignoux and A. Fournier in Paris in 1909. A matrix of 64 selenium cells, individually wired to a mechanical commutator, served as an electronic retina. In the receiver, a type of Kerr cell modulated the light and a series of variously angled mirrors attached to the edge of a rotating disc scanned the modulated beam onto the display screen. A separate circuit regulated synchronization. The 8x8 pixel resolution in this proof-of-concept demonstration was just sufficient to clearly transmit individual letters of the alphabet. An updated image was transmitted "several times" each second.[17]
In 1911, Boris Rosing and his student Vladimir Zworykin created a system that used a mechanical mirror-drum scanner to transmit, in Zworykin's words, "very crude images" over wires to the "Braun tube" (cathode ray tube or "CRT") in the receiver. Moving images were not possible because, in the scanner: "the sensitivity was not enough and the selenium cell was very laggy".[18]
In 1921, Edouard Belin sent the first image via radio waves with his belinograph.[citation needed]
By the 1920s, when amplification made television practical, Scottish inventor John Logie Baird employed the Nipkow disk in his prototype video systems. On 25 March 1925, Baird gave the first public demonstration of televised silhouette images in motion, at Selfridge's Department Store in London.[19] Since human faces had inadequate contrast to show up on his primitive system, he televised a ventriloquist's dummy named "Stooky Bill", whose painted face had higher contrast, talking and moving. On 26 January 1926, he demonstrated the transmission of the image of a face in motion by radio. This is widely regarded as the first television demonstration. The subject was Baird's business partner Oliver Hutchinson. Baird's system used the Nipkow disk for both scanning the image and displaying it. A bright light shining through a spinning Nipkow disk set with lenses projected a bright spot of light which swept across the subject. A selenium photoelectric tube detected the light reflected from the subject and converted it into a proportional electrical signal. This was transmitted by AM radio waves to a receiver unit, where the video signal was applied to a neon light behind a second Nipkow disk rotating in synchronization with the first. The brightness of the neon lamp was varied in proportion to the brightness of each spot on the image. As each hole in the disk passed by, one scan line of the image was reproduced. Baird's disk had 30 holes, producing an image with only 30 scan lines, just enough to recognize a human face. In 1927, Baird transmitted a signal over 438 miles (705 km) of telephone line between London and Glasgow.[citation needed]
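The spiral-hole geometry described above is simple enough to sketch in code. The following Python fragment is an illustrative sketch only (the 30-hole count matches Baird's disk, but the dimensions are arbitrary and not taken from his apparatus); it shows how laying holes on a spiral makes each hole cross the image aperture at a different radius, so one revolution of the disk scans one complete frame, one line per hole:

import math

HOLES = 30            # Baird's disk had 30 holes, one per scan line
DISK_RADIUS = 100.0   # arbitrary units
LINE_SPACING = 1.0    # radial step between consecutive holes

def hole_position(hole_index, disk_angle):
    """Centre of a hole when the disk is rotated by disk_angle radians.

    Holes are laid out on a spiral: each successive hole sits
    2*pi/HOLES further around the disk and LINE_SPACING closer to
    the centre, so each hole crosses the image aperture at a
    different radius, i.e. it scans a different line.
    """
    angle = disk_angle + 2 * math.pi * hole_index / HOLES
    radius = DISK_RADIUS - hole_index * LINE_SPACING
    return (radius * math.cos(angle), radius * math.sin(angle))

# One full revolution of the disk scans one complete 30-line frame:
# hole 0 traces line 0, hole 1 traces line 1, and so on.
for hole in range(HOLES):
    x, y = hole_position(hole, disk_angle=0.0)
    print(f"hole {hole:2d} scans line {hole:2d} at radius {math.hypot(x, y):.1f}")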
In 1928, Baird's company (Baird Television Development Company/Cinema Television) broadcast the first transatlantic television signal, between London and New York, and the first shore-to-ship transmission. In 1929, he became involved in the first experimental mechanical television service in Germany. In November of the same year, Baird and Bernard Natan of Pathé established France's first television company, Télévision-Baird-Natan. In 1931, he made the first outdoor remote broadcast, of The Derby.[20] In 1932, he demonstrated ultra-short-wave television. Baird's mechanical system reached a peak of 240 lines of resolution on BBC television broadcasts in 1936, though the mechanical system did not scan the televised scene directly. Instead, a 17.5 mm film was shot, rapidly developed, and then scanned while still wet.[citation needed]
An American inventor, Charles Francis Jenkins, also pioneered television. He published an article on "Motion Pictures by Wireless" in 1913, but it was not until December 1923 that he transmitted moving silhouette images for witnesses, and on 13 June 1925 he publicly demonstrated synchronized transmission of silhouette pictures. In 1925, Jenkins used the Nipkow disk to transmit the silhouette image of a toy windmill in motion, over a distance of 5 miles (8 km), from a naval radio station in Maryland to his laboratory in Washington, D.C., using a lensed disk scanner with a 48-line resolution.[21][22] He was granted U.S. Patent No. 1,544,156 (Transmitting Pictures over Wireless) on 30 June 1925 (filed 13 March 1922).[23]
Herbert E. Ives and Frank Gray of Bell Telephone Laboratories gave a dramatic demonstration of mechanical television on 7 April 1927. Their reflected-light television system included both small and large viewing screens. The small receiver had a 2-inch-wide by 2.5-inch-high screen (5 by 6 cm). The large receiver had a screen 24 inches wide by 30 inches high (60 by 75 cm). Both sets were capable of reproducing reasonably accurate, monochromatic, moving images. Along with the pictures, the sets received synchronized sound. The system transmitted images over two paths: first, a copper wire link from Washington to New York City, then a radio link from Whippany, New Jersey. Comparing the two transmission methods, viewers noted no difference in quality. Subjects of the telecast included Secretary of Commerce Herbert Hoover. A flying-spot scanner beam illuminated these subjects. The scanner that produced the beam had a 50-aperture disk. The disc revolved at a rate of 18 frames per second, capturing one frame about every 56 milliseconds. (Today's systems typically transmit 30 or 60 frames per second, or one frame every 33.3 or 16.7 milliseconds respectively.) Television historian Albert Abramson underscored the significance of the Bell Labs demonstration: "It was in fact the best demonstration of a mechanical television system ever made to this time. It would be several years before any other system could even begin to compare with it in picture quality."[24]
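The frame periods quoted follow directly from the reciprocal of the frame rate; a quick Python check (illustrative only):

for fps in (18, 30, 60):
    # Period in milliseconds is 1000 / (frames per second).
    print(f"{fps} frames/s -> {1000 / fps:.1f} ms per frame")
# 18 frames/s -> 55.6 ms per frame   (the Bell Labs disk, about 56 ms)
# 30 frames/s -> 33.3 ms per frame
# 60 frames/s -> 16.7 ms per frame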
In 1928, WRGB, then W2XB, was started as the world's first television station. It broadcast from the General Electric facility in Schenectady, NY. It was popularly known as "WGY Television". Meanwhile, in the Soviet Union, Léon Theremin had been developing a mirror-drum-based television, starting with a 16-line resolution in 1925, then 32 lines, and eventually 64 using interlacing in 1926. As part of his thesis, on 7 May 1926, he electrically transmitted, and then projected, near-simultaneous moving images on a 5-square-foot (0.46 m2) screen.[22]
By 1927, Theremin had achieved an image of 100 lines, a resolution that was not surpassed until May 1932 by RCA, with 120 lines.[25]
On 25 December 1926, Kenjiro Takayanagi demonstrated a television system with a 40-line resolution that employed a Nipkow disk scanner and CRT display at Hamamatsu Industrial High School in Japan. This prototype is still on display at the Takayanagi Memorial Museum in Shizuoka University, Hamamatsu Campus. His research in creating a production model was halted by the SCAP after World War II.[26]
Because only a limited number of holes could be made in the disks, and disks beyond a certain diameter became impractical, image resolution on mechanical television broadcasts was relatively low, ranging from about 30 lines up to 120 or so. Nevertheless, the image quality of 30-line transmissions steadily improved with technical advances, and by 1933 the UK broadcasts using the Baird system were remarkably clear.[27] A few systems ranging into the 200-line region also went on the air. Two of these were the 180-line system that Compagnie des Compteurs (CDC) installed in Paris in 1935, and the 180-line system that Peck Television Corp. started in 1935 at station VE9AK in Montreal.[28][29] The advancement of all-electronic television (including image dissectors and other camera tubes and cathode ray tubes for the reproducer) marked the beginning of the end for mechanical systems as the dominant form of television. Mechanical television, despite its inferior image quality and generally smaller picture, would remain the primary television technology until the 1930s. The last mechanical television broadcasts ended in 1939 at stations run by a handful of public universities in the United States.[citation needed]
In 1897, English physicist J. J. Thomson was able, in his three famous experiments, to deflect cathode rays, a fundamental function of the modern cathode ray tube (CRT). The earliest version of the CRT was invented by the German physicist Ferdinand Braun in 1897 and is also known as the "Braun" tube.[30][31] It was a cold-cathode diode, a modification of the Crookes tube, with a phosphor-coated screen. In 1906 the Germans Max Dieckmann and Gustav Glage produced raster images for the first time in a CRT.[32] In 1907, Russian scientist Boris Rosing used a CRT in the receiving end of an experimental video signal to form a picture. He managed to display simple geometric shapes onto the screen.[33]
In 1908, Alan Archibald Campbell-Swinton, fellow of the Royal Society (UK), published a letter in the scientific journal Nature in which he described how "distant electric vision" could be achieved by using a cathode ray tube, or Braun tube, as both a transmitting and receiving device.[34][35] He expanded on his vision in a speech given in London in 1911 and reported in The Times[36] and the Journal of the Röntgen Society.[37][38] In a letter to Nature published in October 1926, Campbell-Swinton also announced the results of some "not very successful experiments" he had conducted with G. M. Minchin and J. C. M. Stanton. They had attempted to generate an electrical signal by projecting an image onto a selenium-coated metal plate that was simultaneously scanned by a cathode ray beam.[39][40] These experiments were conducted before March 1914, when Minchin died,[41] but they were later repeated by two different teams in 1937, by H. Miller and J. W. Strange from EMI,[42] and by H. Iams and A. Rose from RCA.[43] Both teams succeeded in transmitting "very faint" images with Campbell-Swinton's original selenium-coated plate. Although others had experimented with using a cathode ray tube as a receiver, the concept of using one as a transmitter was novel.[44] The first cathode ray tube to use a hot cathode was developed by John B. Johnson (who gave his name to the term Johnson noise) and Harry Weiner Weinhart of Western Electric, and became a commercial product in 1922.[citation needed]
In 1926, Hungarian engineer Kálmán Tihanyi designed a television system utilizing fully electronic scanning and display elements and employing the principle of "charge storage" within the scanning (or "camera") tube.[45][46][47][48] The problem of low sensitivity to light, which resulted in low electrical output from transmitting or "camera" tubes, would be solved by the charge-storage technology Tihanyi had begun developing in 1924.[49] His solution was a camera tube that accumulated and stored electrical charges ("photoelectrons") within the tube throughout each scanning cycle. The device was first described in a patent application he filed in Hungary in March 1926 for a television system he dubbed "Radioskop".[50] After further refinements included in a 1928 patent application,[49] Tihanyi's patent was declared void in Great Britain in 1930,[51] so he applied for patents in the United States. Although his breakthrough would be incorporated into the design of RCA's "iconoscope" in 1931, the U.S. patent for Tihanyi's transmitting tube was not granted until May 1939; the patent for his receiving tube had been granted the previous October. Both patents had been purchased by RCA prior to their approval.[52][53] Charge storage remains a basic principle in the design of imaging devices for television to the present day.[50] On 25 December 1926, at Hamamatsu Industrial High School in Japan, Japanese inventor Kenjiro Takayanagi demonstrated a TV system with a 40-line resolution that employed a CRT display.[26] This was the first working example of a fully electronic television receiver. Takayanagi did not apply for a patent.[54]
On 7 September 1927, American inventor Philo Farnsworth's image dissector camera tube transmitted its first image, a simple straight line, at his laboratory at 202 Green Street in San Francisco.[55][56] By 3 September 1928, Farnsworth had developed the system sufficiently to hold a demonstration for the press. This is widely regarded as the first electronic television demonstration.[56] In 1929, the system was improved further by the elimination of a motor generator, so that his television system now had no mechanical parts.[57] That year, Farnsworth transmitted the first live human images with his system, including a three and a half-inch image of his wife Elma ("Pem") with her eyes closed (possibly due to the bright lighting required).[58]
Meanwhile, Vladimir Zworykin was also experimenting with the cathode ray tube to create and show images. While working for Westinghouse Electric in 1923, he began to develop an electronic camera tube. But in a 1925 demonstration, the image was dim, had low contrast, and poor definition, and was stationary.[59] Zworykin's imaging tube never got beyond the laboratory stage. But RCA, which acquired the Westinghouse patent, asserted that the patent for Farnsworth's 1927 image dissector was written so broadly that it would exclude any other electronic imaging device. Thus RCA, on the basis of Zworykin's 1923 patent application, filed a patent interference suit against Farnsworth. The U.S. Patent Office examiner disagreed in a 1935 decision, finding priority of invention for Farnsworth against Zworykin. Farnsworth claimed that Zworykin's 1923 system would be unable to produce an electrical image of the type to challenge his patent. Zworykin received a patent in 1928 for a color transmission version of his 1923 patent application;[60] he also divided his original application in 1931.[61] Zworykin was unable or unwilling to introduce evidence of a working model of his tube that was based on his 1923 patent application. In September 1939, after losing an appeal in the courts, and determined to go forward with the commercial manufacturing of television equipment, RCA agreed to pay Farnsworth US$1 million over a ten-year period, in addition to license payments, to use his patents.[62][63]
In 1933, RCA introduced an improved camera tube that relied on Tihanyi's charge storage principle.[64] Dubbed the "Iconoscope" by Zworykin, the new tube had a light sensitivity of about 75,000 lux, and thus was claimed to be much more sensitive than Farnsworth's image dissector.[citation needed] However, Farnsworth had overcome his power problems with his image dissector through the invention of a unique "multipactor" device that he began work on in 1930 and demonstrated in 1931.[65][66] This small tube could amplify a signal reportedly to the 60th power or better[67] and showed great promise in all fields of electronics. Unfortunately, the multipactor wore out at an unsatisfactory rate.[68]
At the Berlin Radio Show in August 1931, Manfred von Ardenne gave a public demonstration of a television system using a CRT for both transmission and reception. However, Ardenne had not developed a camera tube, using the CRT instead as a flying-spot scanner to scan slides and film.[69] Philo Farnsworth gave the world's first public demonstration of an all-electronic television system, using a live camera, at the Franklin Institute of Philadelphia on 25 August 1934, and for ten days afterwards.[70][71] Mexican inventor Guillermo González Camarena also played an important role in early TV. His experiments with TV (known as telectroescopía at first) began in 1931 and led to a patent for the "trichromatic field sequential system" color television in 1940.[72] In Britain, the EMI engineering team led by Isaac Shoenberg applied in 1932 for a patent for a new device they dubbed "the Emitron",[73][74] which formed the heart of the cameras they designed for the BBC. On 2 November 1936, a 405-line broadcasting service employing the Emitron began at studios in Alexandra Palace, and transmitted from a specially built mast atop one of the Victorian building's towers. It alternated for a short time with Baird's mechanical system in adjoining studios, but was more reliable and visibly superior. This was the world's first regular "high-definition" television service.[75]
The original American iconoscope was noisy, had a high ratio of interference to signal, and ultimately gave disappointing results, especially when compared to the high definition mechanical scanning systems then becoming available.[76][77] The EMI team, under the supervision of Isaac Shoenberg, analyzed how the iconoscope (or Emitron) produces an electronic signal and concluded that its real efficiency was only about 5% of the theoretical maximum.[78][79] They solved this problem by developing, and patenting in 1934, two new camera tubes dubbed super-Emitron and CPS Emitron.[80][81][82] The super-Emitron was between ten and fifteen times more sensitive than the original Emitron and iconoscope tubes and, in some cases, this ratio was considerably greater.[78] It was used for outside broadcasting by the BBC, for the first time, on Armistice Day 1937, when the general public could watch on a television set as the King laid a wreath at the Cenotaph.[83] This was the first time that anyone had broadcast a live street scene from cameras installed on the roof of neighboring buildings, because neither Farnsworth nor RCA would do the same until the 1939 New York World's Fair.
In 1934, Zworykin shared some patent rights with the German licensee company Telefunken.[84] The "image iconoscope" ("Superikonoskop" in Germany) was produced as a result of the collaboration. This tube is essentially identical to the super-Emitron.[citation needed] The production and commercialization of the super-Emitron and image iconoscope in Europe were not affected by the patent war between Zworykin and Farnsworth, because Dieckmann and Hell had priority in Germany for the invention of the image dissector, having submitted a patent application for their Lichtelektrische Bildzerlegerröhre für Fernseher (Photoelectric Image Dissector Tube for Television) in Germany in 1925,[85] two years before Farnsworth did the same in the United States.[86] The image iconoscope (Superikonoskop) became the industrial standard for public broadcasting in Europe from 1936 until 1960, when it was replaced by the vidicon and plumbicon tubes. It represented the European tradition in electronic tubes, competing against the American tradition represented by the image orthicon.[87][88] The German company Heimann produced the Superikonoskop for the 1936 Berlin Olympic Games,[89][90] and later produced and commercialized it from 1940 to 1955;[91] finally, the Dutch company Philips produced and commercialized the image iconoscope and multicon from 1952 to 1958.[88][92]
American television broadcasting, at the time, consisted of a variety of markets in a wide range of sizes, each competing for programming and dominance with separate technology, until deals were made and standards agreed upon in 1941.[93] RCA, for example, used only Iconoscopes in the New York area, but Farnsworth Image Dissectors in Philadelphia and San Francisco.[94] In September 1939, RCA agreed to pay the Farnsworth Television and Radio Corporation royalties over the next ten years for access to Farnsworth's patents.[95] With this historic agreement in place, RCA integrated much of what was best about the Farnsworth Technology into their systems.[94] In 1941, the United States implemented 525-line television.[96][97] Electrical engineer Benjamin Adler played a prominent role in the development of television.[98][99]
The world's first 625-line television standard was designed in the Soviet Union in 1944 and became a national standard in 1946.[100] The first broadcast in 625-line standard occurred in Moscow in 1948.[101] The concept of 625 lines per frame was subsequently implemented in the European CCIR standard.[102] In 1936, Kálmán Tihanyi described the principle of plasma display, the first flat panel display system.[103][104]
Early electronic television sets were large and bulky, with analog circuits made of vacuum tubes. Following the invention of the first working transistor at Bell Labs, Sony founder Masaru Ibuka predicted in 1952 that the transition to electronic circuits made of transistors would lead to smaller and more portable television sets.[105] The first fully transistorized, portable solid-state television set was the 8-inch Sony TV8-301, developed in 1959 and released in 1960.[106][107] This began the transformation of television viewership from a communal viewing experience to a solitary viewing experience.[108] By 1960, Sony had sold over 4 million portable television sets worldwide.[109]
The basic idea of using three monochrome images to produce a color image had been experimented with almost as soon as black-and-white televisions had first been built. Although he gave no practical details, among the earliest published proposals for television was one by Maurice Le Blanc, in 1880, for a color system, including the first mentions in television literature of line and frame scanning.[110] Polish inventor Jan Szczepanik patented a color television system in 1897, using a selenium photoelectric cell at the transmitter and an electromagnet controlling an oscillating mirror and a moving prism at the receiver. But his system contained no means of analyzing the spectrum of colors at the transmitting end, and could not have worked as he described it.[111] Another inventor, Hovannes Adamian, also experimented with color television as early as 1907. The first color television project is claimed by him,[112] and was patented in Germany on 31 March 1908, patent No. 197183, then in Britain, on 1 April 1908, patent No. 7219,[113] in France (patent No. 390326) and in Russia in 1910 (patent No. 17912).[114]
Scottish inventor John Logie Baird demonstrated the world's first color transmission on 3 July 1928, using scanning discs at the transmitting and receiving ends with three spirals of apertures, each spiral with filters of a different primary color; and three light sources at the receiving end, with a commutator to alternate their illumination.[115] Baird also made the world's first color broadcast on 4 February 1938, sending a mechanically scanned 120-line image from Baird's Crystal Palace studios to a projection screen at London's Dominion Theatre.[116] Mechanically scanned color television was also demonstrated by Bell Laboratories in June 1929 using three complete systems of photoelectric cells, amplifiers, glow-tubes, and color filters, with a series of mirrors to superimpose the red, green, and blue images into one full color image.
The first practical hybrid system was again pioneered by John Logie Baird. In 1940 he publicly demonstrated a color television combining a traditional black-and-white display with a rotating colored disk. This device was very "deep", but was later improved with a mirror folding the light path into an entirely practical device resembling a large conventional console.[117] However, Baird was not happy with the design, and, as early as 1944, had commented to a British government committee that a fully electronic device would be better.
In 1939, Hungarian engineer Peter Carl Goldmark introduced an electro-mechanical system while at CBS, which contained an Iconoscope sensor. The CBS field-sequential color system was partly mechanical, with a disc made of red, blue, and green filters spinning inside the television camera at 1,200 rpm, and a similar disc spinning in synchronization in front of the cathode ray tube inside the receiver set.[118] The system was first demonstrated to the Federal Communications Commission (FCC) on 29 August 1940, and shown to the press on 4 September.[119][120][121][122]
CBS began experimental color field tests using film as early as 28 August 1940, and live cameras by 12 November.[120][123] NBC (owned by RCA) made its first field test of color television on 20 February 1941. CBS began daily color field tests on 1 June 1941.[124] These color systems were not compatible with existing black-and-white television sets, and, as no color television sets were available to the public at this time, viewing of the color field tests was restricted to RCA and CBS engineers and the invited press. The War Production Board halted the manufacture of television and radio equipment for civilian use from 22 April 1942 to 20 August 1945, limiting any opportunity to introduce color television to the general public.[125][126]
As early as 1940, Baird had started work on a fully electronic system he called Telechrome. Early Telechrome devices used two electron guns aimed at either side of a phosphor plate. The phosphor was patterned so the electrons from the guns only fell on one side of the patterning or the other. Using cyan and magenta phosphors, a reasonable limited-color image could be obtained. He also demonstrated the same system using monochrome signals to produce a 3D image (called "stereoscopic" at the time). A demonstration on 16 August 1944 was the first example of a practical color television system. Work on the Telechrome continued and plans were made to introduce a three-gun version for full color. However, Baird's untimely death in 1946 ended development of the Telechrome system.[127][128]
Similar concepts were common through the 1940s and 1950s, differing primarily in the way they re-combined the colors generated by the three guns. The Geer tube was similar to Baird's concept, but used small pyramids with the phosphors deposited on their outside faces, instead of Baird's 3D patterning on a flat surface. The Penetron used three layers of phosphor on top of each other and increased the power of the beam to reach the upper layers when drawing those colors. The Chromatron used a set of focusing wires to select the colored phosphors arranged in vertical stripes on the tube.
One of the great technical challenges of introducing color broadcast television was the desire to conserve bandwidth (a naive color signal could require three times that of the existing black-and-white standards) and not use an excessive amount of radio spectrum. In the United States, after considerable research, the National Television Systems Committee[129] approved an all-electronic system developed by RCA, which encoded the color information separately from the brightness information and greatly reduced the resolution of the color information in order to conserve bandwidth. Because black-and-white TVs could receive the same transmission and display it in black-and-white, the color system adopted is "backwards compatible". ("Compatible Color", featured in RCA advertisements of the period, is mentioned in the song "America" in West Side Story, 1957.) The brightness image remained compatible with existing black-and-white television sets at slightly reduced resolution, while color televisions could decode the extra information in the signal and produce a limited-resolution color display. The higher-resolution black-and-white and lower-resolution color images combine in the brain to produce a seemingly high-resolution color image. The NTSC standard represented a major technical achievement.
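The luminance/chrominance split at the heart of the NTSC approach can be sketched with the YIQ color space that the standard defined. The Python fragment below is an illustration, not broadcast-grade code; the matrix coefficients are the standard NTSC YIQ values:

def rgb_to_yiq(r, g, b):
    """Convert gamma-corrected RGB values in [0, 1] to NTSC YIQ.

    Y is the luminance that a black-and-white set displays directly;
    I and Q are the chrominance components that NTSC transmitted at
    much lower resolution to save bandwidth.
    """
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.596 * r - 0.274 * g - 0.322 * b
    q = 0.211 * r - 0.523 * g + 0.312 * b
    return y, i, q

# A saturated red pixel: modest luminance, strong chrominance.
print(rgb_to_yiq(1.0, 0.0, 0.0))  # -> (0.299, 0.596, 0.211)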
The first color broadcast (the first episode of the live program The Marriage) occurred on 8 July 1954, but during the following ten years most network broadcasts, and nearly all local programming, continued to be in black-and-white. It was not until the mid-1960s that color sets started selling in large numbers, due in part to the color transition of 1965, when it was announced that over half of all network prime-time programming would be broadcast in color that fall. The first all-color prime-time season came just one year later. In 1972, the last holdout among daytime network programs converted to color, resulting in the first completely all-color network season.
Early color sets were either floor-standing console models or tabletop versions nearly as bulky and heavy, so in practice they remained firmly anchored in one place. GE's relatively compact and lightweight Porta-Color set was introduced in the spring of 1966. It used a transistor-based UHF tuner.[130] The first fully transistorized color television in the United States was the Quasar television introduced in 1967.[131] These developments made watching color television a more flexible and convenient proposition.
The MOSFET (metal-oxide-semiconductor field-effect transistor, or MOS transistor) was invented by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959,[132] and presented in 1960.[133] By the mid-1960s, RCA was using MOSFETs in its consumer television products.[134] RCA Laboratories researchers W.M. Austin, J.A. Dean, D.M. Griswold and O.P. Hart in 1966 described the use of the MOSFET in television circuits, including RF amplifier, low-level video, chroma, and AGC circuits.[135] The power MOSFET was later widely adopted for television receiver circuits.[136]
In 1972, sales of color sets finally surpassed sales of black-and-white sets. Color broadcasting in Europe was not standardized on the PAL format until the 1960s, and broadcasts did not start until 1967. By this point many of the technical problems in the early sets had been worked out, and the spread of color sets in Europe was fairly rapid. By the mid-1970s, the only stations broadcasting in black-and-white were a few high-numbered UHF stations in small markets, and a handful of low-power repeater stations in even smaller markets such as vacation spots. By 1979, even the last of these had converted to color and, by the early 1980s, B&W sets had been pushed into niche markets, notably low-power uses, small portable sets, or for use as video monitor screens in lower-cost consumer equipment. By the late 1980s even these areas switched to color sets.
Digital television (DTV) is the transmission of audio and video by digitally processed and multiplexed signals, in contrast to the totally analog and channel-separated signals used by analog television. Due to data compression, digital TV can support more than one program in the same channel bandwidth.[137] It is an innovative service that represents the most significant evolution in television broadcast technology since color television emerged in the 1950s.[138] Digital TV's roots are tied very closely to the availability of inexpensive, high-performance computers. It was not until the 1990s that digital TV became feasible.[139] Digital television was previously not practically feasible due to the impractically high bandwidth requirements of uncompressed digital video,[140][141] requiring a bit-rate of around 200 Mbit/s for a standard-definition television (SDTV) signal,[140] and over 1 Gbit/s for high-definition television (HDTV).[141]
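Those uncompressed figures can be reproduced with back-of-the-envelope arithmetic; the exact result depends on the sampling format assumed. A Python sketch, assuming common 8-bit 4:2:2 sampling (16 bits per pixel):

def uncompressed_mbit_per_s(width, height, fps, bits_per_pixel):
    """Raw video bit-rate in Mbit/s for a given frame geometry."""
    return width * height * fps * bits_per_pixel / 1e6

# SDTV: 720x576 at 25 frames/s -> ~166 Mbit/s, on the order of 200 Mbit/s.
print(uncompressed_mbit_per_s(720, 576, 25, 16))
# HDTV: 1920x1080 at 30 frames/s -> ~995 Mbit/s, i.e. about 1 Gbit/s.
print(uncompressed_mbit_per_s(1920, 1080, 30, 16))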
Digital TV became practically feasible in the early 1990s due to a major technological development, discrete cosine transform (DCT) video compression.[140][141] DCT coding is a lossy compression technique that was first proposed for image compression by Nasir Ahmed in 1972,[142] and was later adapted into a motion-compensated DCT video coding algorithm for video coding standards such as the H.26x formats from 1988 onwards and the MPEG formats from 1991 onwards.[143][144] Motion-compensated DCT video compression significantly reduced the amount of bandwidth required for a digital TV signal.[140][141] DCT coding compressed the bandwidth requirements of digital television signals down to a bit-rate of about 34 Mbit/s for SDTV and around 70–140 Mbit/s for HDTV while maintaining near-studio-quality transmission, making digital television a practical reality in the 1990s.[141]
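The core transform is easy to demonstrate. The SciPy sketch below is illustrative only (real codecs add quantization, entropy coding, and motion compensation on top); it applies a 2-D DCT to an 8×8 pixel block, the block size used by MPEG-era codecs, and shows how the energy concentrates in a few low-frequency coefficients:

import numpy as np
from scipy.fftpack import dct

def dct2(block):
    """Orthonormal type-II 2-D DCT of a square pixel block."""
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

# A smooth horizontal gradient, typical of natural image content.
block = np.tile(np.linspace(0.0, 255.0, 8), (8, 1))
coeffs = dct2(block)

# Most coefficients are near zero; the handful of large ones describe
# the block almost completely, which is what lets coarse quantization
# discard the rest and achieve heavy compression.
print(np.round(coeffs, 1))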
A digital TV service was proposed in 1986 by Nippon Telegraph and Telephone (NTT) and the Ministry of Posts and Telecommunication (MPT) in Japan, where there were plans to develop an "Integrated Network System" service. However, it was not possible to practically implement such a digital TV service until the adoption of DCT video compression technology made it possible in the early 1990s.[140]
In the mid-1980s, as Japanese consumer electronics firms forged ahead with the development of HDTV technology, the MUSE analog format proposed by NHK, a Japanese company, was seen as a pacesetter that threatened to eclipse U.S. electronics companies' technologies. Until June 1990, the Japanese MUSE standard, based on an analog system, was the front-runner among the more than 23 different technical concepts under consideration. Then, an American company, General Instrument, demonstrated the feasibility of a digital television signal. This breakthrough was of such significance that the FCC was persuaded to delay its decision on an ATV standard until a digitally based standard could be developed.
In March 1990, when it became clear that a digital standard was feasible, the FCC made a number of critical decisions. First, the Commission declared that the new ATV standard had to be more than an enhanced analog signal: it had to provide a genuine HDTV signal with at least twice the resolution of existing television images. Then, to ensure that viewers who did not wish to buy a new digital television set could continue to receive conventional television broadcasts, it dictated that the new ATV standard must be capable of being "simulcast" on different channels. The new ATV standard also allowed the new DTV signal to be based on entirely new design principles. Although incompatible with the existing NTSC standard, the new DTV standard would be able to incorporate many improvements.
The final standards adopted by the FCC did not require a single standard for scanning formats, aspect ratios, or lines of resolution. This compromise resulted from a dispute between the consumer electronics industry (joined by some broadcasters) and the computer industry (joined by the film industry and some public interest groups) over which of the two scanning processes, interlaced or progressive, would be best suited for the newer digital HDTV-compatible display devices.[145] Interlaced scanning, which had been specifically designed for older analog CRT display technologies, scans even-numbered lines first, then odd-numbered ones. In fact, interlaced scanning can be looked at as the first video compression model, as it was partly designed in the 1940s to double the image resolution to exceed the limitations of the television broadcast bandwidth. Another reason for its adoption was to limit flickering on early CRT screens, whose phosphor-coated screens could only retain the image from the electron scanning gun for a relatively short duration.[146] However, interlaced scanning does not work as efficiently on newer display devices such as liquid-crystal displays (LCDs), which are better suited to a more frequent progressive refresh rate.[145]
Progressive scanning, the format that the computer industry had long adopted for computer display monitors, scans every line in sequence, from top to bottom. Progressive scanning in effect doubles the amount of data generated for every full screen displayed in comparison to interlaced scanning by painting the screen in one pass in 1/60-second, instead of two passes in 1/30-second. The computer industry argued that progressive scanning is superior because it does not "flicker" on the new standard of display devices in the manner of interlaced scanning. It also argued that progressive scanning enables easier connections with the Internet, and is more cheaply converted to interlaced formats than vice versa. The film industry also supported progressive scanning because it offered a more efficient means of converting filmed programming into digital formats. For their part, the consumer electronics industry and broadcasters argued that interlaced scanning was the only technology that could transmit the highest quality pictures then (and currently) feasible, i.e., 1,080 lines per picture and 1,920 pixels per line. Broadcasters also favored interlaced scanning because their vast archive of interlaced programming is not readily compatible with a progressive format. William F. Schreiber, who was director of the Advanced Television Research Program at the Massachusetts Institute of Technology from 1983 until his retirement in 1990, thought that the continued advocacy of interlaced equipment originated from consumer electronics companies that were trying to get back the substantial investments they made in the interlaced technology.[147]
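The two scanning schemes are easy to contrast in code. A minimal NumPy sketch (illustrative only): interlaced transmission splits each frame into a field of even-numbered lines and a field of odd-numbered lines, so each pass carries half the data of a progressive frame:

import numpy as np

# A toy 8-line "frame" in which line i is filled with the value i.
frame = np.repeat(np.arange(8), 10).reshape(8, 10)

even_field = frame[0::2]  # lines 0, 2, 4, 6: sent in the first pass
odd_field = frame[1::2]   # lines 1, 3, 5, 7: sent in the second pass

# Progressive scan would send all 8 lines in a single pass; interlaced
# scan sends two half-height fields, halving the data rate per pass.
assert even_field.shape[0] + odd_field.shape[0] == frame.shape[0]
print(even_field[:, 0], odd_field[:, 0])  # [0 2 4 6] [1 3 5 7]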
The digital television transition started in the late 2000s, with governments across the world setting deadlines for analog shutdown in the 2010s. Initially the adoption rate was low, as the first digital tuner-equipped TVs were costly. But as the price of digital-capable TVs dropped, more and more households converted to digital television. The transition was expected to be completed worldwide by the mid-to-late 2010s.
The advent of digital television allowed innovations like smart TVs. A smart television, sometimes referred to as connected TV or hybrid TV, is a television set or set-top box with integrated Internet and Web 2.0 features, and is an example of technological convergence between computers, television sets, and set-top boxes. Besides the traditional functions of television sets and set-top boxes provided through traditional broadcast media, these devices can also provide Internet TV, online interactive media, over-the-top content, on-demand streaming media, and home networking access. These TVs come pre-loaded with an operating system.[9][148][149][150]
A smart TV should not be confused with Internet TV, Internet Protocol television (IPTV), or Web TV. Internet television refers to receiving television content over the Internet instead of by traditional systems (terrestrial, cable, and satellite), although the Internet connection itself may be delivered by those methods. IPTV is one of the emerging Internet television technology standards for use by television broadcasters. Web television (WebTV) is a term used for programs created by a wide variety of companies and individuals for broadcast on Internet TV. A first patent was filed in 1994[151] (and extended the following year)[152] for an "intelligent" television system, linked with data processing systems by means of a digital or analog network. Apart from being linked to data networks, one key point is its ability to automatically download necessary software routines, according to a user's demand, and process their needs. In 2015, major TV manufacturers announced that they would produce only smart TVs for their mid-range and high-end models.[6][7][8] Smart TVs have become more affordable since they were first introduced, with 46 million U.S. households having at least one as of 2019.[153]
3D television conveys depth perception to the viewer by employing techniques such as stereoscopic display, multi-view display, 2D-plus-depth, or any other form of 3D display. Most modern 3D television sets use an active shutter 3D system or a polarized 3D system, and some are autostereoscopic without the need for glasses. Stereoscopic 3D television was demonstrated for the first time on 10 August 1928, by John Logie Baird in his company's premises at 133 Long Acre, London.[154] Baird pioneered a variety of 3D television systems using electromechanical and cathode-ray tube techniques. The first 3D TV was produced in 1935. The advent of digital television in the 2000s greatly improved 3D TVs. Although 3D TV sets are quite popular for watching 3D home media such as Blu-ray discs, 3D programming has largely failed to make inroads with the public. Many 3D television channels that started in the early 2010s were shut down by the mid-2010s. According to DisplaySearch, 3D television shipments totaled 41.45 million units in 2012, compared with 24.14 million in 2011 and 2.26 million in 2010.[155] As of late 2013, the number of 3D TV viewers started to decline.[156][157][158][159][160]
Programming is broadcast by television stations, sometimes called "channels", as stations are licensed by their governments to broadcast only over assigned channels in the television band. At first, terrestrial broadcasting was the only way television could be widely distributed, and because bandwidth was limited, i.e., there were only a small number of channels available, government regulation was the norm. In the U.S., the Federal Communications Commission (FCC) allowed stations to broadcast advertisements beginning in July 1941, but made public service programming commitments a requirement for a license. By contrast, the United Kingdom chose a different route, imposing a television license fee on owners of television reception equipment to fund the British Broadcasting Corporation (BBC), which had public service as part of its Royal Charter.
WRGB claims to be the world's oldest television station, tracing its roots to an experimental station founded on 13 January 1928, broadcasting from the General Electric factory in Schenectady, NY, under the call letters W2XB.[161] It was popularly known as "WGY Television" after its sister radio station. Later in 1928, General Electric started a second facility, this one in New York City, which had the call letters W2XBS and which today is known as WNBC. The two stations were experimental in nature and had no regular programming, as receivers were operated by engineers within the company. The image of a Felix the Cat doll rotating on a turntable was broadcast for 2 hours every day for several years as new technology was being tested by the engineers. On 2 November 1936, the BBC began transmitting the world's first public regular high-definition service from the Victorian Alexandra Palace in north London.[162] It therefore claims to be the birthplace of TV broadcasting as we know it today.
With the widespread adoption of cable across the United States in the 1970s and 1980s, terrestrial television broadcasts went into decline; in 2013 it was estimated that about 7% of US households used an antenna.[163][164] A slight increase in use began around 2010 due to the switchover to digital terrestrial television broadcasts, which offered high image quality over very large areas and provided an alternative to cable television (CATV) for cord cutters. All other countries around the world are also in the process of either shutting down analog terrestrial television or switching over to digital terrestrial television.
Cable television is a system of broadcasting television programming to paying subscribers via radio frequency (RF) signals transmitted through coaxial cables or light pulses through fiber-optic cables. This contrasts with traditional terrestrial television, in which the television signal is transmitted over the air by radio waves and received by a television antenna attached to the television. Since the 2000s, FM radio programming, high-speed Internet, telephone service, and similar non-television services have also been provided through these cables. The abbreviation CATV is often used for cable television. It originally stood for Community Access Television or Community Antenna Television, from cable television's origins in 1948: in areas where over-the-air reception was limited by distance from transmitters or mountainous terrain, large "community antennas" were constructed, and cable was run from them to individual homes.[165] The origins of cable broadcasting are even older, as radio programming was distributed by cable in some European cities as far back as 1924. Early cable television was analog, but since the 2000s, all cable operators have switched to, or are in the process of switching to, digital cable television.
Satellite television is a system of supplying television programming using broadcast signals relayed from communication satellites. The signals are received via an outdoor parabolic reflector antenna usually referred to as a satellite dish and a low-noise block downconverter (LNB). A satellite receiver then decodes the desired television program for viewing on a television set. Receivers can be external set-top boxes, or a built-in television tuner. Satellite television provides a wide range of channels and services, especially to geographic areas without terrestrial television or cable television.
The most common method of reception is direct-broadcast satellite television (DBSTV), also known as "direct to home" (DTH).[166] In DBSTV systems, signals are relayed from a direct broadcast satellite in the Ku band and are completely digital.[167] Satellite TV formerly used receive-only systems known as television receive-only (TVRO). These systems received analog signals transmitted in the C-band spectrum from FSS-type satellites and required the use of large dishes. Consequently, these systems were nicknamed "big dish" systems, and were more expensive and less popular.[168]
Direct-broadcast satellite television signals were initially analog and later digital, both of which require a compatible receiver. Digital signals may include high-definition television (HDTV). Some transmissions and channels are free-to-air or free-to-view, while many other channels are pay television requiring a subscription.[169]
In 1945, British science fiction writer Arthur C. Clarke proposed a worldwide communications system which would function by means of three satellites equally spaced apart in earth orbit.[170][171] This was published in the October 1945 issue of the Wireless World magazine and won him the Franklin Institute's Stuart Ballantine Medal in 1963.[172][173]
The first satellite television signals from Europe to North America were relayed via the Telstar satellite over the Atlantic Ocean on 23 July 1962.[174] The signals were received and broadcast in North American and European countries and watched by over 100 million people.[174] Launched in 1962, the Relay 1 satellite was the first satellite to transmit television signals from the US to Japan.[175] The first geosynchronous communication satellite, Syncom 2, was launched on 26 July 1963.[176]
The world's first commercial communications satellite, called Intelsat I and nicknamed "Early Bird", was launched into geosynchronous orbit on 6 April 1965.[177] The first national network of television satellites, called Orbita, was created by the Soviet Union in October 1967, and was based on the principle of using the highly elliptical Molniya satellite for rebroadcasting and delivering of television signals to ground downlink stations.[178] The first commercial North American satellite to carry television transmissions was Canada's geostationary Anik 1, which was launched on 9 November 1972.[179] ATS-6, the world's first experimental educational and Direct Broadcast Satellite (DBS), was launched on 30 May 1974.[180] It transmitted at 860 MHz using wideband FM modulation and had two sound channels. The transmissions were focused on the Indian subcontinent but experimenters were able to receive the signal in Western Europe using home constructed equipment that drew on UHF television design techniques already in use.[181]
The first in a series of Soviet geostationary satellites to carry Direct-To-Home television, Ekran 1, was launched on 26 October 1976.[182] It used a 714 MHz UHF downlink frequency so that the transmissions could be received with existing UHF television technology rather than microwave technology.[183]
Internet television (Internet TV, or online television) is the digital distribution of television content via the Internet, as opposed to traditional systems like terrestrial, cable, and satellite, although the Internet itself is received by terrestrial, cable, or satellite methods. Internet television is a general term that covers the delivery of television shows and other video content over the Internet by video streaming technology, typically by major traditional television broadcasters. Internet television should not be confused with smart TV, IPTV, or Web TV. Smart television refers to a TV set with a built-in operating system. Internet Protocol television (IPTV) is one of the emerging Internet television technology standards for use by television broadcasters. Web television is a term used for programs created by a wide variety of companies and individuals for broadcast on Internet TV.
A television set, also called a television receiver, television, TV set, TV, or "telly", is a device that combines a tuner, display, amplifier, and speakers for the purpose of viewing television and hearing its audio components. Introduced in the late 1920s in mechanical form, television sets became a popular consumer product after World War II in electronic form, using cathode ray tubes. The addition of color to broadcast television after 1953 further increased the popularity of television sets, and an outdoor antenna became a common feature of suburban homes. The ubiquitous television set became the display device for recorded media in the 1970s, such as Betamax and VHS, which enabled viewers to record TV shows and watch prerecorded movies. In the subsequent decades, TVs were used to watch DVDs and Blu-ray Discs of movies and other content. Major TV manufacturers announced the discontinuation of CRT, DLP, plasma, and fluorescent-backlit LCDs by the mid-2010s. Television sets since the 2010s have mostly used LED-backlit LCDs.[3][4][184][185] These are expected to be gradually replaced by OLEDs in the near future.[5]
The earliest systems employed a spinning disk to create and reproduce images.[186] These usually had low resolution and small screens and never became popular with the public.
The cathode ray tube (CRT) is a vacuum tube containing one or more electron guns (a source of electrons or electron emitter) and a fluorescent screen used to view images.[33] It has a means to accelerate and deflect the electron beam(s) onto the screen to create the images. The images may represent electrical waveforms (oscilloscope), pictures (television, computer monitor), radar targets or others. The CRT uses an evacuated glass envelope which is large, deep (i.e. long from front screen face to rear end), fairly heavy, and relatively fragile. As a matter of safety, the face is typically made of thick lead glass so as to be highly shatter-resistant and to block most X-ray emissions, particularly if the CRT is used in a consumer product.
In television sets and computer monitors, the entire front area of the tube is scanned repetitively and systematically in a fixed pattern called a raster. An image is produced by controlling the intensity of each of the three electron beams, one for each additive primary color (red, green, and blue) with a video signal as a reference.[187] In all modern CRT monitors and televisions, the beams are bent by magnetic deflection, a varying magnetic field generated by coils and driven by electronic circuits around the neck of the tube, although electrostatic deflection is commonly used in oscilloscopes, a type of diagnostic instrument.[187]
Digital Light Processing (DLP) is a type of video projector technology that uses a digital micromirror device. Some DLPs have a TV tuner, which makes them a type of TV display. It was originally developed in 1987 by Dr. Larry Hornbeck of Texas Instruments. While the DLP imaging device was invented by Texas Instruments, the first DLP-based projector was introduced by Digital Projection Ltd in 1997. Digital Projection and Texas Instruments were both awarded Emmy Awards in 1998 for the invention of the DLP projector technology. DLP is used in a variety of display applications, from traditional static displays to interactive displays, and also in non-traditional embedded applications including medical, security, and industrial uses. DLP technology is used in DLP front projectors (primarily standalone projection units for classrooms and businesses), but also in private homes; in these cases, the image is projected onto a projection screen. DLP is also used in DLP rear-projection television sets and digital signs, as well as in about 85% of digital cinema projection.[188]
A plasma display panel (PDP) is a type of flat panel display common to large TV displays 30 inches (76 cm) or larger. They are called "plasma" displays because the technology utilizes small cells containing electrically charged ionized gases, or what are in essence chambers more commonly known as fluorescent lamps.
Liquid-crystal-display televisions (LCD TVs) are television sets that use LCD display technology to produce images. LCD televisions are much thinner and lighter than cathode ray tubes (CRTs) of similar display size, and are available in much larger sizes (e.g., 90-inch diagonal). When manufacturing costs fell, this combination of features made LCDs practical for television receivers. LCD sets come in two types: those backlit by cold cathode fluorescent lamps, simply called LCDs, and those using LED backlighting, commonly marketed as LED TVs.
In 2007, LCD televisions surpassed sales of CRT-based televisions worldwide for the first time, and their sales figures relative to other technologies accelerated. LCD TVs quickly displaced the only major competitors in the large-screen market, the plasma display panel and rear-projection television.[189] By the mid-2010s, LCDs, especially LED-backlit models, had become by far the most widely produced and sold type of television display.[184][185] LCDs also have disadvantages; other technologies address these weaknesses, including OLEDs, FED, and SED, but as of 2014 none of these had entered widespread production.
An OLED (organic light-emitting diode) is a light-emitting diode (LED) in which the emissive electroluminescent layer is a film of organic compound which emits light in response to an electric current. This layer of organic semiconductor is situated between two electrodes; generally, at least one of these electrodes is transparent. OLEDs are used to create digital displays in devices such as television screens, computer monitors, and portable systems such as mobile phones, handheld game consoles, and PDAs.
There are two main families of OLED: those based on small molecules and those employing polymers. Adding mobile ions to an OLED creates a light-emitting electrochemical cell or LEC, which has a slightly different mode of operation. OLED displays can use either passive-matrix (PMOLED) or active-matrix (AMOLED) addressing schemes. Active-matrix OLEDs require a thin-film transistor backplane to switch each individual pixel on or off, but allow for higher resolution and larger display sizes.
An OLED display works without a backlight. Thus, it can display deep black levels and can be thinner and lighter than a liquid crystal display (LCD). In low ambient light conditions, such as a dark room, an OLED screen can achieve a higher contrast ratio than an LCD, whether the LCD uses cold cathode fluorescent lamps or an LED backlight. OLEDs are expected to replace other forms of display in the near future.[5]
Low-definition television or LDTV refers to television systems that have a lower screen resolution than standard-definition television systems, such as 240p (320×240). It is used in handheld televisions. The most common source of LDTV programming is the Internet, where mass distribution of higher-resolution video files could overwhelm computer servers and take too long to download. Many mobile phones and portable devices, such as Apple's iPod Nano or Sony's PlayStation Portable, use LDTV video, as higher-resolution files would be excessive for their small screens (320×240 and 480×272 pixels respectively). The current generation of iPod Nanos have LDTV screens, as do the first three generations of iPod Touch and iPhone (480×320). For the first years of its existence, YouTube offered only one low-definition resolution of 320×240 at 30 frames per second or less. A standard consumer-grade VHS videotape can be considered SDTV due to its resolution (approximately 360×480i/576i).
Standard-definition television or SDTV refers to two different resolutions: 576i, with 576 interlaced lines of resolution, derived from the European-developed PAL and SECAM systems; and 480i based on the American National Television System Committee NTSC system. SDTV is a television system that uses a resolution that is not considered to be either high-definition television (720p, 1080i, 1080p, 1440p, 4K UHDTV, and 8K UHD) or enhanced-definition television (EDTV 480p). In North America, digital SDTV is broadcast in the same 4:3 aspect ratio as NTSC signals with widescreen content being center cut.[190] However, in other parts of the world that used the PAL or SECAM color systems, standard-definition television is now usually shown with a 16:9 aspect ratio, with the transition occurring between the mid-1990s and mid-2000s. Older programs with a 4:3 aspect ratio are shown in the US as 4:3 with non-ATSC countries preferring to reduce the horizontal resolution by anamorphically scaling a pillarboxed image.
High-definition television (HDTV) provides a resolution that is substantially higher than that of standard-definition television.
HDTV may be transmitted in various formats, including 720p, 1080i and 1080p.
Ultra-high-definition television (also known as Super Hi-Vision, Ultra HD television, UltraHD, UHDTV, or UHD) includes 4K UHD (2160p) and 8K UHD (4320p), which are two digital video formats proposed by NHK Science & Technology Research Laboratories and defined and approved by the International Telecommunication Union (ITU). The Consumer Electronics Association announced on 17 October 2012, that "Ultra High Definition", or "Ultra HD", would be used for displays that have an aspect ratio of at least 16:9 and at least one digital input capable of carrying and presenting native video at a minimum resolution of 3840×2160 pixels.[191][192]
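The CEA's 2012 criteria translate directly into a simple check. A minimal sketch (the function name and input flag are hypothetical, written only to restate the definition above in code):

```python
def qualifies_as_ultra_hd(width, height, has_capable_digital_input=True):
    """CEA 2012 'Ultra HD' criteria: aspect ratio of at least 16:9 and at
    least one digital input carrying native video at 3840x2160 or better."""
    wide_enough = width / height >= 16 / 9
    return (has_capable_digital_input and wide_enough
            and width >= 3840 and height >= 2160)

print(qualifies_as_ultra_hd(3840, 2160))  # True:  4K UHD (2160p)
print(qualifies_as_ultra_hd(7680, 4320))  # True:  8K UHD (4320p)
print(qualifies_as_ultra_hd(1920, 1080))  # False: full HD falls short
```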
North American consumers purchase a new television set on average every seven years, and the average household owns 2.8 televisions. As of 2011, 48 million sets were sold each year at an average price of $460 and an average size of 38 in (97 cm).[193]
Getting TV programming shown to the public can happen in many different ways. After production, the next step is to market and deliver the product to whichever markets are open to using it. This typically happens on two levels:
First-run programming is increasing on subscription services outside the US, but few domestically produced programs are syndicated on domestic free-to-air (FTA) elsewhere. This practice is increasing, however, generally on digital-only FTA channels or with subscriber-only, first-run material appearing on FTA. Unlike the US, repeat FTA screenings of an FTA network program usually only occur on that network. Also, affiliates rarely buy or produce non-network programming that is not centered on local programming.
Television genres include a broad range of programming types that entertain, inform, and educate viewers. The most expensive entertainment genres to produce are usually dramas and dramatic miniseries. However, other genres, such as historical Western genres, may also have high production costs.
Popular culture entertainment genres include action-oriented shows such as police, crime, detective dramas, horror, or thriller shows. As well, there are also other variants of the drama genre, such as medical dramas and daytime soap operas. Science fiction shows can fall into either the drama or action category, depending on whether they emphasize philosophical questions or high adventure. Comedy is a popular genre which includes situation comedy (sitcom) and animated shows for the adult demographic such as South Park.
The least expensive forms of entertainment programming genres are game shows, talk shows, variety shows, and reality television. Game shows feature contestants answering questions and solving puzzles to win prizes. Talk shows contain interviews with film, television, music and sports celebrities and public figures. Variety shows feature a range of musical performers and other entertainers, such as comedians and magicians, introduced by a host or Master of Ceremonies. There is some crossover between some talk shows and variety shows because leading talk shows often feature performances by bands, singers, comedians, and other performers in between the interview segments. Reality TV shows "regular" people (i.e., not actors) facing unusual challenges or experiences ranging from arrest by police officers (COPS) to significant weight loss (The Biggest Loser). A variant version of reality shows depicts celebrities doing mundane activities such as going about their everyday life (The Osbournes, Snoop Dogg's Father Hood) or doing regular jobs (The Simple Life).
Fictional television programs that some television scholars and broadcasting advocacy groups argue are "quality television", include series such as Twin Peaks and The Sopranos. Kristin Thompson argues that some of these television series exhibit traits also found in art films, such as psychological realism, narrative complexity, and ambiguous plotlines. Nonfiction television programs that some television scholars and broadcasting advocacy groups argue are "quality television", include a range of serious, noncommercial, programming aimed at a niche audience, such as documentaries and public affairs shows.
Around the globe, broadcast TV is financed by government, advertising, licensing (a form of tax), subscription, or any combination of these. To protect revenues, subscription TV channels are usually encrypted to ensure that only subscribers receive the decryption codes to see the signal. Unencrypted channels are known as free to air or FTA. In 2009, the global TV market represented 1,217.2 million TV households with at least one TV and total revenues of 268.9 billion EUR (declining 1.2% compared to 2008).[195] North America had the biggest TV revenue market share with 39% followed by Europe (31%), Asia-Pacific (21%), Latin America (8%), and Africa and the Middle East (2%).[196] Globally, the different TV revenue sources divide into 45–50% TV advertising revenues, 40–45% subscription fees and 10% public funding.[197][198]
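For illustration only, applying the quoted regional shares to the 2009 revenue total gives rough per-region figures (a Python sketch; note that the published shares are rounded and sum to 101%):

```python
# Rough regional split of the 2009 global TV revenue total quoted above.
total_eur_bn = 268.9
shares = {
    "North America":        0.39,
    "Europe":               0.31,
    "Asia-Pacific":         0.21,
    "Latin America":        0.08,
    "Africa & Middle East": 0.02,
}

for region, share in shares.items():
    print(f"{region:22s} {share:4.0%}  ~{total_eur_bn * share:5.1f} bn EUR")
```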
TV's broad reach makes it a powerful and attractive medium for advertisers. Many TV networks and stations sell blocks of broadcast time to advertisers ("sponsors") to fund their programming.[199] A television advertisement (variously called a television commercial, commercial or ad in American English, and known in British English as an advert) is a span of television programming produced and paid for by an organization, which conveys a message, typically to market a product or service. Advertising revenue provides a significant portion of the funding for most privately owned television networks. The vast majority of television advertisements today consist of brief advertising spots, ranging in length from a few seconds to several minutes (as well as program-length infomercials). Advertisements of this sort have been used to promote a wide variety of goods, services and ideas since the beginning of television.
The effects of television advertising upon the viewing public (and the effects of mass media in general) have been the subject of philosophical discourse by such luminaries as Marshall McLuhan. The viewership of television programming, as measured by companies such as Nielsen Media Research, is often used as a metric for television advertisement placement, and consequently, for the rates charged to advertisers to air within a given network, television program, or time of day (called a "daypart"). In many countries, including the United States, television campaign advertisements are considered indispensable for a political campaign. In other countries, such as France, political advertising on television is heavily restricted,[200] while some countries, such as Norway, completely ban political advertisements.
The first official, paid television advertisement was broadcast in the United States on 1 July 1941 over New York station WNBT (now WNBC) before a baseball game between the Brooklyn Dodgers and Philadelphia Phillies. The announcement for Bulova watches, for which the company paid anywhere from $4.00 to $9.00 (reports vary), displayed a WNBT test pattern modified to look like a clock with the hands showing the time. The Bulova logo, with the phrase "Bulova Watch Time", was shown in the lower right-hand quadrant of the test pattern while the second hand swept around the dial for one minute.[201][202] The first TV ad broadcast in the UK was on ITV on 22 September 1955, advertising Gibbs SR toothpaste. The first TV ad broadcast in Asia was on Nippon Television in Tokyo on 28 August 1953, advertising Seikosha (now Seiko), which also displayed a clock with the current time.[203]
Since their inception in the US in 1941,[204] television commercials have become one of the most effective, persuasive, and popular methods of selling products of many sorts, especially consumer goods. During the 1940s and into the 1950s, programs were sponsored by single advertisers. This, in turn, gave those advertisers great creative license over the content of the show. Perhaps due to the quiz show scandals of the 1950s,[205] networks shifted to the magazine concept, introducing advertising breaks with multiple advertisers.
US advertising rates are determined primarily by Nielsen ratings. The time of the day and popularity of the channel determine how much a TV commercial can cost. For example, it can cost approximately $750,000 for a 30-second block of commercial time during the highly popular American Idol, while the same amount of time for the Super Bowl can cost several million dollars. Conversely, lesser-viewed time slots, such as early mornings and weekday afternoons, are often sold in bulk to producers of infomercials at far lower rates. In recent years, the paid program or infomercial has become common, usually in lengths of 30 minutes or one hour. Some drug companies and other businesses have even created "news" items for broadcast, known in the industry as video news releases, paying program directors to use them.[206]
Some TV programs also deliberately place products into their shows as advertisements, a practice started in feature films[207] and known as product placement. For example, a character could be drinking a certain kind of soda, going to a particular chain restaurant, or driving a certain make of car. (This is sometimes very subtle, with shows having vehicles provided by manufacturers for low cost in exchange as a product placement). Sometimes, a specific brand or trade mark, or music from a certain artist or group, is used. (This excludes guest appearances by artists who perform on the show.)
The TV regulator oversees TV advertising in the United Kingdom. Its restrictions have applied since the early days of commercially funded TV. Despite this, an early TV mogul, Roy Thomson, likened the broadcasting licence to a "licence to print money".[208] Restrictions mean that the three main national commercial TV channels (ITV, Channel 4 and Channel 5) can show an average of only seven minutes of advertising per hour (eight minutes in the peak period). Other broadcasters must average no more than nine minutes (twelve in the peak). This means that many TV shows imported from the US have unnatural pauses where the UK company does not utilize the narrative breaks intended for more frequent US advertising. Advertisements must not be inserted in the course of certain proscribed types of programs which last less than half an hour in scheduled duration; this list includes any news or current affairs programs, documentaries, and programs for children. Additionally, advertisements may not be carried in a program designed and broadcast for reception in schools, in any religious broadcasting service or other devotional program, or during a formal Royal ceremony or occasion. There must also be clear demarcations in time between the programs and the advertisements. The BBC, being strictly non-commercial, is not allowed to show advertisements on television in the UK, although it has many advertising-funded channels abroad. The majority of its budget comes from television licence fees (see below) and broadcast syndication, the sale of content to other broadcasters.
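The averaging rule is easy to model. A hypothetical Python sketch of the hourly-average check described above (an illustration, not the regulator's actual compliance test):

```python
# ITV, Channel 4 and Channel 5 may average at most 7 minutes of advertising
# per hour (8 in peak); other broadcasters may average 9 (12 in peak).
def average_within_limit(ad_minutes_by_hour, limit=7.0):
    """True if the schedule's mean ad load per hour is within the limit."""
    return sum(ad_minutes_by_hour) / len(ad_minutes_by_hour) <= limit

day = [3, 5, 6, 7, 7, 8, 8, 8, 7, 6, 5, 4]  # illustrative minutes of ads per hour
print(average_within_limit(day))             # True: the mean is about 6.2
print(average_within_limit(day, limit=9.0))  # True for other broadcasters too
```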
Broadcast advertising is regulated by the Broadcasting Authority of Ireland.[209]
Some TV channels are partly funded from subscriptions; therefore, the signals are encrypted during broadcast to ensure that only the paying subscribers have access to the decryption codes to watch pay television or specialty channels. Most subscription services are also funded by advertising.
Television services in some countries may be funded by a television licence or a form of taxation, which means that advertising plays a lesser role or no role at all. For example, some channels may carry no advertising at all and some very little, including:
The BBC carries no television advertising on its UK channels and is funded by an annual television licence paid by premises receiving live TV broadcasts. Currently, it is estimated that approximately 26.8 million UK private domestic households own televisions, with approximately 25 million TV licences in force across all premises as of 2010.[210] This television licence fee is set by the government, but the BBC is not answerable to or controlled by the government.
The two main BBC TV channels are watched by almost 90% of the population each week and overall have 27% share of total viewing,[211] despite the fact that 85% of homes are multichannel, with 42% of these having access to 200 free to air channels via satellite and another 43% having access to 30 or more channels via Freeview.[212] The licence that funds the seven advertising-free BBC TV channels costs £147 a year (about US$200) as of 2018 regardless of the number of TV sets owned; the price is reduced by two-thirds if only black and white television is received.[213] When the same sporting event has been presented on both BBC and commercial channels, the BBC always attracts the lion's share of the audience, indicating that viewers prefer to watch TV uninterrupted by advertising.
Other than internal promotional material, the Australian Broadcasting Corporation (ABC) carries no advertising; it is banned under the ABC Act 1983. The ABC receives its funding from the Australian government every three years. In the 2014/15 federal budget, the ABC received A$1.11 billion.[214] The funds provide for the ABC's television, radio, online, and international outputs. The ABC also receives funds from its many ABC shops across Australia. Although funded by the Australian government, the editorial independence of the ABC is ensured through law.
In France, government-funded channels carry advertisements, yet those who own television sets have to pay an annual tax ("la redevance audiovisuelle").[215]
In Japan, NHK is paid for by license fees (known in Japanese as the reception fee (受信料, Jushinryō)). The broadcast law that governs NHK's funding stipulates that anyone with television equipment able to receive NHK is required to pay. The fee is standardized, with discounts for office workers and students who commute, as well as a general discount for residents of Okinawa prefecture.
Broadcast programming, or TV listings in the United Kingdom, is the practice of organizing television programs in a schedule, with broadcast automation used to regularly change the scheduling of TV programs to build an audience for a new show, retain that audience, or compete with other broadcasters' programs.
Television has played a pivotal role in the socialization of the 20th and 21st centuries. There are many aspects of television that can be addressed, including negative issues such as media violence. Current research is discovering that individuals suffering from social isolation can employ television to create what is termed a parasocial or faux relationship with characters from their favorite television shows and movies as a way of deflecting feelings of loneliness and social deprivation.[216] Several studies have found that educational television has many advantages. The article "The Good Things about Television"[217] argues that television can be a very powerful and effective learning tool for children if used wisely.
Methodist denominations in the conservative holiness movement, such as the Allegheny Wesleyan Methodist Connection and the Evangelical Wesleyan Church, eschew the use of the television.[218]
Children, especially those aged 5 or younger, are at risk of injury from falling televisions.[219] A CRT-style television that falls on a child will, because of its weight, hit with a force equivalent to a fall from several stories of a building.[220] Newer flat-screen televisions are "top-heavy and have narrow bases", which means that a small child can easily pull one over. As of 2015, TV tip-overs were responsible for more than 10,000 injuries per year to children, at a cost of more than $8 million per year in emergency care.[219][221]
A 2017 study in The Journal of Human Resources found that exposure to cable television reduced cognitive ability and high school graduation rates for boys. This effect was stronger for boys from more educated families. The article suggests a mechanism where light television entertainment crowds out more cognitively stimulating activities.[222]
With high lead content in CRTs and the rapid diffusion of new flat-panel display technologies, some of which (LCDs) use lamps which contain mercury, there is growing concern about electronic waste from discarded televisions. Related occupational health concerns exist, as well, for disassemblers removing copper wiring and other materials from CRTs. Further environmental concerns related to television design and use relate to the devices' increasing electrical energy requirements.[223]
en/2585.html.txt ADDED @@ -0,0 +1,147 @@
"Those who cannot remember the past are condemned to repeat it."[1]
—George Santayana
History (from Greek ἱστορία, historia, meaning 'inquiry; knowledge acquired by investigation')[2] is the study of the past.[3][4] Events occurring before the invention of writing systems are considered prehistory. "History" is an umbrella term that relates to past events as well as the memory, discovery, collection, organization, presentation, and interpretation of information about these events. Scholars who focus on history are called historians. The historian's role is to place the past in context, using sources from moments and events, and filling in the gaps to the best of their ability.[5] Written documents are not the only sources historians use to develop their understanding of the past. They also use material objects, oral accounts, ecological markers, art, and artifacts as historical sources.
History also includes the academic discipline which uses narrative to describe, examine, question, and analyze a sequence of past events, and to investigate the patterns of cause and effect that relate to them.[6][7] Historians seek to understand and represent the past through narratives. They often debate which narrative best explains an event, as well as the significance of different causes and effects. Historians also debate the nature of history and its usefulness by discussing the study of the discipline as an end in itself and as a way of providing "perspective" on the problems of the present.[6][8][9][10]
Stories common to a particular culture, but not supported by external sources (such as the tales surrounding King Arthur), are usually classified as cultural heritage or legends.[11][12] History differs from myth in that it is supported by evidence. However, ancient influences have helped spawn variant interpretations of the nature of history which have evolved over the centuries and continue to change today. The modern study of history is wide-ranging, and includes the study of specific regions and the study of certain topical or thematic elements of historical investigation. History is often taught as part of primary and secondary education, and the academic study of history is a major discipline in university studies.
Herodotus, a 5th-century BC Greek historian, is often considered (within the Western tradition) to be the "father of history", or, by some, the "father of lies". Along with his contemporary Thucydides, he helped form the foundations for the modern study of human history. Their works continue to be read today, and the gap between the culture-focused Herodotus and the military-focused Thucydides remains a point of contention and of differing approaches in modern historical writing. In East Asia, a state chronicle, the Spring and Autumn Annals, was known to be compiled from as early as 722 BC, although only 2nd-century BC texts have survived.
The word history comes from the Ancient Greek ἱστορία[13] (historía), meaning 'inquiry', 'knowledge from inquiry', or 'judge'. It was in that sense that Aristotle used the word in his History of Animals.[14] The ancestor word ἵστωρ is attested early on in Homeric Hymns, Heraclitus, the Athenian ephebes' oath, and in Boiotic inscriptions (in a legal sense, either 'judge' or 'witness', or similar). The Greek word was borrowed into Classical Latin as historia, meaning "investigation, inquiry, research, account, description, written account of past events, writing of history, historical narrative, recorded knowledge of past events, story, narrative". History was borrowed from Latin (possibly via Old Irish or Old Welsh) into Old English as stær ('history, narrative, story'), but this word fell out of use in the late Old English period.[15] Meanwhile, as Latin became Old French (and Anglo-Norman), historia developed into forms such as istorie, estoire, and historie, with new developments in the meaning: "account of the events of a person's life (beginning of the 12th century), chronicle, account of events as relevant to a group of people or people in general (1155), dramatic or pictorial representation of historical events (c. 1240), body of knowledge relative to human evolution, science (c. 1265), narrative of real or imaginary events, story (c. 1462)".[15]
It was from Anglo-Norman that history was borrowed into Middle English, and this time the loan stuck. It appears in the 13th-century Ancrene Wisse, but seems to have become a common word in the late 14th century, with an early attestation appearing in John Gower's Confessio Amantis of the 1390s (VI.1383): "I finde in a bok compiled | To this matiere an old histoire, | The which comth nou to mi memoire". In Middle English, the meaning of history was "story" in general. The restriction to the meaning "the branch of knowledge that deals with past events; the formal record or study of past events, esp. human affairs" arose in the mid-15th century.[15] With the Renaissance, older senses of the word were revived, and it was in the Greek sense that Francis Bacon used the term in the late 16th century, when he wrote about "Natural History". For him, historia was "the knowledge of objects determined by space and time", that sort of knowledge provided by memory (while science was provided by reason, and poetry was provided by fantasy).[16]
In an expression of the linguistic synthetic vs. analytic/isolating dichotomy, English, like Chinese (史 vs. 诌), now designates separate words for human history and storytelling in general. In modern German, French, and most Germanic and Romance languages, which are solidly synthetic and highly inflected, the same word is still used to mean both 'history' and 'story'. Historian in the sense of a "researcher of history" is attested from 1531. In all European languages, the substantive history is still used to mean both "what happened with men" and "the scholarly study of what happened", the latter sense sometimes distinguished with a capital letter, or with the word historiography.[14] The adjective historical is attested from 1661, and historic from 1669.[17]
Historians write in the context of their own time, and with due regard to the current dominant ideas of how to interpret the past, and sometimes write to provide lessons for their own society. In the words of Benedetto Croce, "All history is contemporary history". History is facilitated by the formation of a "true discourse of past" through the production of narrative and analysis of past events relating to the human race.[18] The modern discipline of history is dedicated to the institutional production of this discourse.
All events that are remembered and preserved in some authentic form constitute the historical record.[19] The task of historical discourse is to identify the sources which can most usefully contribute to the production of accurate accounts of the past. Therefore, the constitution of the historian's archive is a result of circumscribing a more general archive by invalidating the usage of certain texts and documents (by falsifying their claims to represent the "true past"). Part of the historian's role is to skillfully and objectively utilize the vast number of sources from the past, most often found in the archives. The process of creating a narrative inevitably generates a silence as historians remember or emphasize different events of the past.[20]
The study of history has sometimes been classified as part of the humanities and at other times as part of the social sciences.[21] It can also be seen as a bridge between those two broad areas, incorporating methodologies from both. Some individual historians strongly support one or the other classification.[22] In the 20th century, French historian Fernand Braudel revolutionized the study of history, by using such outside disciplines as economics, anthropology, and geography in the study of global history.
Traditionally, historians have recorded events of the past, either in writing or by passing on an oral tradition, and have attempted to answer historical questions through the study of written documents and oral accounts. From the beginning, historians have also used such sources as monuments, inscriptions, and pictures. In general, the sources of historical knowledge can be separated into three categories: what is written, what is said, and what is physically preserved, and historians often consult all three.[23] But writing is the marker that separates history from what comes before.
Archaeology is a discipline that is especially helpful in dealing with buried sites and objects, which, once unearthed, contribute to the study of history. But archaeology rarely stands alone; it uses narrative sources to complement its discoveries. However, archaeology is constituted by a range of methodologies and approaches which are independent from history; that is to say, archaeology does not "fill the gaps" within textual sources. Indeed, "historical archaeology" is a specific branch of archaeology which often contrasts its conclusions against those of contemporary textual sources. For example, Mark Leone, the excavator and interpreter of historical Annapolis, Maryland, USA, has sought to understand the contradiction between textual documents and the material record, demonstrating the possession of slaves and the inequalities of wealth apparent via the study of the total historical environment, despite the ideology of "liberty" inherent in the written documents of the time.
History can be organized in a variety of ways, including chronologically, culturally, territorially, and thematically. These divisions are not mutually exclusive, and significant intersections are often present. It is possible for historians to concern themselves with both the very specific and the very general, although the modern trend has been toward specialization. The area called Big History resists this specialization and searches for universal patterns or trends. History has often been studied with some practical or theoretical aim, but it may also be studied out of simple intellectual curiosity.[24]
The history of the world is the memory of the past experience of Homo sapiens sapiens around the world, as that experience has been preserved, largely in written records. By "prehistory", historians mean the recovery of knowledge of the past in an area where no written records exist, or where the writing of a culture is not understood. By studying painting, drawings, carvings, and other artifacts, some information can be recovered even in the absence of a written record. Since the 20th century, the study of prehistory is considered essential to avoid history's implicit exclusion of certain civilizations, such as those of Sub-Saharan Africa and pre-Columbian America. Historians in the West have been criticized for focusing disproportionately on the Western world.[25] In 1961, British historian E. H. Carr wrote:
The line of demarcation between prehistoric and historical times is crossed when people cease to live only in the present, and become consciously interested both in their past and in their future. History begins with the handing down of tradition; and tradition means the carrying of the habits and lessons of the past into the future. Records of the past begin to be kept for the benefit of future generations.[26]
This definition includes within the scope of history the strong interests of peoples, such as Indigenous Australians and New Zealand Māori in the past, and the oral records maintained and transmitted to succeeding generations, even before their contact with European civilization.
Historiography has a number of related meanings. Firstly, it can refer to how history has been produced: the story of the development of methodology and practices (for example, the move from short-term biographical narrative towards long-term thematic analysis). Secondly, it can refer to what has been produced: a specific body of historical writing (for example, "medieval historiography during the 1960s" means "Works of medieval history written during the 1960s"). Thirdly, it may refer to why history is produced: the Philosophy of history. As a meta-level analysis of descriptions of the past, this third conception can relate to the first two in that the analysis usually focuses on the narratives, interpretations, world view, use of evidence, or method of presentation of other historians. Professional historians also debate the question of whether history can be taught as a single coherent narrative or a series of competing narratives.[27][28]
The following questions are used by historians in modern work: When was the source, written or unwritten, produced (date)? Where was it produced (localization)? By whom was it produced (authorship)? From what pre-existing material was it produced (analysis)? In what original form was it produced (integrity)? What is the evidential value of its contents (credibility)?
The first four are known as historical criticism; the fifth, textual criticism; and, together, external criticism. The sixth and final inquiry about a source is called internal criticism.
The historical method comprises the techniques and guidelines by which historians use primary sources and other evidence to research and then to write history.
Herodotus of Halicarnassus (484 BC – c. 425 BC)[29] has generally been acclaimed as the "father of history". However, his contemporary Thucydides (c. 460 BC – c. 400 BC) is credited with having first approached history with a well-developed historical method in his work the History of the Peloponnesian War. Thucydides, unlike Herodotus, regarded history as the product of the choices and actions of human beings rather than the result of divine intervention, and looked at cause and effect (though Herodotus was not wholly committed to this idea himself).[29] In his historical method, Thucydides emphasized chronology, a nominally neutral point of view, and the idea that the human world was the result of the actions of human beings. Greek historians also viewed history as cyclical, with events regularly recurring.[30]
There were historical traditions and sophisticated use of historical method in ancient and medieval China. The groundwork for professional historiography in East Asia was established by the Han dynasty court historian Sima Qian (145–90 BC), author of the Records of the Grand Historian (Shiji). For the quality of his written work, Sima Qian is posthumously known as the Father of Chinese historiography. Chinese historians of subsequent dynastic periods in China used his Shiji as the official format for historical texts, as well as for biographical literature.
Saint Augustine was influential in Christian and Western thought at the beginning of the medieval period. Through the Medieval and Renaissance periods, history was often studied through a sacred or religious perspective. Around 1800, the German philosopher and historian Georg Wilhelm Friedrich Hegel brought philosophy and a more secular approach to historical study.[24]
In the preface to his book, the Muqaddimah (1377), the Arab historian and early sociologist, Ibn Khaldun, warned of seven mistakes that he thought that historians regularly committed. In this criticism, he approached the past as strange and in need of interpretation. The originality of Ibn Khaldun was to claim that the cultural difference of another age must govern the evaluation of relevant historical material, to distinguish the principles according to which it might be possible to attempt the evaluation, and lastly, to feel the need for experience, in addition to rational principles, in order to assess a culture of the past. Ibn Khaldun often criticized "idle superstition and uncritical acceptance of historical data." As a result, he introduced a scientific method to the study of history, and he often referred to it as his "new science".[31] His historical method also laid the groundwork for the observation of the role of state, communication, propaganda and systematic bias in history,[32] and he is thus considered to be the "father of historiography"[33][34] or the "father of the philosophy of history".[35]
In the West, historians developed modern methods of historiography in the 17th and 18th centuries, especially in France and Germany. In 1851, Herbert Spencer summarized these methods:
From the successive strata of our historical deposits, they [Historians] diligently gather all the highly colored fragments, pounce upon everything that is curious and sparkling and chuckle like children over their glittering acquisitions; meanwhile the rich veins of wisdom that ramify amidst this worthless debris, lie utterly neglected. Cumbrous volumes of rubbish are greedily accumulated, while those masses of rich ore, that should have been dug out, and from which golden truths might have been smelted, are left untaught and unsought.[36]
By the "rich ore" Spencer meant scientific theory of history. Meanwhile, Henry Thomas Buckle expressed a dream of history becoming one day science:
|
56 |
+
|
57 |
+
In regard to nature, events apparently the most irregular and capricious have been explained and have been shown to be in accordance with certain fixed and universal laws. This has been done because men of ability and, above all, men of patient, untiring thought have studied events with the view of discovering their regularity, and if human events were subject to a similar treatment, we have every right to expect similar results.[37]
Contrary to Buckle's dream, the 19th-century historian with the greatest influence on methods was Leopold von Ranke in Germany. He limited history to "what really happened" and thereby directed the field further away from science. For Ranke, historical data should be collected carefully, examined objectively and put together with critical rigor. But these procedures "are merely the prerequisites and preliminaries of science. The heart of science is searching out order and regularity in the data being examined and in formulating generalizations or laws about them."[38]
As historians like Ranke and many who followed him have pursued it, history is not a science. If the historian tells us that, given the manner in which he practices his craft, history cannot be considered a science, we must take him at his word: if he is not doing science, then, whatever else he is doing, he is not doing science. The traditional historian is thus no scientist, and history, as conventionally practiced, is not a science.[39]
In the 20th century, academic historians focused less on epic nationalistic narratives, which often tended to glorify the nation or great men, and more on objective and complex analyses of social and intellectual forces. A major trend of historical methodology in the 20th century was a tendency to treat history more as a social science rather than as an art, as had traditionally been the case. Some of the leading advocates of history as a social science were a diverse collection of scholars which included Fernand Braudel, E. H. Carr, Fritz Fischer, Emmanuel Le Roy Ladurie, Hans-Ulrich Wehler, Bruce Trigger, Marc Bloch, Karl Dietrich Bracher, Peter Gay, Robert Fogel, Lucien Febvre and Lawrence Stone. Many of the advocates of history as a social science were or are noted for their multi-disciplinary approach: Braudel combined history with geography, Bracher history with political science, Fogel history with economics, Gay history with psychology, and Trigger history with archaeology, while Wehler, Bloch, Fischer, Stone, Febvre and Le Roy Ladurie have in varying and differing ways amalgamated history with sociology, geography, anthropology, and economics. Nevertheless, these multidisciplinary approaches failed to produce a theory of history. So far only one theory of history has come from the pen of a professional historian;[40] whatever other theories of history we have were written by experts from other fields (for example, the Marxian theory of history). More recently, the field of digital history has begun to address ways of using computer technology to pose new questions to historical data and generate digital scholarship.
In opposition to the claims of history as a social science, historians such as Hugh Trevor-Roper, John Lukacs, Donald Creighton, Gertrude Himmelfarb and Gerhard Ritter argued that the key to the historians' work was the power of the imagination, and hence contended that history should be understood as an art. French historians associated with the Annales School introduced quantitative history, using raw data to track the lives of typical individuals, and were prominent in the establishment of cultural history (cf. histoire des mentalités). Intellectual historians such as Herbert Butterfield, Ernst Nolte and George Mosse have argued for the significance of ideas in history. American historians, motivated by the civil rights era, focused on formerly overlooked ethnic, racial, and socio-economic groups. Another genre of social history to emerge in the post-WWII era was Alltagsgeschichte (History of Everyday Life). Scholars such as Martin Broszat, Ian Kershaw and Detlev Peukert sought to examine what everyday life was like for ordinary people in 20th-century Germany, especially in the Nazi period.
Marxist historians such as Eric Hobsbawm, E. P. Thompson, Rodney Hilton, Georges Lefebvre, Eugene Genovese, Isaac Deutscher, C. L. R. James, Timothy Mason, Herbert Aptheker, Arno J. Mayer and Christopher Hill have sought to validate Karl Marx's theories by analyzing history from a Marxist perspective. In response to the Marxist interpretation of history, historians such as François Furet, Richard Pipes, J. C. D. Clark, Roland Mousnier, Henry Ashby Turner and Robert Conquest have offered anti-Marxist interpretations of history. Feminist historians such as Joan Wallach Scott, Claudia Koonz, Natalie Zemon Davis, Sheila Rowbotham, Gisela Bock, Gerda Lerner, Elizabeth Fox-Genovese, and Lynn Hunt have argued for the importance of studying the experience of women in the past. In recent years, postmodernists have challenged the validity and need for the study of history on the basis that all history is based on the personal interpretation of sources. In his 1997 book In Defence of History, Richard J. Evans defended the worth of history. Another defence of history from post-modernist criticism was the Australian historian Keith Windschuttle's 1994 book, The Killing of History.
Today, most historians begin their research process in the archives, on either a physical or digital platform. They often propose an argument and use their research to support it. John H. Arnold proposed that history is an argument, which creates the possibility of creating change.[5] Digital information companies, such as Google, have sparked controversy over the role of internet censorship in information access.[41]
The Marxist theory of historical materialism theorises that society is fundamentally determined by the material conditions at any given time – in other words, the relationships which people have with each other in order to fulfill basic needs such as feeding, clothing and housing themselves and their families.[42] Overall, Marx and Engels claimed to have identified five successive stages of the development of these material conditions in Western Europe.[43] Marxist historiography was once orthodoxy in the Soviet Union, but since the collapse of communism there in 1991, Mikhail Krom says it has been reduced to the margins of scholarship.[44]
Many historians believe that the production of history is embedded with bias because events and known facts in history can be interpreted in a variety of ways. Constantin Fasolt suggested that history is linked to politics by the practice of silence itself.[45] “A second common view of the link between history and politics rests on the elementary observation that historians are often influenced by politics.”[45] According to Michel-Rolph Trouillot, the historical process is rooted in the archives, therefore silences, or parts of history that are forgotten, may be an intentional part of a narrative strategy that dictates how areas of history are remembered.[20] Historical omissions can occur in many ways and can have a profound effect on historical records. Information can also purposely be excluded or left out accidentally. Historians have coined multiple terms that describe the act of omitting historical information, including: “silencing,”[20] “selective memory,”[46] and erasures.[47] Gerda Lerner, a twentieth century historian who focused much of her work on historical omissions involving women and their accomplishments, explained the negative impact that these omissions had on minority groups.[46]
Environmental historian William Cronon proposed three ways to combat bias and ensure authentic and accurate narratives: narratives must not contradict known fact, they must make ecological sense (specifically for environmental history), and published work must be reviewed by scholarly community and other historians to ensure accountability.[47]
These are approaches to history; not listed are histories of other fields, such as history of science, history of mathematics and history of philosophy.
Historical study often focuses on events and developments that occur in particular blocks of time. Historians give these periods of time names in order to allow "organising ideas and classificatory generalisations" to be used by historians.[48] The names given to a period can vary with geographical location, as can the dates of the beginning and end of a particular period. Centuries and decades are commonly used periods and the time they represent depends on the dating system used. Most periods are constructed retrospectively and so reflect value judgments made about the past. The way periods are constructed and the names given to them can affect the way they are viewed and studied.[49]
The field of history generally leaves prehistory to the archaeologists, who have entirely different sets of tools and theories. The usual method for periodisation of the distant prehistoric past in archaeology is to rely on changes in material culture and technology, such as the Stone Age, Bronze Age and Iron Age, and their sub-divisions, which are also based on different styles of material remains. Here prehistory is divided into a series of "chapters" so that periods in history can unfold not only in a relative chronology but also in a narrative chronology.[50] This narrative content could be in the form of functional-economic interpretation. There are periodisations, however, that do not have this narrative aspect, relying largely on relative chronology alone and thus devoid of any specific meaning.
Despite the development over recent decades of the ability through radiocarbon dating and other scientific methods to give actual dates for many sites or artefacts, these long-established schemes seem likely to remain in use. In many cases neighbouring cultures with writing have left some history of cultures without it, which may be used. Periodisation, however, is not viewed as a perfect framework with one account explaining that "cultural changes do not conveniently start and stop (combinedly) at periodisation boundaries" and that different trajectories of change are also needed to be studied in their own right before they get intertwined with cultural phenomena.[51]
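As an aside on the scientific methods mentioned here, the core of radiocarbon dating is a simple exponential-decay calculation; real work adds calibration against tree-ring and other records. A minimal Python sketch using the conventional 5,730-year half-life of carbon-14:

```python
import math

HALF_LIFE_C14 = 5730.0  # years; calibration is deliberately ignored here

def radiocarbon_age(remaining_fraction):
    """Years elapsed, given the fraction of the original C-14 remaining."""
    return -HALF_LIFE_C14 * math.log2(remaining_fraction)

print(round(radiocarbon_age(0.50)))  # 5730   -- one half-life
print(round(radiocarbon_age(0.25)))  # 11460  -- two half-lives
```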
Particular geographical locations can form the basis of historical study, for example, continents, countries, and cities. Understanding why historic events took place is important. To do this, historians often turn to geography. According to Jules Michelet in his book Histoire de France (1833), "without geographical basis, the people, the makers of history, seem to be walking on air."[52] Weather patterns, the water supply, and the landscape of a place all affect the lives of the people who live there. For example, to explain why the ancient Egyptians developed a successful civilization, studying the geography of Egypt is essential. Egyptian civilization was built on the banks of the Nile River, which flooded each year, depositing soil on its banks. The rich soil could help farmers grow enough crops to feed the people in the cities. That meant everyone did not have to farm, so some people could perform other jobs that helped develop the civilization. There is also the case of climate, which historians like Ellsworth Huntington and Ellen Churchill Semple cited as a crucial influence on the course of history and racial temperament.[53]
Military history concerns warfare, strategies, battles, weapons, and the psychology of combat. The "new military history" since the 1970s has been concerned with soldiers more than generals, with psychology more than tactics, and with the broader impact of warfare on society and culture.[54]
The history of religion has been a main theme for both secular and religious historians for centuries, and continues to be taught in seminaries and academe. Leading journals include Church History, The Catholic Historical Review, and History of Religions. Topics range widely from political and cultural and artistic dimensions, to theology and liturgy.[55] This subject studies religions from all regions and areas of the world where humans have lived.[56]
Social history, sometimes called the new social history, is the field that includes history of ordinary people and their strategies and institutions for coping with life.[57] In its "golden age" it was a major growth field in the 1960s and 1970s among scholars, and still is well represented in history departments. In two decades from 1975 to 1995, the proportion of professors of history in American universities identifying with social history rose from 31% to 41%, while the proportion of political historians fell from 40% to 30%.[58] In the history departments of British universities in 2007, of the 5723 faculty members, 1644 (29%) identified themselves with social history while political history came next with 1425 (25%).[59]
The "old" social history before the 1960s was a hodgepodge of topics without a central theme, and it often included political movements, like Populism, that were "social" in the sense of being outside the elite system. Social history was contrasted with political history, intellectual history and the history of great men. English historian G. M. Trevelyan saw it as the bridging point between economic and political history, reflecting that, "Without social history, economic history is barren and political history unintelligible."[60] While the field has often been viewed negatively as history with the politics left out, it has also been defended as "history with the people put back in."[61]
The chief subfields of social history include:
Cultural history replaced social history as the dominant form in the 1980s and 1990s. It typically combines the approaches of anthropology and history to look at language, popular cultural traditions and cultural interpretations of historical experience. It examines the records and narrative descriptions of past knowledge, customs, and arts of a group of people. How peoples constructed their memory of the past is a major topic.
Cultural history includes the study of art in society as well as the study of images and human visual production (iconography).[62]
Diplomatic history focuses on the relationships between nations, primarily regarding diplomacy and the causes of wars. More recently it looks at the causes of peace and human rights. It typically presents the viewpoints of the foreign office, and long-term strategic values, as the driving force of continuity and change in history. This type of political history is the study of the conduct of international relations between states or across state boundaries over time. Historian Muriel Chamberlain notes that after the First World War, "diplomatic history replaced constitutional history as the flagship of historical investigation, at once the most important, most exact and most sophisticated of historical studies."[63] She adds that after 1945, the trend reversed, allowing social history to replace it.
Although economic history has been well established since the late 19th century, in recent years academic studies have shifted more and more toward economics departments and away from traditional history departments.[64] Business history deals with the history of individual business organizations, business methods, government regulation, labour relations, and impact on society. It also includes biographies of individual companies, executives, and entrepreneurs. It is related to economic history, and is most often taught in business schools.[65]
Environmental history is a new field that emerged in the 1980s to look at the history of the environment, especially in the long run, and the impact of human activities upon it.[66] It is an offshoot of the environmental movement, which was kickstarted by Rachel Carson's Silent Spring in the 1960s.
World history is the study of major civilizations over the last 3000 years or so. World history is primarily a teaching field, rather than a research field. It gained popularity in the United States,[67] Japan[68] and other countries after the 1980s with the realization that students need a broader exposure to the world as globalization proceeds.
It has led to highly controversial interpretations by Oswald Spengler and Arnold J. Toynbee, among others.
The World History Association has published the Journal of World History quarterly since 1990.[69] The H-World discussion list[70] serves as a network of communication among practitioners of world history, with discussions among scholars, announcements, syllabi, bibliographies and book reviews.
A people's history is a type of historical work which attempts to account for historical events from the perspective of common people. A people's history is the history of the world as the story of mass movements and of outsiders. Individuals or groups not included in other types of historical writing are the primary focus; this includes the disenfranchised, the oppressed, the poor, the nonconformists, and the otherwise forgotten people. The authors are typically on the left and have a socialist model in mind, as in the approach of the History Workshop movement in Britain in the 1960s.[71]
Intellectual history and the history of ideas emerged in the mid-20th century, with the focus on the intellectuals and their books on the one hand, and on the other the study of ideas as disembodied objects with a career of their own.[72][73]
Gender history is a subfield of History and Gender studies, which looks at the past from the perspective of gender. The outgrowth of gender history from women's history stemmed from many non-feminist historians dismissing the importance of women in history. According to Joan W. Scott, “Gender is a constitutive element of social relationships based on perceived differences between the sexes, and gender is a primary way of signifying relations of power,”[74] meaning that gender historians study the social effects of perceived differences between the sexes and how all genders utilize allotted power in societal and political structures. Despite being a relatively new field, gender history has had a significant effect on the general study of history. Gender history traditionally differs from women's history in its inclusion of all aspects of gender such as masculinity and femininity, and today's gender history extends to include people who identify outside of that binary.
Public history describes the broad range of activities undertaken by people with some training in the discipline of history who are generally working outside of specialized academic settings. Public history practice has quite deep roots in the areas of historic preservation, archival science, oral history, museum curatorship, and other related fields. The term itself began to be used in the U.S. and Canada in the late 1970s, and the field has become increasingly professionalized since that time. Some of the most common settings for public history are museums, historic homes and historic sites, parks, battlefields, archives, film and television companies, and all levels of government.[75]
LGBT history deals with the first recorded instances of same-sex love and sexuality in ancient civilizations, and involves the history of lesbian, gay, bisexual and transgender (LGBT) peoples and cultures around the world. A common feature of LGBTQ+ history is the focus on oral history and individual perspectives, in addition to traditional documents within the archives.
Professional and amateur historians discover, collect, organize, and present information about past events. They discover this information through archaeological evidence, written primary sources, verbal stories or oral histories, and other archival material. In lists of historians, historians can be grouped by order of the historical period in which they were writing, which is not necessarily the same as the period in which they specialized. Chroniclers and annalists, though they are not historians in the true sense, are also frequently included.
Since the 20th century, Western historians have disavowed the aspiration to provide the "judgement of history."[76] The goals of historical judgements or interpretations are separate from those of legal judgements, which need to be formulated quickly after the events and to be final.[77] A related issue to that of the judgement of history is that of collective memory.
Pseudohistory is a term applied to texts which purport to be historical in nature but which depart from standard historiographical conventions in a way which undermines their conclusions.
It is closely related to deceptive historical revisionism. Works which draw controversial conclusions from new, speculative, or disputed historical evidence, particularly in the fields of national, political, military, and religious affairs, are often rejected as pseudohistory.
|
127 |
+
|
128 |
+
A major intellectual battle took place in Britain in the early twentieth century regarding the place of history teaching in the universities. At Oxford and Cambridge, scholarship was downplayed. Professor Charles Harding Firth, Oxford's Regius Professor of history in 1904 ridiculed the system as best suited to produce superficial journalists. The Oxford tutors, who had more votes than the professors, fought back in defence of their system saying that it successfully produced Britain's outstanding statesmen, administrators, prelates, and diplomats, and that mission was as valuable as training scholars. The tutors dominated the debate until after the Second World War. It forced aspiring young scholars to teach at outlying schools, such as Manchester University, where Thomas Frederick Tout was professionalizing the History undergraduate programme by introducing the study of original sources and requiring the writing of a thesis.[78][79]
|
129 |
+
|
130 |
+
In the United States, scholarship was concentrated at the major PhD-producing universities, while the large number of other colleges and universities focused on undergraduate teaching. A tendency in the 21st century was for the latter schools to increasingly demand scholarly productivity of their younger tenure-track faculty. Furthermore, universities have increasingly relied on inexpensive part-time adjuncts to do most of the classroom teaching.[80]
|
131 |
+
|
132 |
+
From the origins of national school systems in the 19th century, the teaching of history to promote national sentiment has been a high priority. In the United States after World War I, a strong movement emerged at the university level to teach courses in Western Civilization, so as to give students a common heritage with Europe. In the U.S. after 1980, attention increasingly moved toward teaching world history or requiring students to take courses in non-western cultures, to prepare students for life in a globalized economy.[81]
|
133 |
+
|
134 |
+
At the university level, historians debate the question of whether history belongs more to social science or to the humanities. Many view the field from both perspectives.
|
135 |
+
|
136 |
+
The teaching of history in French schools was influenced by the Nouvelle histoire as disseminated after the 1960s by Cahiers pédagogiques and Enseignement and other journals for teachers. Also influential was the Institut national de recherche et de documentation pédagogique, (INRDP). Joseph Leif, the Inspector-general of teacher training, said pupils children should learn about historians' approaches as well as facts and dates. Louis François, Dean of the History/Geography group in the Inspectorate of National Education advised that teachers should provide historic documents and promote "active methods" which would give pupils "the immense happiness of discovery." Proponents said it was a reaction against the memorization of names and dates that characterized teaching and left the students bored. Traditionalists protested loudly it was a postmodern innovation that threatened to leave the youth ignorant of French patriotism and national identity.[82]
|
137 |
+
|
138 |
+
In several countries history textbooks are tools to foster nationalism and patriotism, and give students the official narrative about national enemies.[83]
|
139 |
+
|
140 |
+
In many countries, history textbooks are sponsored by the national government and are written to put the national heritage in the most favourable light. For example, in Japan, mention of the Nanking Massacre has been removed from textbooks and the entire Second World War is given cursory treatment. Other countries have complained.[84] It was standard policy in communist countries to present only a rigid Marxist historiography.[85][86]
|
141 |
+
|
142 |
+
In the United States, textbooks published by the same company often differ in content from state to state.[87] An example of content that is represented different in different regions of the country is the history of the Southern states, where slavery and the American Civil War are treated as controversial topics. McGraw-Hill Education for example, was criticised for describing Africans brought to American plantations as "workers" instead of slaves in a textbook.[88]
|
143 |
+
|
144 |
+
Academic historians have often fought against the politicization of the textbooks, sometimes with success.[89][90]
|
145 |
+
|
146 |
+
In 21st-century Germany, the history curriculum is controlled by the 16 states, and is characterized not by superpatriotism but rather by an "almost pacifistic and deliberately unpatriotic undertone" and reflects "principles formulated by international organizations such as UNESCO or the Council of Europe, thus oriented towards human rights, democracy and peace." The result is that "German textbooks usually downplay national pride and ambitions and aim to develop an understanding of citizenship centered on democracy, progress, human rights, peace, tolerance and Europeanness."[91]
|
147 |
+
|
en/2586.html.txt
ADDED
@@ -0,0 +1,147 @@
1 |
+
"Those who cannot remember the past are condemned to repeat it." —George Santayana
|
2 |
+
|
3 |
+
History (from Greek ἱστορία, historia, meaning 'inquiry; knowledge acquired by investigation')[2] is the study of the past.[3][4] Events occurring before the invention of writing systems are considered prehistory. "History" is an umbrella term that relates to past events as well as the memory, discovery, collection, organization, presentation, and interpretation of information about these events. Scholars who focus on history are called historians. The historian's role is to place the past in context, using sources from moments and events, and filling in the gaps to the best of their ability.[5] Written documents are not the only sources historians use to develop their understanding of the past. They also use material objects, oral accounts, ecological markers, art, and artifacts as historical sources.
|
4 |
+
|
5 |
+
History also includes the academic discipline which uses narrative to describe, examine, question, and analyze a sequence of past events, and to investigate the patterns of cause and effect that are related to them.[6][7] Historians seek to understand and represent the past through narratives. They often debate which narrative best explains an event, as well as the significance of different causes and effects. Historians also debate the nature of history and its usefulness by discussing the study of the discipline as an end in itself and as a way of providing "perspective" on the problems of the present.[6][8][9][10]
|
6 |
+
|
7 |
+
Stories common to a particular culture, but not supported by external sources (such as the tales surrounding King Arthur), are usually classified as cultural heritage or legends.[11][12] History differs from myth in that it is supported by evidence. However, ancient influences have helped spawn variant interpretations of the nature of history which have evolved over the centuries and continue to change today. The modern study of history is wide-ranging, and includes the study of specific regions and the study of certain topical or thematic elements of historical investigation. History is often taught as part of primary and secondary education, and the academic study of history is a major discipline in university studies.
|
8 |
+
|
9 |
+
Herodotus, a 5th-century BC Greek historian, is often considered (within the Western tradition) to be the "father of history," or, by some, the "father of lies." Along with his contemporary Thucydides, he helped form the foundations for the modern study of human history. Their works continue to be read today, and the gap between the culture-focused Herodotus and the military-focused Thucydides remains a point of contention in modern historical writing. In East Asia, a state chronicle, the Spring and Autumn Annals, was known to be compiled from as early as 722 BC, although only 2nd-century BC texts have survived.
|
10 |
+
|
11 |
+
The word history comes from the Ancient Greek ἱστορία[13] (historía), meaning 'inquiry', 'knowledge from inquiry', or 'judge'. It was in that sense that Aristotle used the word in his History of Animals.[14] The ancestor word ἵστωρ is attested early on in Homeric Hymns, Heraclitus, the Athenian ephebes' oath, and in Boiotic inscriptions (in a legal sense, either 'judge' or 'witness', or similar). The Greek word was borrowed into Classical Latin as historia, meaning "investigation, inquiry, research, account, description, written account of past events, writing of history, historical narrative, recorded knowledge of past events, story, narrative". History was borrowed from Latin (possibly via Old Irish or Old Welsh) into Old English as stær ('history, narrative, story'), but this word fell out of use in the late Old English period.[15] Meanwhile, as Latin became Old French (and Anglo-Norman), historia developed into forms such as istorie, estoire, and historie, with new developments in the meaning: "account of the events of a person's life (beginning of the 12th century), chronicle, account of events as relevant to a group of people or people in general (1155), dramatic or pictorial representation of historical events (c. 1240), body of knowledge relative to human evolution, science (c. 1265), narrative of real or imaginary events, story (c. 1462)".[15]
|
12 |
+
|
13 |
+
It was from Anglo-Norman that history was borrowed into Middle English, and this time the loan stuck. It appears in the 13th-century Ancrene Wisse, but seems to have become a common word in the late 14th century, with an early attestation appearing in John Gower's Confessio Amantis of the 1390s (VI.1383): "I finde in a bok compiled | To this matiere an old histoire, | The which comth nou to mi memoire". In Middle English, the meaning of history was "story" in general. The restriction to the meaning "the branch of knowledge that deals with past events; the formal record or study of past events, esp. human affairs" arose in the mid-15th century.[15] With the Renaissance, older senses of the word were revived, and it was in the Greek sense that Francis Bacon used the term in the late 16th century, when he wrote about "Natural History". For him, historia was "the knowledge of objects determined by space and time", that sort of knowledge provided by memory (while science was provided by reason, and poetry was provided by fantasy).[16]
|
14 |
+
|
15 |
+
In an expression of the linguistic synthetic vs. analytic/isolating dichotomy, English, like Chinese (史 vs. 诌), now designates separate words for human history and storytelling in general. In modern German, French, and most Germanic and Romance languages, which are solidly synthetic and highly inflected, the same word is still used to mean both 'history' and 'story'. Historian in the sense of a "researcher of history" is attested from 1531. In all European languages, the substantive history is still used to mean both "what happened to people" and "the scholarly study of what happened", the latter sense sometimes distinguished with a capital letter or with the word historiography.[14] The adjective historical is attested from 1661, and historic from 1669.[17]
|
16 |
+
|
17 |
+
Historians write in the context of their own time, and with due regard to the current dominant ideas of how to interpret the past, and sometimes write to provide lessons for their own society. In the words of Benedetto Croce, "All history is contemporary history". History is facilitated by the formation of a "true discourse of past" through the production of narrative and analysis of past events relating to the human race.[18] The modern discipline of history is dedicated to the institutional production of this discourse.
|
18 |
+
|
19 |
+
All events that are remembered and preserved in some authentic form constitute the historical record.[19] The task of historical discourse is to identify the sources which can most usefully contribute to the production of accurate accounts of the past. Therefore, the constitution of the historian's archive is a result of circumscribing a more general archive by invalidating the usage of certain texts and documents (by falsifying their claims to represent the "true past"). Part of the historian's role is to skillfully and objectively utilize the vast amount of sources from the past, most often found in the archives. The process of creating a narrative inevitably generates a silence as historians remember or emphasize different events of the past.[20]
|
20 |
+
|
21 |
+
The study of history has sometimes been classified as part of the humanities and at other times as part of the social sciences.[21] It can also be seen as a bridge between those two broad areas, incorporating methodologies from both. Some individual historians strongly support one or the other classification.[22] In the 20th century, French historian Fernand Braudel revolutionized the study of history, by using such outside disciplines as economics, anthropology, and geography in the study of global history.
|
22 |
+
|
23 |
+
Traditionally, historians have recorded events of the past, either in writing or by passing on an oral tradition, and have attempted to answer historical questions through the study of written documents and oral accounts. From the beginning, historians have also used such sources as monuments, inscriptions, and pictures. In general, the sources of historical knowledge can be separated into three categories: what is written, what is said, and what is physically preserved, and historians often consult all three.[23] But writing is the marker that separates history from what comes before.
|
24 |
+
|
25 |
+
Archaeology is a discipline that is especially helpful in dealing with buried sites and objects, which, once unearthed, contribute to the study of history. But archaeology rarely stands alone; it uses narrative sources to complement its discoveries. However, archaeology is constituted by a range of methodologies and approaches which are independent from history; that is to say, archaeology does not "fill the gaps" within textual sources. Indeed, "historical archaeology" is a specific branch of archaeology, often contrasting its conclusions against those of contemporary textual sources. For example, Mark Leone, the excavator and interpreter of historical Annapolis, Maryland, USA, has sought to understand the contradiction between textual documents and the material record, demonstrating the possession of slaves and the inequalities of wealth apparent via the study of the total historical environment, despite the ideology of "liberty" inherent in written documents at this time.
|
26 |
+
|
27 |
+
There are a variety of ways in which history can be organized, including chronologically, culturally, territorially, and thematically. These divisions are not mutually exclusive, and significant intersections are often present. It is possible for historians to concern themselves with both the very specific and the very general, although the modern trend has been toward specialization. The area called Big History resists this specialization, and searches for universal patterns or trends. History has often been studied with some practical or theoretical aim, but also may be studied out of simple intellectual curiosity.[24]
|
28 |
+
|
29 |
+
The history of the world is the memory of the past experience of Homo sapiens sapiens around the world, as that experience has been preserved, largely in written records. By "prehistory", historians mean the recovery of knowledge of the past in an area where no written records exist, or where the writing of a culture is not understood. By studying painting, drawings, carvings, and other artifacts, some information can be recovered even in the absence of a written record. Since the 20th century, the study of prehistory is considered essential to avoid history's implicit exclusion of certain civilizations, such as those of Sub-Saharan Africa and pre-Columbian America. Historians in the West have been criticized for focusing disproportionately on the Western world.[25] In 1961, British historian E. H. Carr wrote:
|
30 |
+
|
31 |
+
The line of demarcation between prehistoric and historical times is crossed when people cease to live only in the present, and become consciously interested both in their past and in their future. History begins with the handing down of tradition; and tradition means the carrying of the habits and lessons of the past into the future. Records of the past begin to be kept for the benefit of future generations.[26]
|
32 |
+
|
33 |
+
This definition includes within the scope of history the strong interests of peoples, such as Indigenous Australians and New Zealand Māori in the past, and the oral records maintained and transmitted to succeeding generations, even before their contact with European civilization.
|
34 |
+
|
35 |
+
Historiography has a number of related meanings. Firstly, it can refer to how history has been produced: the story of the development of methodology and practices (for example, the move from short-term biographical narrative towards long-term thematic analysis). Secondly, it can refer to what has been produced: a specific body of historical writing (for example, "medieval historiography during the 1960s" means "Works of medieval history written during the 1960s"). Thirdly, it may refer to why history is produced: the Philosophy of history. As a meta-level analysis of descriptions of the past, this third conception can relate to the first two in that the analysis usually focuses on the narratives, interpretations, world view, use of evidence, or method of presentation of other historians. Professional historians also debate the question of whether history can be taught as a single coherent narrative or a series of competing narratives.[27][28]
|
36 |
+
|
37 |
+
The following questions are used by historians in modern work: When was the source, written or unwritten, produced (date)? Where was it produced (localization)? By whom was it produced (authorship)? From what pre-existing material was it produced (analysis)? In what original form was it produced (integrity)? What is the evidential value of its contents (credibility)?
|
38 |
+
|
39 |
+
The first four are known as historical criticism; the fifth, textual criticism; and, together, external criticism. The sixth and final inquiry about a source is called internal criticism.
|
40 |
+
|
41 |
+
The historical method comprises the techniques and guidelines by which historians use primary sources and other evidence to research and then to write history.
|
42 |
+
|
43 |
+
Herodotus of Halicarnassus (484 BC – c. 425 BC)[29] has generally been acclaimed as the "father of history". However, his contemporary Thucydides (c. 460 BC – c. 400 BC) is credited with having first approached history with a well-developed historical method in his work the History of the Peloponnesian War. Thucydides, unlike Herodotus, regarded history as being the product of the choices and actions of human beings rather than the result of divine intervention, and looked at cause and effect (though Herodotus was not wholly committed to this idea himself).[29] In his historical method, Thucydides emphasized chronology, a nominally neutral point of view, and the idea that the human world was the result of the actions of human beings. Greek historians also viewed history as cyclical, with events regularly recurring.[30]
|
44 |
+
|
45 |
+
There were historical traditions and sophisticated use of historical method in ancient and medieval China. The groundwork for professional historiography in East Asia was established by the Han dynasty court historian Sima Qian (145–90 BC), author of the Records of the Grand Historian (Shiji). For the quality of his written work, Sima Qian is posthumously known as the Father of Chinese historiography. Chinese historians of subsequent dynastic periods in China used his Shiji as the official format for historical texts, as well as for biographical literature.
|
46 |
+
|
47 |
+
Saint Augustine was influential in Christian and Western thought at the beginning of the medieval period. Through the Medieval and Renaissance periods, history was often studied through a sacred or religious perspective. Around 1800, German philosopher and historian Georg Wilhelm Friedrich Hegel brought philosophy and a more secular approach to historical study.[24]
|
48 |
+
|
49 |
+
In the preface to his book, the Muqaddimah (1377), the Arab historian and early sociologist, Ibn Khaldun, warned of seven mistakes that he thought that historians regularly committed. In this criticism, he approached the past as strange and in need of interpretation. The originality of Ibn Khaldun was to claim that the cultural difference of another age must govern the evaluation of relevant historical material, to distinguish the principles according to which it might be possible to attempt the evaluation, and lastly, to feel the need for experience, in addition to rational principles, in order to assess a culture of the past. Ibn Khaldun often criticized "idle superstition and uncritical acceptance of historical data." As a result, he introduced a scientific method to the study of history, and he often referred to it as his "new science".[31] His historical method also laid the groundwork for the observation of the role of state, communication, propaganda and systematic bias in history,[32] and he is thus considered to be the "father of historiography"[33][34] or the "father of the philosophy of history".[35]
|
50 |
+
|
51 |
+
In the West, historians developed modern methods of historiography in the 17th and 18th centuries, especially in France and Germany. In 1851, Herbert Spencer summarized these methods:
|
52 |
+
|
53 |
+
From the successive strata of our historical deposits, they [Historians] diligently gather all the highly colored fragments, pounce upon everything that is curious and sparkling and chuckle like children over their glittering acquisitions; meanwhile the rich veins of wisdom that ramify amidst this worthless debris, lie utterly neglected. Cumbrous volumes of rubbish are greedily accumulated, while those masses of rich ore, that should have been dug out, and from which golden truths might have been smelted, are left untaught and unsought.[36]
|
54 |
+
|
55 |
+
By the "rich ore" Spencer meant scientific theory of history. Meanwhile, Henry Thomas Buckle expressed a dream of history becoming one day science:
|
56 |
+
|
57 |
+
In regard to nature, events apparently the most irregular and capricious have been explained and have been shown to be in accordance with certain fixed and universal laws. This has been done because men of ability and, above all, men of patient, untiring thought have studied events with the view of discovering their regularity, and if human events were subject to a similar treatment, we have every right to expect similar results.[37]
|
58 |
+
|
59 |
+
Contrary to Buckle's dream, the 19th-century historian with greatest influence on methods became Leopold von Ranke in Germany. He limited history to “what really happened” and by this directed the field further away from science. For Ranke, historical data should be collected carefully, examined objectively and put together with critical rigor. But these procedures “are merely the prerequisites and preliminaries of science. The heart of science is searching out order and regularity in the data being examined and in formulating generalizations or laws about them.”[38]
|
60 |
+
|
61 |
+
As historians like Ranke and many who followed him have pursued it, history is not a science. If the historian tells us that, given the manner in which he practices his craft, his field cannot be considered a science, we must take him at his word. If he is not doing science, then, whatever else he is doing, he is not doing science. The traditional historian is thus no scientist, and history, as conventionally practiced, is not a science.[39]
|
62 |
+
|
63 |
+
In the 20th century, academic historians moved away from epic nationalistic narratives, which often tended to glorify the nation or great men, toward more objective and complex analyses of social and intellectual forces. A major trend of historical methodology in the 20th century was a tendency to treat history more as a social science rather than as an art, which traditionally had been the case. Some of the leading advocates of history as a social science were a diverse collection of scholars which included Fernand Braudel, E. H. Carr, Fritz Fischer, Emmanuel Le Roy Ladurie, Hans-Ulrich Wehler, Bruce Trigger, Marc Bloch, Karl Dietrich Bracher, Peter Gay, Robert Fogel, Lucien Febvre and Lawrence Stone. Many of the advocates of history as a social science were or are noted for their multi-disciplinary approach. Braudel combined history with geography, Bracher history with political science, Fogel history with economics, Gay history with psychology, and Trigger history with archaeology, while Wehler, Bloch, Fischer, Stone, Febvre and Le Roy Ladurie have in varying and differing ways amalgamated history with sociology, geography, anthropology, and economics. Nevertheless, these multidisciplinary approaches failed to produce a theory of history. So far only one theory of history has come from the pen of a professional historian.[40] Whatever other theories of history we have were written by experts from other fields (for example, the Marxian theory of history). More recently, the field of digital history has begun to address ways of using computer technology to pose new questions to historical data and generate digital scholarship.
|
64 |
+
|
65 |
+
In sincere opposition to the claims of history as a social science, historians such as Hugh Trevor-Roper, John Lukacs, Donald Creighton, Gertrude Himmelfarb and Gerhard Ritter argued that the key to the historians' work was the power of the imagination, and hence contended that history should be understood as an art. French historians associated with the Annales School introduced quantitative history, using raw data to track the lives of typical individuals, and were prominent in the establishment of cultural history (cf. histoire des mentalités). Intellectual historians such as Herbert Butterfield, Ernst Nolte and George Mosse have argued for the significance of ideas in history. American historians, motivated by the civil rights era, focused on formerly overlooked ethnic, racial, and socio-economic groups. Another genre of social history to emerge in the post-WWII era was Alltagsgeschichte (History of Everyday Life). Scholars such as Martin Broszat, Ian Kershaw and Detlev Peukert sought to examine what everyday life was like for ordinary people in 20th-century Germany, especially in the Nazi period.
|
66 |
+
|
67 |
+
Marxist historians such as Eric Hobsbawm, E. P. Thompson, Rodney Hilton, Georges Lefebvre, Eugene Genovese, Isaac Deutscher, C. L. R. James, Timothy Mason, Herbert Aptheker, Arno J. Mayer and Christopher Hill have sought to validate Karl Marx's theories by analyzing history from a Marxist perspective. In response to the Marxist interpretation of history, historians such as François Furet, Richard Pipes, J. C. D. Clark, Roland Mousnier, Henry Ashby Turner and Robert Conquest have offered anti-Marxist interpretations of history. Feminist historians such as Joan Wallach Scott, Claudia Koonz, Natalie Zemon Davis, Sheila Rowbotham, Gisela Bock, Gerda Lerner, Elizabeth Fox-Genovese, and Lynn Hunt have argued for the importance of studying the experience of women in the past. In recent years, postmodernists have challenged the validity and need for the study of history on the basis that all history is based on the personal interpretation of sources. In his 1997 book In Defence of History, Richard J. Evans defended the worth of history. Another defence of history from post-modernist criticism was the Australian historian Keith Windschuttle's 1994 book, The Killing of History.
|
68 |
+
|
69 |
+
Today, most historians begin their research process in the archives, on either a physical or digital platform. They often propose an argument and use their research to support it. John H. Arnold proposed that history is an argument, which creates the possibility of creating change.[5] Digital information companies, such as Google, have sparked controversy over the role of internet censorship in information access.[41]
|
70 |
+
|
71 |
+
The Marxist theory of historical materialism theorises that society is fundamentally determined by the material conditions at any given time – in other words, the relationships which people have with each other in order to fulfill basic needs such as feeding, clothing and housing themselves and their families.[42] Overall, Marx and Engels claimed to have identified five successive stages of the development of these material conditions in Western Europe.[43] Marxist historiography was once orthodoxy in the Soviet Union, but since the collapse of communism there in 1991, Mikhail Krom says it has been reduced to the margins of scholarship.[44]
|
72 |
+
|
73 |
+
Many historians believe that the production of history is embedded with bias because events and known facts in history can be interpreted in a variety of ways. Constantin Fasolt suggested that history is linked to politics by the practice of silence itself.[45] “A second common view of the link between history and politics rests on the elementary observation that historians are often influenced by politics.”[45] According to Michel-Rolph Trouillot, the historical process is rooted in the archives, therefore silences, or parts of history that are forgotten, may be an intentional part of a narrative strategy that dictates how areas of history are remembered.[20] Historical omissions can occur in many ways and can have a profound effect on historical records. Information can also purposely be excluded or left out accidentally. Historians have coined multiple terms that describe the act of omitting historical information, including: “silencing,”[20] “selective memory,”[46] and erasures.[47] Gerda Lerner, a twentieth century historian who focused much of her work on historical omissions involving women and their accomplishments, explained the negative impact that these omissions had on minority groups.[46]
|
74 |
+
|
75 |
+
Environmental historian William Cronon proposed three ways to combat bias and ensure authentic and accurate narratives: narratives must not contradict known fact, they must make ecological sense (specifically for environmental history), and published work must be reviewed by the scholarly community and other historians to ensure accountability.[47]
|
76 |
+
|
77 |
+
These are approaches to history; not listed are histories of other fields, such as history of science, history of mathematics and history of philosophy.
|
78 |
+
|
79 |
+
Historical study often focuses on events and developments that occur in particular blocks of time. Historians give these periods of time names in order to allow "organising ideas and classificatory generalisations" to be used by historians.[48] The names given to a period can vary with geographical location, as can the dates of the beginning and end of a particular period. Centuries and decades are commonly used periods and the time they represent depends on the dating system used. Most periods are constructed retrospectively and so reflect value judgments made about the past. The way periods are constructed and the names given to them can affect the way they are viewed and studied.[49]
|
80 |
+
|
81 |
+
The field of history generally leaves prehistory to the archaeologists, who have entirely different sets of tools and theories. The usual method for periodisation of the distant prehistoric past in archaeology is to rely on changes in material culture and technology, such as the Stone Age, Bronze Age and Iron Age and their sub-divisions, also based on different styles of material remains. Here prehistory is divided into a series of "chapters" so that periods in history can unfold not only in a relative chronology but also in a narrative chronology.[50] This narrative content could be in the form of a functional-economic interpretation. There are periodisations, however, that do not have this narrative aspect, relying largely on relative chronology and thus devoid of any specific meaning.
|
82 |
+
|
83 |
+
Despite the development over recent decades of the ability, through radiocarbon dating and other scientific methods, to give actual dates for many sites or artefacts, these long-established schemes seem likely to remain in use. In many cases neighbouring cultures with writing have left some history of cultures without it, which may be used. Periodisation, however, is not viewed as a perfect framework, with one account explaining that "cultural changes do not conveniently start and stop (combinedly) at periodisation boundaries" and that different trajectories of change also need to be studied in their own right before they become intertwined with cultural phenomena.[51]
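As a brief aside on how such "actual dates" are obtained: radiocarbon dating rests on the exponential decay of carbon-14, and a minimal worked form of the age equation, assuming the conventional half-life of roughly 5,730 years, is

$$N(t) = N_0 e^{-\lambda t}, \qquad \lambda = \frac{\ln 2}{t_{1/2}}, \qquad t = \frac{t_{1/2}}{\ln 2}\,\ln\frac{N_0}{N(t)}$$

so, for example, a sample retaining one quarter of its original carbon-14 content dates to about $2 \times 5{,}730 \approx 11{,}460$ years before present.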
|
84 |
+
|
85 |
+
Particular geographical locations can form the basis of historical study, for example, continents, countries, and cities. Understanding why historic events took place is important. To do this, historians often turn to geography. According to Jules Michelet in his book Histoire de France (1833), "without geographical basis, the people, the makers of history, seem to be walking on air."[52] Weather patterns, the water supply, and the landscape of a place all affect the lives of the people who live there. For example, to explain why the ancient Egyptians developed a successful civilization, studying the geography of Egypt is essential. Egyptian civilization was built on the banks of the Nile River, which flooded each year, depositing soil on its banks. The rich soil could help farmers grow enough crops to feed the people in the cities. That meant not everyone had to farm, so some people could perform other jobs that helped develop the civilization. There is also the case of climate, which historians like Ellsworth Huntington and Ellen Semple cited as a crucial influence on the course of history and racial temperament.[53]
|
86 |
+
|
87 |
+
Military history concerns warfare, strategies, battles, weapons, and the psychology of combat. The "new military history" since the 1970s has been concerned with soldiers more than generals, with psychology more than tactics, and with the broader impact of warfare on society and culture.[54]
|
88 |
+
|
89 |
+
The history of religion has been a main theme for both secular and religious historians for centuries, and continues to be taught in seminaries and academe. Leading journals include Church History, The Catholic Historical Review, and History of Religions. Topics range widely from political, cultural, and artistic dimensions to theology and liturgy.[55] This subject studies religions from all regions and areas of the world where humans have lived.[56]
|
90 |
+
|
91 |
+
Social history, sometimes called the new social history, is the field that includes history of ordinary people and their strategies and institutions for coping with life.[57] In its "golden age" it was a major growth field in the 1960s and 1970s among scholars, and still is well represented in history departments. In two decades from 1975 to 1995, the proportion of professors of history in American universities identifying with social history rose from 31% to 41%, while the proportion of political historians fell from 40% to 30%.[58] In the history departments of British universities in 2007, of the 5723 faculty members, 1644 (29%) identified themselves with social history while political history came next with 1425 (25%).[59]
|
92 |
+
The "old" social history before the 1960s was a hodgepodge of topics without a central theme, and it often included political movements, like Populism, that were "social" in the sense of being outside the elite system. Social history was contrasted with political history, intellectual history and the history of great men. English historian G. M. Trevelyan saw it as the bridging point between economic and political history, reflecting that, "Without social history, economic history is barren and political history unintelligible."[60] While the field has often been viewed negatively as history with the politics left out, it has also been defended as "history with the people put back in."[61]
|
93 |
+
|
94 |
+
The chief subfields of social history include demographic history, ethnic history, labour history, rural history, urban history, and the history of the family.
|
95 |
+
|
96 |
+
Cultural history replaced social history as the dominant form in the 1980s and 1990s. It typically combines the approaches of anthropology and history to look at language, popular cultural traditions and cultural interpretations of historical experience. It examines the records and narrative descriptions of past knowledge, customs, and arts of a group of people. How peoples constructed their memory of the past is a major topic.
|
97 |
+
Cultural history includes the study of art in society as well as the study of images and human visual production (iconography).[62]
|
98 |
+
|
99 |
+
Diplomatic history focuses on the relationships between nations, primarily regarding diplomacy and the causes of wars. More recently it looks at the causes of peace and human rights. It typically presents the viewpoints of the foreign office, and long-term strategic values, as the driving force of continuity and change in history. This type of political history is the study of the conduct of international relations between states or across state boundaries over time. Historian Muriel Chamberlain notes that after the First World War, "diplomatic history replaced constitutional history as the flagship of historical investigation, at once the most important, most exact and most sophisticated of historical studies."[63] She adds that after 1945, the trend reversed, allowing social history to replace it.
|
100 |
+
|
101 |
+
Although economic history has been well established since the late 19th century, in recent years academic studies have shifted more and more toward economics departments and away from traditional history departments.[64] Business history deals with the history of individual business organizations, business methods, government regulation, labour relations, and impact on society. It also includes biographies of individual companies, executives, and entrepreneurs. It is related to economic history; business history is most often taught in business schools.[65]
|
102 |
+
|
103 |
+
Environmental history is a new field that emerged in the 1980s to look at the history of the environment, especially in the long run, and the impact of human activities upon it.[66] It is an offshoot of the environmental movement, which was kickstarted by Rachel Carson's Silent Spring in the 1960s.
|
104 |
+
|
105 |
+
World history is the study of major civilizations over the last 3000 years or so. World history is primarily a teaching field, rather than a research field. It gained popularity in the United States,[67] Japan[68] and other countries after the 1980s with the realization that students need a broader exposure to the world as globalization proceeds.
|
106 |
+
|
107 |
+
It has led to highly controversial interpretations by Oswald Spengler and Arnold J. Toynbee, among others.
|
108 |
+
|
109 |
+
The World History Association has published the Journal of World History quarterly since 1990.[69] The H-World discussion list[70] serves as a network of communication among practitioners of world history, with discussions among scholars, announcements, syllabi, bibliographies and book reviews.
|
110 |
+
|
111 |
+
A people's history is a type of historical work which attempts to account for historical events from the perspective of common people; it tells the history of the world as the story of mass movements and of the outsiders. Individuals or groups not included in other types of writing about history are the primary focus, including the disenfranchised, the oppressed, the poor, the nonconformists, and the otherwise forgotten people. The authors are typically on the left and have a socialist model in mind, as in the approach of the History Workshop movement in Britain in the 1960s.[71]
|
112 |
+
|
113 |
+
Intellectual history and the history of ideas emerged in the mid-20th century, with the focus on the intellectuals and their books on the one hand, and on the other the study of ideas as disembodied objects with a career of their own.[72][73]
|
114 |
+
|
115 |
+
Gender history is a subfield of history and gender studies, which looks at the past from the perspective of gender. The outgrowth of gender history from women's history stemmed from many non-feminist historians dismissing the importance of women in history. According to Joan W. Scott, “Gender is a constitutive element of social relationships based on perceived differences between the sexes, and gender is a primary way of signifying relations of power,”[74] meaning that gender historians study the social effects of perceived differences between the sexes and how all genders utilize allotted power in societal and political structures. Despite being a relatively new field, gender history has had a significant effect on the general study of history. Gender history traditionally differs from women's history in its inclusion of all aspects of gender such as masculinity and femininity, and today's gender history extends to include people who identify outside of that binary.
|
116 |
+
|
117 |
+
Public history describes the broad range of activities undertaken by people with some training in the discipline of history who are generally working outside of specialized academic settings. Public history practice has quite deep roots in the areas of historic preservation, archival science, oral history, museum curatorship, and other related fields. The term itself began to be used in the U.S. and Canada in the late 1970s, and the field has become increasingly professionalized since that time. Some of the most common settings for public history are museums, historic homes and historic sites, parks, battlefields, archives, film and television companies, and all levels of government.[75]
|
118 |
+
|
119 |
+
LGBT history deals with the first recorded instances of same-sex love and sexuality in ancient civilizations, and involves the history of lesbian, gay, bisexual and transgender (LGBT) peoples and cultures around the world. A common feature of LGBTQ+ history is the focus on oral history and individual perspectives, in addition to traditional documents within the archives.
|
120 |
+
|
121 |
+
Professional and amateur historians discover, collect, organize, and present information about past events. They discover this information through archaeological evidence, written primary sources, verbal stories or oral histories, and other archival material. In lists of historians, historians can be grouped by order of the historical period in which they were writing, which is not necessarily the same as the period in which they specialized. Chroniclers and annalists, though they are not historians in the true sense, are also frequently included.
|
122 |
+
|
123 |
+
Since the 20th century, Western historians have disavowed the aspiration to provide the "judgement of history."[76] The goals of historical judgements or interpretations are separate from those of legal judgements, which need to be formulated quickly after the events and be final.[77] A related issue to that of the judgement of history is that of collective memory.
|
124 |
+
|
125 |
+
Pseudohistory is a term applied to texts which purport to be historical in nature but which depart from standard historiographical conventions in a way which undermines their conclusions.
|
126 |
+
It is closely related to deceptive historical revisionism. Works which draw controversial conclusions from new, speculative, or disputed historical evidence, particularly in the fields of national, political, military, and religious affairs, are often rejected as pseudohistory.
|
127 |
+
|
128 |
+
A major intellectual battle took place in Britain in the early twentieth century regarding the place of history teaching in the universities. At Oxford and Cambridge, scholarship was downplayed. Professor Charles Harding Firth, Oxford's Regius Professor of history in 1904, ridiculed the system as best suited to producing superficial journalists. The Oxford tutors, who had more votes than the professors, fought back in defence of their system, saying that it successfully produced Britain's outstanding statesmen, administrators, prelates, and diplomats, and that this mission was as valuable as training scholars. The tutors dominated the debate until after the Second World War; their dominance forced aspiring young scholars to teach at outlying schools, such as Manchester University, where Thomas Frederick Tout was professionalizing the history undergraduate programme by introducing the study of original sources and requiring the writing of a thesis.[78][79]
|
129 |
+
|
130 |
+
In the United States, scholarship was concentrated at the major PhD-producing universities, while the large number of other colleges and universities focused on undergraduate teaching. A tendency in the 21st century was for the latter schools to increasingly demand scholarly productivity of their younger tenure-track faculty. Furthermore, universities have increasingly relied on inexpensive part-time adjuncts to do most of the classroom teaching.[80]
|
131 |
+
|
132 |
+
From the origins of national school systems in the 19th century, the teaching of history to promote national sentiment has been a high priority. In the United States after World War I, a strong movement emerged at the university level to teach courses in Western Civilization, so as to give students a common heritage with Europe. In the U.S. after 1980, attention increasingly moved toward teaching world history or requiring students to take courses in non-western cultures, to prepare students for life in a globalized economy.[81]
|
133 |
+
|
134 |
+
At the university level, historians debate the question of whether history belongs more to social science or to the humanities. Many view the field from both perspectives.
|
135 |
+
|
136 |
+
The teaching of history in French schools was influenced by the Nouvelle histoire as disseminated after the 1960s by Cahiers pédagogiques and Enseignement and other journals for teachers. Also influential was the Institut national de recherche et de documentation pédagogique (INRDP). Joseph Leif, the Inspector-general of teacher training, said pupils should learn about historians' approaches as well as facts and dates. Louis François, Dean of the History/Geography group in the Inspectorate of National Education, advised that teachers should provide historic documents and promote "active methods" which would give pupils "the immense happiness of discovery." Proponents said it was a reaction against the memorization of names and dates that characterized teaching and left the students bored. Traditionalists protested loudly that it was a postmodern innovation that threatened to leave the youth ignorant of French patriotism and national identity.[82]
|
137 |
+
|
138 |
+
In several countries history textbooks are tools to foster nationalism and patriotism, and give students the official narrative about national enemies.[83]
|
139 |
+
|
140 |
+
In many countries, history textbooks are sponsored by the national government and are written to put the national heritage in the most favourable light. For example, in Japan, mention of the Nanking Massacre has been removed from textbooks and the entire Second World War is given cursory treatment. Other countries have complained.[84] It was standard policy in communist countries to present only a rigid Marxist historiography.[85][86]
|
141 |
+
|
142 |
+
In the United States, textbooks published by the same company often differ in content from state to state.[87] An example of content that is represented differently in different regions of the country is the history of the Southern states, where slavery and the American Civil War are treated as controversial topics. McGraw-Hill Education, for example, was criticised for describing Africans brought to American plantations as "workers" instead of slaves in a textbook.[88]
|
143 |
+
|
144 |
+
Academic historians have often fought against the politicization of the textbooks, sometimes with success.[89][90]
|
145 |
+
|
146 |
+
In 21st-century Germany, the history curriculum is controlled by the 16 states, and is characterized not by superpatriotism but rather by an "almost pacifistic and deliberately unpatriotic undertone" and reflects "principles formulated by international organizations such as UNESCO or the Council of Europe, thus oriented towards human rights, democracy and peace." The result is that "German textbooks usually downplay national pride and ambitions and aim to develop an understanding of citizenship centered on democracy, progress, human rights, peace, tolerance and Europeanness."[91]
|
147 |
+
|
en/2587.html.txt
ADDED
@@ -0,0 +1,239 @@
1 |
+
|
2 |
+
|
3 |
+
Adolf Hitler (German: [ˈadɔlf ˈhɪtlɐ]; 20 April 1889 – 30 April 1945) was a German politician and leader of the Nazi Party (Nationalsozialistische Deutsche Arbeiterpartei; NSDAP). He rose to power as the chancellor of Germany in 1933 and then as Führer in 1934.[a] During his dictatorship from 1933 to 1945, he initiated World War II in Europe by invading Poland on 1 September 1939. He was closely involved in military operations throughout the war and was central to the perpetration of the Holocaust.
|
4 |
+
|
5 |
+
Hitler was born in Austria—then part of Austria-Hungary—and was raised near Linz. He moved to Germany in 1913 and was decorated during his service in the German Army in World War I. In 1919, he joined the German Workers' Party (DAP), the precursor of the NSDAP, and was appointed leader of the NSDAP in 1921. In 1923, he attempted to seize power in a failed coup in Munich and was imprisoned. In jail, he dictated the first volume of his autobiography and political manifesto Mein Kampf ("My Struggle"). After his release in 1924, Hitler gained popular support by attacking the Treaty of Versailles and promoting Pan-Germanism, anti-semitism and anti-communism with charismatic oratory and Nazi propaganda. He frequently denounced international capitalism and communism as part of a Jewish conspiracy.
|
6 |
+
|
7 |
+
By November 1932, the Nazi Party had the most seats in the German Reichstag but did not have a majority. As a result, no party was able to form a majority parliamentary coalition in support of a candidate for chancellor. Former chancellor Franz von Papen and other conservative leaders persuaded President Paul von Hindenburg to appoint Hitler as chancellor on 30 January 1933. Shortly after, the Reichstag passed the Enabling Act of 1933 which began the process of transforming the Weimar Republic into Nazi Germany, a one-party dictatorship based on the totalitarian and autocratic ideology of National Socialism. Hitler aimed to eliminate Jews from Germany and establish a New Order to counter what he saw as the injustice of the post-World War I international order dominated by Britain and France. His first six years in power resulted in rapid economic recovery from the Great Depression, the abrogation of restrictions imposed on Germany after World War I, and the annexation of territories inhabited by millions of ethnic Germans, which gave him significant popular support.
|
8 |
+
|
9 |
+
Hitler sought Lebensraum ("living space") for the German people in Eastern Europe, and his aggressive foreign policy is considered the primary cause of World War II in Europe. He directed large-scale rearmament and, on 1 September 1939, invaded Poland, resulting in Britain and France declaring war on Germany. In June 1941, Hitler ordered an invasion of the Soviet Union. By the end of 1941, German forces and the European Axis powers occupied most of Europe and North Africa. These gains were gradually reversed after 1941, and in 1945 the Allied armies defeated the German army. On 29 April 1945, he married his longtime lover Eva Braun. Less than two days later, the couple committed suicide to avoid capture by the Soviet Red Army. Their corpses were burned.
|
10 |
+
|
11 |
+
Under Hitler's leadership and racially motivated ideology, the Nazi regime was responsible for the genocide of about 6 million Jews and millions of other victims whom he and his followers deemed Untermenschen (subhumans) or socially undesirable. Hitler and the Nazi regime were also responsible for the killing of an estimated 19.3 million civilians and prisoners of war. In addition, 28.7 million soldiers and civilians died as a result of military action in the European theatre. The number of civilians killed during World War II was unprecedented in warfare, and the casualties constitute the deadliest conflict in history.
|
12 |
+
|
13 |
+
Hitler's actions and Nazi ideology are almost universally regarded as gravely immoral. According to Ian Kershaw, "Never in history has such ruination – physical and moral – been associated with the name of one man."[4]
|
14 |
+
|
15 |
+
Hitler's father Alois Hitler Sr. (1837–1903) was the illegitimate child of Maria Anna Schicklgruber.[5] The baptismal register did not show the name of his father, and Alois initially bore his mother's surname Schicklgruber. In 1842, Johann Georg Hiedler married Alois's mother Maria Anna. Alois was brought up in the family of Hiedler's brother, Johann Nepomuk Hiedler.[6] In 1876, Alois was legitimated and the baptismal record received an annotation made by a priest to register Johann Georg Hiedler as Alois's father (recorded as "Georg Hitler").[7][8] Alois then assumed the surname "Hitler",[8] also spelled Hiedler, Hüttler, or Huettler. The name is probably based on "one who lives in a hut" (German Hütte for "hut").[9]
|
16 |
+
|
17 |
+
Nazi official Hans Frank suggested that Alois' mother had been employed as a housekeeper by a Jewish family in Graz, and that the family's 19-year-old son Leopold Frankenberger had fathered Alois.[10] No Frankenberger was registered in Graz during that period, and no record has been produced of Leopold Frankenberger's existence,[11] so historians dismiss the claim that Alois' father was Jewish.[12][13]
|
18 |
+
|
19 |
+
Adolf Hitler was born on 20 April 1889 in Braunau am Inn, a town in Austria-Hungary (in present-day Austria), close to the border with the German Empire.[14] He was the fourth of six children born to Alois Hitler and his third wife, Klara Pölzl. Three of Hitler's siblings—Gustav, Ida, and Otto—died in infancy.[15] Also living in the household were Alois's children from his second marriage: Alois Jr. (born 1882) and Angela (born 1883).[16] When Hitler was three, the family moved to Passau, Germany.[17] There he acquired the distinctive lower Bavarian dialect, rather than Austrian German, which marked his speech throughout his life.[18][19][20] The family returned to Austria and settled in Leonding in 1894, and in June 1895 Alois retired to Hafeld, near Lambach, where he farmed and kept bees. Hitler attended Volksschule (a state-owned primary school) in nearby Fischlham.[21][22]
|
20 |
+
|
21 |
+
The move to Hafeld coincided with the onset of intense father-son conflicts caused by Hitler's refusal to conform to the strict discipline of his school.[23] His father beat him, although his mother tried to protect him.[24] Alois Hitler's farming efforts at Hafeld ended in failure, and in 1897 the family moved to Lambach. The eight-year-old Hitler took singing lessons, sang in the church choir, and even considered becoming a priest.[25] In 1898 the family returned permanently to Leonding. Hitler was deeply affected by the death of his younger brother Edmund, who died in 1900 from measles. Hitler changed from a confident, outgoing, conscientious student to a morose, detached boy who constantly fought with his father and teachers.[26]
Alois had made a successful career in the customs bureau, and wanted his son to follow in his footsteps.[27] Hitler later dramatised an episode from this period when his father took him to visit a customs office, depicting it as an event that gave rise to an unforgiving antagonism between father and son, who were both strong-willed.[28][29][30] Ignoring his son's desire to attend a classical high school and become an artist, Alois sent Hitler to the Realschule in Linz in September 1900.[b][31] Hitler rebelled against this decision, and in Mein Kampf states that he intentionally did poorly in school, hoping that once his father saw "what little progress I was making at the technical school he would let me devote myself to my dream".[32]
Like many Austrian Germans, Hitler began to develop German nationalist ideas from a young age.[33] He expressed loyalty only to Germany, despising the declining Habsburg Monarchy and its rule over an ethnically variegated empire.[34][35] Hitler and his friends used the greeting "Heil", and sang the "Deutschlandlied" instead of the Austrian Imperial anthem.[36]
After Alois's sudden death on 3 January 1903, Hitler's performance at school deteriorated and his mother allowed him to leave.[37] He enrolled at the Realschule in Steyr in September 1904, where his behaviour and performance improved.[38] In 1905, after passing a repeat of the final exam, Hitler left the school without any ambitions for further education or clear plans for a career.[39]
In 1907 Hitler left Linz to live and study fine art in Vienna, financed by orphan's benefits and support from his mother. He applied for admission to the Academy of Fine Arts Vienna but was rejected twice.[40][41] The director suggested Hitler should apply to the School of Architecture, but he lacked the necessary academic credentials because he had not finished secondary school.[42]
On 21 December 1907, his mother died of breast cancer at the age of 47, when Hitler was 18. In 1909 he ran out of money and was forced to live a bohemian life in homeless shelters and a men's dormitory.[43][44] He earned money as a casual labourer and by painting and selling watercolours of Vienna's sights.[40] During his time in Vienna, he pursued a growing passion for architecture and music, attending ten performances of Lohengrin, his favourite Wagner opera.[45]
It was in Vienna that Hitler first became exposed to racist rhetoric.[46] Populists such as mayor Karl Lueger exploited the climate of virulent anti-Semitism and occasionally espoused German nationalist notions for political effect. German nationalism had a particularly widespread following in the Mariahilf district, where Hitler lived.[47] Georg Ritter von Schönerer became a major influence on Hitler.[48] He also developed an admiration for Martin Luther.[49] Hitler read local newspapers such as Deutsches Volksblatt that fanned prejudice and played on Christian fears of being swamped by an influx of Eastern European Jews.[50] He read newspapers and pamphlets that published the thoughts of philosophers and theoreticians such as Houston Stewart Chamberlain, Charles Darwin, Friedrich Nietzsche, Gustave Le Bon and Arthur Schopenhauer.[51]
The origin and development of Hitler's anti-Semitism remains a matter of debate.[52] His friend, August Kubizek, claimed that Hitler was a "confirmed anti-Semite" before he left Linz.[53] However, historian Brigitte Hamann describes Kubizek's claim as "problematical".[54] While Hitler states in Mein Kampf that he first became an anti-Semite in Vienna,[55] Reinhold Hanisch, who helped him sell his paintings, disagrees. Hitler had dealings with Jews while living in Vienna.[56][57][58] Historian Richard J. Evans states that "historians now generally agree that his notorious, murderous anti-Semitism emerged well after Germany's defeat [in World War I], as a product of the paranoid "stab-in-the-back" explanation for the catastrophe".[59]
Hitler received the final part of his father's estate in May 1913 and moved to Munich, Germany.[60] When he was conscripted into the Austro-Hungarian Army,[61] he journeyed to Salzburg on 5 February 1914 for medical assessment. After he was deemed unfit for service, he returned to Munich.[62] Hitler later claimed that he did not wish to serve the Habsburg Empire because of the mixture of races in its army and his belief that the collapse of Austria-Hungary was imminent.[63]
In August 1914, at the outbreak of World War I, Hitler was living in Munich and voluntarily enlisted in the Bavarian Army.[64] According to a 1924 report by the Bavarian authorities, allowing Hitler to serve was almost certainly an administrative error, since as an Austrian citizen, he should have been returned to Austria.[64] Posted to the Bavarian Reserve Infantry Regiment 16 (1st Company of the List Regiment),[65][64] he served as a dispatch runner on the Western Front in France and Belgium,[66] spending nearly half his time at the regimental headquarters in Fournes-en-Weppes, well behind the front lines.[67][68] He was present at the First Battle of Ypres, the Battle of the Somme, the Battle of Arras, and the Battle of Passchendaele, and was wounded at the Somme.[69] He was decorated for bravery, receiving the Iron Cross, Second Class, in 1914.[69] On a recommendation by Lieutenant Hugo Gutmann, Hitler's Jewish superior, he received the Iron Cross, First Class on 4 August 1918, a decoration rarely awarded to one of Hitler's Gefreiter rank.[70][71] He received the Black Wound Badge on 18 May 1918.[72]
During his service at headquarters, Hitler pursued his artwork, drawing cartoons and instructions for an army newspaper. During the Battle of the Somme in October 1916, he was wounded in the left thigh when a shell exploded in the dispatch runners' dugout.[73] Hitler spent almost two months in hospital at Beelitz, returning to his regiment on 5 March 1917.[74] On 15 October 1918, he was temporarily blinded in a mustard gas attack and was hospitalised in Pasewalk.[75] While there, Hitler learned of Germany's defeat, and—by his own account—upon receiving this news, he suffered a second bout of blindness.[76]
Hitler described the war as "the greatest of all experiences", and was praised by his commanding officers for his bravery.[77] His wartime experience reinforced his German patriotism and he was shocked by Germany's capitulation in November 1918.[78] His bitterness over the collapse of the war effort began to shape his ideology.[79] Like other German nationalists, he believed the Dolchstoßlegende (stab-in-the-back myth), which claimed that the German army, "undefeated in the field", had been "stabbed in the back" on the home front by civilian leaders, Jews, Marxists, and those who signed the armistice that ended the fighting—later dubbed the "November criminals".[80]
The Treaty of Versailles stipulated that Germany must relinquish several of its territories and demilitarise the Rhineland. The treaty imposed economic sanctions and levied heavy reparations on the country. Many Germans saw the treaty as an unjust humiliation—they especially objected to Article 231, which they interpreted as declaring Germany responsible for the war.[81] The Versailles Treaty and the economic, social, and political conditions in Germany after the war were later exploited by Hitler for political gain.[82]
After World War I, Hitler returned to Munich.[83] Without formal education or career prospects, he remained in the army.[84] In July 1919 he was appointed Verbindungsmann (intelligence agent) of an Aufklärungskommando (reconnaissance unit) of the Reichswehr, assigned to influence other soldiers and to infiltrate the German Workers' Party (DAP). At a DAP meeting on 12 September 1919, Party Chairman Anton Drexler was impressed with Hitler's oratorical skills. Drexler gave Hitler a copy of his pamphlet My Political Awakening, which contained anti-Semitic, nationalist, anti-capitalist, and anti-Marxist ideas.[85] On the orders of his army superiors, Hitler applied to join the party,[86] and within a week was accepted as party member 555 (the party began counting membership at 500 to give the impression that it was a much larger party).[87][88]
Around this time, Hitler made his earliest known recorded statement about the Jews in a letter (now known as the Gemlich letter) dated 16 September 1919 to Adolf Gemlich about the Jewish question. In the letter, Hitler argues that the aim of the government "must unshakably be the removal of the Jews altogether".[89]
At the DAP, Hitler met Dietrich Eckart, one of the party's founders and a member of the occult Thule Society.[90] Eckart became Hitler's mentor, exchanging ideas with him and introducing him to a wide range of Munich society.[91] To increase its appeal, the DAP changed its name to the Nationalsozialistische Deutsche Arbeiterpartei (National Socialist German Workers Party; NSDAP).[92] Hitler designed the party's banner of a swastika in a white circle on a red background.[93]
Hitler was discharged from the army on 31 March 1920 and began working full-time for the NSDAP.[94] The party headquarters was in Munich, a hotbed of anti-government German nationalists determined to crush Marxism and undermine the Weimar Republic.[95] In February 1921—already highly effective at crowd manipulation—he spoke to a crowd of over 6,000.[96] To publicise the meeting, two truckloads of party supporters drove around Munich waving swastika flags and distributing leaflets. Hitler soon gained notoriety for his rowdy polemic speeches against the Treaty of Versailles, rival politicians, and especially against Marxists and Jews.[97]
In June 1921, while Hitler and Eckart were on a fundraising trip to Berlin, a mutiny broke out within the NSDAP in Munich. Members of its executive committee wanted to merge with the rival German Socialist Party (DSP).[98] Hitler returned to Munich on 11 July and angrily tendered his resignation. The committee members realised that the resignation of their leading public figure and speaker would mean the end of the party.[99] Hitler announced he would rejoin on the condition that he would replace Drexler as party chairman, and that the party headquarters would remain in Munich.[100] The committee agreed, and he rejoined the party on 26 July as member 3,680. Hitler continued to face some opposition within the NSDAP: his opponents in the leadership had Hermann Esser expelled from the party and printed 3,000 copies of a pamphlet attacking Hitler as a traitor to the party.[100][c] In the following days, Hitler spoke to several packed houses and defended himself and Esser, to thunderous applause. His strategy proved successful, and at a special party congress on 29 July, he was granted absolute powers as party chairman, replacing Drexler, by a vote of 533 to 1.[101]
Hitler's vitriolic beer hall speeches began attracting regular audiences. A demagogue,[102] he became adept at using populist themes, including the use of scapegoats, who were blamed for his listeners' economic hardships.[103][104][105] Hitler used personal magnetism and an understanding of crowd psychology to his advantage while engaged in public speaking.[106][107] Historians have noted the hypnotic effect of his rhetoric on large audiences, and of his eyes in small groups.[108] Algis Budrys recalled the crowd noise and behaviour when Hitler appeared in a 1936 parade; some in the audience writhed and rolled on the ground or experienced fecal incontinence.[109] Alfons Heck, a former member of the Hitler Youth, recalled a similar experience:
We erupted into a frenzy of nationalistic pride that bordered on hysteria. For minutes on end, we shouted at the top of our lungs, with tears streaming down our faces: Sieg Heil, Sieg Heil, Sieg Heil! From that moment on, I belonged to Adolf Hitler body and soul.[110]
Early followers included Rudolf Hess, former air force ace Hermann Göring, and army captain Ernst Röhm. Röhm became head of the Nazis' paramilitary organisation, the Sturmabteilung (SA, "Stormtroopers"), which protected meetings and attacked political opponents. A critical influence on Hitler's thinking during this period was the Aufbau Vereinigung,[111] a conspiratorial group of White Russian exiles and early National Socialists. The group, financed with funds channelled from wealthy industrialists, introduced Hitler to the idea of a Jewish conspiracy, linking international finance with Bolshevism.[112]
The programme of the NSDAP (known colloquially as the "Nazi Party") was laid out in its 25-point programme on 24 February 1920. This did not represent a coherent ideology, but was a conglomeration of received ideas which had currency in the völkisch Pan-Germanic movement, such as ultranationalism, opposition to the Treaty of Versailles, and distrust of capitalism, along with some socialist ideas. For Hitler, though, the most important aspect of it was its strong anti-Semitic stance. He also perceived the programme as primarily a basis for propaganda and for attracting people to the party.[113]
In 1923, Hitler enlisted the help of World War I General Erich Ludendorff for an attempted coup known as the "Beer Hall Putsch". The NSDAP used Italian Fascism as a model for their appearance and policies. Hitler wanted to emulate Benito Mussolini's "March on Rome" of 1922 by staging his own coup in Bavaria, to be followed by a challenge to the government in Berlin. Hitler and Ludendorff sought the support of Staatskommissar (state commissioner) Gustav Ritter von Kahr, Bavaria's de facto ruler. However, Kahr, along with Police Chief Hans Ritter von Seisser and Reichswehr General Otto von Lossow, wanted to install a nationalist dictatorship without Hitler.[114]
On 8 November 1923, Hitler and the SA stormed a public meeting of 3,000 people organised by Kahr in the Bürgerbräukeller, a beer hall in Munich. Interrupting Kahr's speech, he announced that the national revolution had begun and declared the formation of a new government with Ludendorff.[115] Retiring to a back room, Hitler, with handgun drawn, demanded and got the support of Kahr, Seisser, and Lossow.[115] Hitler's forces initially succeeded in occupying the local Reichswehr and police headquarters, but Kahr and his cohorts quickly withdrew their support. Neither the army, nor the state police, joined forces with Hitler.[116] The next day, Hitler and his followers marched from the beer hall to the Bavarian War Ministry to overthrow the Bavarian government, but police dispersed them.[117] Sixteen NSDAP members and four police officers were killed in the failed coup.[118]
Hitler fled to the home of Ernst Hanfstaengl and by some accounts contemplated suicide.[119] He was depressed but calm when arrested on 11 November 1923 for high treason.[120] His trial before the special People's Court in Munich began in February 1924,[121] and Alfred Rosenberg became temporary leader of the NSDAP. On 1 April, Hitler was sentenced to five years' imprisonment at Landsberg Prison.[122] There, he received friendly treatment from the guards, and was allowed mail from supporters and regular visits by party comrades. Pardoned by the Bavarian Supreme Court, he was released from jail on 20 December 1924, against the state prosecutor's objections.[123] Including time on remand, Hitler served just over one year in prison.[124]
While at Landsberg, Hitler dictated most of the first volume of Mein Kampf (My Struggle; originally entitled Four and a Half Years of Struggle against Lies, Stupidity, and Cowardice) at first to his chauffeur, Emil Maurice, and then to his deputy, Rudolf Hess.[124][125] The book, dedicated to Thule Society member Dietrich Eckart, was an autobiography and exposition of his ideology. The book laid out Hitler's plans for transforming German society into one based on race. Throughout the book, Jews are equated with "germs" and presented as the "international poisoners" of society. According to Hitler's ideology, the only solution was their extermination. While Hitler did not describe exactly how this was to be accomplished, his "inherent genocidal thrust is undeniable," according to Ian Kershaw.[126]
Published in two volumes in 1925 and 1926, Mein Kampf sold 228,000 copies between 1925 and 1932. One million copies were sold in 1933, Hitler's first year in office.[127]
Shortly before Hitler was eligible for parole, the Bavarian government attempted to have him deported to Austria.[128] The Austrian federal chancellor rejected the request on the specious grounds that his service in the German Army made his Austrian citizenship void.[129] In response, Hitler formally renounced his Austrian citizenship on 7 April 1925.[129]
At the time of Hitler's release from prison, politics in Germany had become less combative and the economy had improved, limiting Hitler's opportunities for political agitation. As a result of the failed Beer Hall Putsch, the Nazi Party and its affiliated organisations were banned in Bavaria. In a meeting with the Prime Minister of Bavaria Heinrich Held on 4 January 1925, Hitler agreed to respect the state's authority and promised that he would seek political power only through the democratic process. The meeting paved the way for the ban on the NSDAP to be lifted on 16 February.[130] However, after an inflammatory speech he gave on 27 February, Hitler was barred from public speaking by the Bavarian authorities, a ban that remained in place until 1927.[131][132] To advance his political ambitions in spite of the ban, Hitler appointed Gregor Strasser, Otto Strasser and Joseph Goebbels to organise and enlarge the NSDAP in northern Germany. Gregor Strasser steered a more independent political course, emphasising the socialist elements of the party's programme.[133]
The stock market in the United States crashed on 24 October 1929. The impact in Germany was dire: millions were thrown out of work and several major banks collapsed. Hitler and the NSDAP prepared to take advantage of the emergency to gain support for their party. They promised to repudiate the Versailles Treaty, strengthen the economy, and provide jobs.[134]
The Great Depression provided a political opportunity for Hitler. Germans were ambivalent about the parliamentary republic, which faced challenges from right- and left-wing extremists. The moderate political parties were increasingly unable to stem the tide of extremism, and the German referendum of 1929 helped to elevate Nazi ideology.[136] The elections of September 1930 resulted in the break-up of a grand coalition and its replacement with a minority cabinet. Its leader, chancellor Heinrich Brüning of the Centre Party, governed through emergency decrees from President Paul von Hindenburg. Governance by decree became the new norm and paved the way for authoritarian forms of government.[137] The NSDAP rose from obscurity to win 18.3 per cent of the vote and 107 parliamentary seats in the 1930 election, becoming the second-largest party in parliament.[138]
Hitler made a prominent appearance at the trial of two Reichswehr officers, Lieutenants Richard Scheringer and Hanns Ludin, in late 1930. Both were charged with membership in the NSDAP, at that time illegal for Reichswehr personnel.[139] The prosecution argued that the NSDAP was an extremist party, prompting defence lawyer Hans Frank to call on Hitler to testify.[140] On 25 September 1930, Hitler testified that his party would pursue political power solely through democratic elections,[141] which won him many supporters in the officer corps.[142]
Brüning's austerity measures brought little economic improvement and were extremely unpopular.[143] Hitler exploited this by targeting his political messages specifically at people who had been affected by the inflation of the 1920s and the Depression, such as farmers, war veterans, and the middle class.[144]
Although Hitler had terminated his Austrian citizenship in 1925, he did not acquire German citizenship for almost seven years. This meant that he was stateless, legally unable to run for public office, and still faced the risk of deportation.[145] On 25 February 1932, the interior minister of Brunswick, Dietrich Klagges, who was a member of the NSDAP, appointed Hitler as administrator for the state's delegation to the Reichsrat in Berlin, making Hitler a citizen of Brunswick,[146] and thus of Germany.[147]
Hitler ran against Hindenburg in the 1932 presidential elections. A speech to the Industry Club in Düsseldorf on 27 January 1932 won him support from many of Germany's most powerful industrialists.[148] Hindenburg had support from various nationalist, monarchist, Catholic, and republican parties, and some Social Democrats. Hitler used the campaign slogan "Hitler über Deutschland" ("Hitler over Germany"), a reference to his political ambitions and his campaigning by aircraft.[149] He was one of the first politicians to use aircraft travel for political purposes, and used it effectively.[150][151] Hitler came in second in both rounds of the election, garnering more than 35 per cent of the vote in the final election. Although he lost to Hindenburg, this election established Hitler as a strong force in German politics.[152]
The absence of an effective government prompted two influential politicians, Franz von Papen and Alfred Hugenberg, along with several other industrialists and businessmen, to write a letter to Hindenburg. The signers urged Hindenburg to appoint Hitler as leader of a government "independent from parliamentary parties", which could turn into a movement that would "enrapture millions of people".[153][154]
Hindenburg reluctantly agreed to appoint Hitler as chancellor after two further parliamentary elections—in July and November 1932—had not resulted in the formation of a majority government. Hitler headed a short-lived coalition government formed by the NSDAP (which had the most seats in the Reichstag) and Hugenberg's party, the German National People's Party (DNVP). On 30 January 1933, the new cabinet was sworn in during a brief ceremony in Hindenburg's office. The NSDAP gained three posts: Hitler was named chancellor, Wilhelm Frick Minister of the Interior, and Hermann Göring Minister of the Interior for Prussia.[155] Hitler had insisted on the ministerial positions as a way to gain control over the police in much of Germany.[156]
As chancellor, Hitler worked against attempts by the NSDAP's opponents to build a majority government. Because of the political stalemate, he asked Hindenburg to again dissolve the Reichstag, and elections were scheduled for early March. On 27 February 1933, the Reichstag building was set on fire. Göring blamed a communist plot, because Dutch communist Marinus van der Lubbe was found in incriminating circumstances inside the burning building.[157] According to Kershaw, the consensus of nearly all historians is that van der Lubbe actually set the fire.[158] Others, including William L. Shirer and Alan Bullock, are of the opinion that the NSDAP itself was responsible.[159][160] At Hitler's urging, Hindenburg responded with the Reichstag Fire Decree of 28 February, which suspended basic rights and allowed detention without trial. The decree was permitted under Article 48 of the Weimar Constitution, which gave the president the power to take emergency measures to protect public safety and order.[161] Activities of the German Communist Party (KPD) were suppressed, and some 4,000 KPD members were arrested.[162]
In addition to political campaigning, the NSDAP engaged in paramilitary violence and the spread of anti-communist propaganda in the days preceding the election. On election day, 6 March 1933, the NSDAP's share of the vote increased to 43.9 per cent, and the party acquired the largest number of seats in parliament. Hitler's party failed to secure an absolute majority, necessitating another coalition with the DNVP.[163]
On 21 March 1933, the new Reichstag was constituted with an opening ceremony at the Garrison Church in Potsdam. This "Day of Potsdam" was held to demonstrate unity between the Nazi movement and the old Prussian elite and military. Hitler appeared in a morning coat and humbly greeted Hindenburg.[164][165]
To achieve full political control despite not having an absolute majority in parliament, Hitler's government brought the Ermächtigungsgesetz (Enabling Act) to a vote in the newly elected Reichstag. The Act—officially titled the Gesetz zur Behebung der Not von Volk und Reich ("Law to Remedy the Distress of People and Reich")—gave Hitler's cabinet the power to enact laws without the consent of the Reichstag for four years. These laws could (with certain exceptions) deviate from the constitution.[166] Since it would affect the constitution, the Enabling Act required a two-thirds majority to pass. Leaving nothing to chance, the Nazis used the provisions of the Reichstag Fire Decree to arrest all 81 Communist deputies (in spite of their virulent campaign against the party, the Nazis had allowed the KPD to contest the election)[167] and prevent several Social Democrats from attending.[168]
On 23 March 1933, the Reichstag assembled at the Kroll Opera House under turbulent circumstances. Ranks of SA men served as guards inside the building, while large groups outside opposing the proposed legislation shouted slogans and threats towards the arriving members of parliament.[169] The position of the Centre Party, the third largest party in the Reichstag, was decisive. After Hitler verbally promised party leader Ludwig Kaas that Hindenburg would retain his power of veto, Kaas announced the Centre Party would support the Enabling Act. The Act passed by a vote of 441–84, with all parties except the Social Democrats voting in favour. The Enabling Act, along with the Reichstag Fire Decree, transformed Hitler's government into a de facto legal dictatorship.[170]
At the risk of appearing to talk nonsense I tell you that the National Socialist movement will go on for 1,000 years! ... Don't forget how people laughed at me 15 years ago when I declared that one day I would govern Germany. They laugh now, just as foolishly, when I declare that I shall remain in power![171]
Having achieved full control over the legislative and executive branches of government, Hitler and his allies began to suppress the remaining opposition. The Social Democratic Party was banned and its assets seized.[172] While many trade union delegates were in Berlin for May Day activities, SA stormtroopers demolished union offices around the country. On 2 May 1933 all trade unions were forced to dissolve and their leaders were arrested. Some were sent to concentration camps.[173] The German Labour Front was formed as an umbrella organisation to represent all workers, administrators, and company owners, thus reflecting the concept of national socialism in the spirit of Hitler's Volksgemeinschaft ("people's community").[174]
By the end of June, the other parties had been intimidated into disbanding. This included the Nazis' nominal coalition partner, the DNVP; with the SA's help, Hitler forced its leader, Hugenberg, to resign on 29 June. On 14 July 1933, the NSDAP was declared the only legal political party in Germany.[174][172] The demands of the SA for more political and military power caused anxiety among military, industrial, and political leaders. In response, Hitler purged the entire SA leadership in the Night of the Long Knives, which took place from 30 June to 2 July 1934.[175] Hitler targeted Ernst Röhm and other SA leaders who, along with a number of Hitler's political adversaries (such as Gregor Strasser and former chancellor Kurt von Schleicher), were rounded up, arrested, and shot.[176] While the international community and some Germans were shocked by the murders, many in Germany believed Hitler was restoring order.[177]
On 2 August 1934, Hindenburg died. The previous day, the cabinet had enacted the "Law Concerning the Highest State Office of the Reich".[3] This law stated that upon Hindenburg's death, the office of president would be abolished and its powers merged with those of the chancellor. Hitler thus became head of state as well as head of government, and was formally named as Führer und Reichskanzler (leader and chancellor),[2] although Reichskanzler was eventually quietly dropped.[178] With this action, Hitler eliminated the last legal remedy by which he could be removed from office.[179]
As head of state, Hitler became commander-in-chief of the armed forces. Immediately after Hindenburg's death, at the instigation of the leadership of the Reichswehr, the traditional loyalty oath of soldiers was altered to affirm loyalty to Hitler personally, by name, rather than to the office of commander-in-chief (which was later renamed to supreme commander) or the state.[180] On 19 August, the merger of the presidency with the chancellorship was approved by 88 per cent of the electorate voting in a plebiscite.[181]
In early 1938, Hitler used blackmail to consolidate his hold over the military by instigating the Blomberg–Fritsch affair. Hitler forced his War Minister, Field Marshal Werner von Blomberg, to resign by using a police dossier that showed that Blomberg's new wife had a record for prostitution.[182][183] Army commander Colonel-General Werner von Fritsch was removed after the Schutzstaffel (SS) produced allegations that he had engaged in a homosexual relationship.[184] Both men had fallen into disfavour because they objected to Hitler's demand to make the Wehrmacht ready for war as early as 1938.[185] Hitler assumed Blomberg's title of Commander-in-Chief, thus taking personal command of the armed forces. He replaced the Ministry of War with the Oberkommando der Wehrmacht (OKW), headed by General Wilhelm Keitel. On the same day, sixteen generals were stripped of their commands and 44 more were transferred; all were suspected of not being sufficiently pro-Nazi.[186] By early February 1938, twelve more generals had been removed.[187]
Hitler took care to give his dictatorship the appearance of legality. Many of his decrees were explicitly based on the Reichstag Fire Decree and hence on Article 48 of the Weimar Constitution. The Reichstag renewed the Enabling Act twice, each time for a four-year period.[188] While elections to the Reichstag were still held (in 1933, 1936, and 1938), voters were presented with a single list of Nazis and pro-Nazi "guests" which carried well over 90 per cent of the vote.[189] These elections were held in far-from-secret conditions; the Nazis threatened severe reprisals against anyone who did not vote or dared to vote no.[190]
In August 1934, Hitler appointed Reichsbank President Hjalmar Schacht as Minister of Economics, and in the following year, as Plenipotentiary for War Economy in charge of preparing the economy for war.[191] Reconstruction and rearmament were financed through Mefo bills, printing money, and seizing the assets of people arrested as enemies of the State, including Jews.[192] Unemployment fell from six million in 1932 to one million in 1936.[193] Hitler oversaw one of the largest infrastructure improvement campaigns in German history, leading to the construction of dams, autobahns, railroads, and other civil works. Wages were slightly lower in the mid to late 1930s compared with wages during the Weimar Republic, while the cost of living increased by 25 per cent.[194] The average work week increased during the shift to a war economy; by 1939, the average German was working between 47 and 50 hours a week.[195]
Hitler's government sponsored architecture on an immense scale. Albert Speer, instrumental in implementing Hitler's classicist reinterpretation of German culture, was placed in charge of the proposed architectural renovations of Berlin.[196] Despite a threatened multi-nation boycott, Germany hosted the 1936 Olympic Games. Hitler officiated at the opening ceremonies and attended events at both the Winter Games in Garmisch-Partenkirchen and the Summer Games in Berlin.[197]
In a meeting with German military leaders on 3 February 1933, Hitler spoke of "conquest for Lebensraum in the East and its ruthless Germanisation" as his ultimate foreign policy objectives.[198] In March, Prince Bernhard Wilhelm von Bülow, secretary at the Auswärtiges Amt (Foreign Office), issued a statement of major foreign policy aims: Anschluss with Austria, the restoration of Germany's national borders of 1914, rejection of military restrictions under the Treaty of Versailles, the return of the former German colonies in Africa, and a German zone of influence in Eastern Europe. Hitler found Bülow's goals to be too modest.[199] In speeches during this period, he stressed the peaceful goals of his policies and a willingness to work within international agreements.[200] At the first meeting of his cabinet in 1933, Hitler prioritised military spending over unemployment relief.[201]
Germany withdrew from the League of Nations and the World Disarmament Conference in October 1933.[202] In January 1935, over 90 per cent of the people of the Saarland, then under League of Nations administration, voted to unite with Germany.[203] That March, Hitler announced an expansion of the Wehrmacht to 600,000 members—six times the number permitted by the Versailles Treaty—including development of an air force (Luftwaffe) and an increase in the size of the navy (Kriegsmarine). Britain, France, Italy, and the League of Nations condemned these violations of the Treaty, but did nothing to stop them.[204][205] The Anglo-German Naval Agreement (AGNA) of 18 June allowed German tonnage to increase to 35 per cent of that of the British navy. Hitler called the signing of the AGNA "the happiest day of his life", believing that the agreement marked the beginning of the Anglo-German alliance he had predicted in Mein Kampf.[206] France and Italy were not consulted before the signing, directly undermining the League of Nations and setting the Treaty of Versailles on the path towards irrelevance.[207]
Germany reoccupied the demilitarised zone in the Rhineland in March 1936, in violation of the Versailles Treaty. Hitler also sent troops to Spain to support General Franco during the Spanish Civil War after receiving an appeal for help in July 1936. At the same time, Hitler continued his efforts to create an Anglo-German alliance.[208] In August 1936, in response to a growing economic crisis caused by his rearmament efforts, Hitler ordered Göring to implement a Four Year Plan to prepare Germany for war within the next four years.[209] The plan envisaged an all-out struggle between "Judeo-Bolshevism" and German national socialism, which in Hitler's view required a committed effort of rearmament regardless of the economic costs.[210]
Count Galeazzo Ciano, foreign minister of Mussolini's government, declared an axis between Germany and Italy, and on 25 November, Germany signed the Anti-Comintern Pact with Japan. Britain, China, Italy, and Poland were also invited to join the Anti-Comintern Pact, but only Italy signed in 1937. Hitler abandoned his plan of an Anglo-German alliance, blaming "inadequate" British leadership.[211] At a meeting in the Reich Chancellery with his foreign ministers and military chiefs that November, Hitler restated his intention of acquiring Lebensraum for the German people. He ordered preparations for war in the East, to begin as early as 1938 and no later than 1943. In the event of his death, the conference minutes, recorded as the Hossbach Memorandum, were to be regarded as his "political testament".[212] He felt that a severe decline in living standards in Germany as a result of the economic crisis could only be stopped by military aggression aimed at seizing Austria and Czechoslovakia.[213][214] Hitler urged quick action before Britain and France gained a permanent lead in the arms race.[213] In early 1938, in the wake of the Blomberg–Fritsch Affair, Hitler asserted control of the military-foreign policy apparatus, dismissing Neurath as foreign minister and appointing himself as War Minister.[209] From early 1938 onwards, Hitler was carrying out a foreign policy ultimately aimed at war.[215]
In February 1938, on the advice of his newly appointed foreign minister, the strongly pro-Japanese Joachim von Ribbentrop, Hitler ended the Sino-German alliance with the Republic of China to instead enter into an alliance with the more modern and powerful Empire of Japan. Hitler announced German recognition of Manchukuo, the Japanese-occupied state in Manchuria, and renounced Germany's claims to its former Pacific colonies, then held by Japan.[216] Hitler ordered an end to arms shipments to China and recalled all German officers working with the Chinese Army.[216] In retaliation, Chinese General Chiang Kai-shek cancelled all Sino-German economic agreements, depriving the Germans of many Chinese raw materials.[217]
On 12 March 1938, Hitler announced the unification of Austria with Nazi Germany in the Anschluss.[218][219] Hitler then turned his attention to the ethnic German population of the Sudetenland region of Czechoslovakia.[220] On 28–29 March 1938, Hitler held a series of secret meetings in Berlin with Konrad Henlein of the Sudeten German Party, the largest of the ethnic German parties of the Sudetenland. The men agreed that Henlein would demand increased autonomy for Sudeten Germans from the Czechoslovakian government, thus providing a pretext for German military action against Czechoslovakia. In April 1938 Henlein told the foreign minister of Hungary that "whatever the Czech government might offer, he would always raise still higher demands ... he wanted to sabotage an understanding by any means because this was the only method to blow up Czechoslovakia quickly".[221] In private, Hitler considered the Sudeten issue unimportant; his real intention was a war of conquest against Czechoslovakia.[222]
In April Hitler ordered the OKW to prepare for Fall Grün (Case Green), the code name for an invasion of Czechoslovakia.[223] As a result of intense French and British diplomatic pressure, on 5 September Czechoslovakian President Edvard Beneš unveiled the "Fourth Plan" for constitutional reorganisation of his country, which agreed to most of Henlein's demands for Sudeten autonomy.[224] Henlein's party responded to Beneš' offer by instigating a series of violent clashes with the Czechoslovakian police that led to the declaration of martial law in certain Sudeten districts.[225][226]
Germany was dependent on imported oil; a confrontation with Britain over the Czechoslovakian dispute could curtail Germany's oil supplies. This forced Hitler to call off Fall Grün, originally planned for 1 October 1938.[227] On 29 September Hitler, Neville Chamberlain, Édouard Daladier, and Mussolini attended a one-day conference in Munich that led to the Munich Agreement, which handed over the Sudetenland districts to Germany.[228][229]
Chamberlain was satisfied with the Munich conference, calling the outcome "peace for our time", while Hitler was angered about the missed opportunity for war in 1938;[230][231] he expressed his disappointment in a speech on 9 October in Saarbrücken.[232] In Hitler's view, the British-brokered peace, although favourable to the ostensible German demands, was a diplomatic defeat which spurred his intent of limiting British power to pave the way for the eastern expansion of Germany.[233][234] As a result of the summit, Hitler was selected Time magazine's Man of the Year for 1938.[235]
In late 1938 and early 1939, the continuing economic crisis caused by rearmament forced Hitler to make major defence cuts.[236] In his "Export or die" speech of 30 January 1939, he called for an economic offensive to increase German foreign exchange holdings to pay for raw materials such as high-grade iron needed for military weapons.[236]
On 14 March 1939, under threat from Hungary, Slovakia declared independence and received protection from Germany.[237] The next day, in violation of the Munich accord and possibly as a result of the deepening economic crisis requiring additional assets,[238] Hitler ordered the Wehrmacht to invade the Czech rump state, and from Prague Castle he proclaimed the territory a German protectorate.[239]
In private discussions in 1939, Hitler declared Britain the main enemy to be defeated and that Poland's obliteration was a necessary prelude for that goal.[240] The eastern flank would be secured and land would be added to Germany's Lebensraum.[241] Offended by the British "guarantee" on 31 March 1939 of Polish independence, he said, "I shall brew them a devil's drink".[242] In a speech in Wilhelmshaven for the launch of the battleship Tirpitz on 1 April, he threatened to denounce the Anglo-German Naval Agreement if the British continued to guarantee Polish independence, which he perceived as an "encirclement" policy.[242] Poland was to either become a German satellite state or it would be neutralised in order to secure the Reich's eastern flank and prevent a possible British blockade.[243] Hitler initially favoured the idea of a satellite state, but upon its rejection by the Polish government, he decided to invade and made this the main foreign policy goal of 1939.[244] On 3 April, Hitler ordered the military to prepare for Fall Weiss ("Case White"), the plan for invading Poland on 25 August.[244] In a Reichstag speech on 28 April, he renounced both the Anglo-German Naval Agreement and the German–Polish Non-Aggression Pact.[245] Historians such as William Carr, Gerhard Weinberg, and Ian Kershaw have argued that one reason for Hitler's rush to war was his fear of an early death. He had repeatedly claimed that he must lead Germany into war before he got too old, as his successors might lack his strength of will.[246][247][248]
Hitler was concerned that a military attack against Poland could result in a premature war with Britain.[243][249] Hitler's foreign minister and former Ambassador to London, Joachim von Ribbentrop, assured him that neither Britain nor France would honour their commitments to Poland.[250][251] Accordingly, on 22 August 1939 Hitler ordered a military mobilisation against Poland.[252]
This plan required tacit Soviet support,[253] and the non-aggression pact (the Molotov–Ribbentrop Pact) between Germany and the Soviet Union, led by Joseph Stalin, included a secret agreement to partition Poland between the two countries.[254] Contrary to Ribbentrop's prediction that Britain would sever Anglo-Polish ties, Britain and Poland signed the Anglo-Polish alliance on 25 August 1939. This, along with news from Italy that Mussolini would not honour the Pact of Steel, prompted Hitler to postpone the attack on Poland from 25 August to 1 September.[255] Hitler unsuccessfully tried to manoeuvre the British into neutrality by offering them a non-aggression guarantee on 25 August; he then instructed Ribbentrop to present a last-minute peace plan with an impossibly short time limit in an effort to blame the imminent war on British and Polish inaction.[256][257]
On 1 September 1939, Germany invaded western Poland under the pretext of having been denied claims to the Free City of Danzig and the right to extraterritorial roads across the Polish Corridor, which Germany had ceded under the Versailles Treaty.[258] In response, Britain and France declared war on Germany on 3 September, surprising Hitler and prompting him to angrily ask Ribbentrop, "Now what?"[259] France and Britain did not act on their declarations immediately, and on 17 September, Soviet forces invaded eastern Poland.[260]
The fall of Poland was followed by what contemporary journalists dubbed the "Phoney War" or Sitzkrieg ("sitting war"). Hitler instructed the two newly appointed Gauleiters of north-western Poland, Albert Forster of Reichsgau Danzig-West Prussia and Arthur Greiser of Reichsgau Wartheland, to Germanise their areas, with "no questions asked" about how this was accomplished.[261] In Forster's area, ethnic Poles merely had to sign forms stating that they had German blood.[262] In contrast, Greiser agreed with Himmler and carried out an ethnic cleansing campaign towards Poles. Greiser soon complained that Forster was allowing thousands of Poles to be accepted as "racial" Germans and thus endangered German "racial purity".[261] Hitler refrained from getting involved. This inaction has been advanced as an example of the theory of "working towards the Führer", in which Hitler issued vague instructions and expected his subordinates to work out policies on their own.[261][263]
Another dispute pitted one side, represented by Heinrich Himmler and Greiser, who championed ethnic cleansing in Poland, against another, represented by Göring and Hans Frank (governor-general of occupied Poland), who called for turning Poland into the "granary" of the Reich.[264] On 12 February 1940, the dispute was initially settled in favour of the Göring–Frank view, which ended the economically disruptive mass expulsions.[264] On 15 May 1940, Himmler issued a memo entitled "Some Thoughts on the Treatment of Alien Population in the East", calling for the expulsion of the entire Jewish population of Europe into Africa and the reduction of the Polish population to a "leaderless class of labourers".[264] Hitler called Himmler's memo "good and correct",[264] and, ignoring Göring and Frank, implemented the Himmler–Greiser policy in Poland.
On 9 April, German forces invaded Denmark and Norway. On the same day Hitler proclaimed the birth of the Greater Germanic Reich, his vision of a united empire of Germanic nations of Europe in which the Dutch, Flemish, and Scandinavians were joined into a "racially pure" polity under German leadership.[265] In May 1940, Germany attacked France, and conquered Luxembourg, the Netherlands, and Belgium. These victories prompted Mussolini to have Italy join forces with Hitler on 10 June. France and Germany signed an armistice on 22 June.[266] Kershaw notes that Hitler's popularity within Germany—and German support for the war—reached its peak when he returned to Berlin on 6 July from his tour of Paris.[267] Following the unexpected swift victory, Hitler promoted twelve generals to the rank of field marshal during the 1940 Field Marshal Ceremony.[268][269]
Britain, whose troops had been forced to evacuate France by sea from Dunkirk,[270] continued to fight, alongside the British dominions, in the Battle of the Atlantic. Hitler made peace overtures to the new British leader, Winston Churchill, and upon their rejection he ordered a series of aerial attacks on Royal Air Force airbases and radar stations in south-east England. On 7 September the systematic nightly bombing of London began. The German Luftwaffe failed to defeat the Royal Air Force in what became known as the Battle of Britain.[271] By the end of September, Hitler realised that air superiority for the invasion of Britain (in Operation Sea Lion) could not be achieved, and ordered the operation postponed. The nightly air raids on British cities, including London, Plymouth, and Coventry, intensified and continued for months.[272]
On 27 September 1940, the Tripartite Pact was signed in Berlin by Saburō Kurusu of Imperial Japan, Hitler, and Italian foreign minister Ciano,[273] and later expanded to include Hungary, Romania, and Bulgaria, thus yielding the Axis powers. Hitler's attempt to integrate the Soviet Union into the anti-British bloc failed after inconclusive talks between Hitler and Molotov in Berlin in November, and he ordered preparations for the invasion of the Soviet Union.[274]
In early 1941, German forces were deployed to North Africa, the Balkans, and the Middle East. In February, German forces arrived in Libya to bolster the Italian presence. In April, Hitler launched the invasion of Yugoslavia, quickly followed by the invasion of Greece.[275] In May, German forces were sent to support Iraqi rebel forces fighting against the British and to invade Crete.[276]
On 22 June 1941, contravening the Molotov–Ribbentrop Pact of 1939, over three million Axis troops attacked the Soviet Union.[277] This offensive (codenamed Operation Barbarossa) was intended to destroy the Soviet Union and seize its natural resources for subsequent aggression against the Western powers.[278][279] The invasion conquered a huge area, including the Baltic republics, Belarus, and West Ukraine. By early August, Axis troops had advanced 500 km (310 mi) and won the Battle of Smolensk. Hitler ordered Army Group Centre to temporarily halt its advance to Moscow and divert its Panzer groups to aid in the encirclement of Leningrad and Kiev.[280] His generals disagreed with this change, having advanced within 400 km (250 mi) of Moscow, and his decision caused a crisis among the military leadership.[281][282] The pause provided the Red Army with an opportunity to mobilise fresh reserves; historian Russel Stolfi considers it to be one of the major factors that caused the failure of the Moscow offensive, which was resumed in October 1941 and ended disastrously in December.[280] During this crisis, Hitler appointed himself as head of the Oberkommando des Heeres.[283]
On 7 December 1941, Japan attacked the American fleet based at Pearl Harbor, Hawaii. Four days later, Hitler declared war against the United States.[284]
On 18 December 1941, Himmler asked Hitler, "What to do with the Jews of Russia?", to which Hitler replied, "als Partisanen auszurotten" ("exterminate them as partisans").[285] Israeli historian Yehuda Bauer has commented that the remark is probably as close as historians will ever get to a definitive order from Hitler for the genocide carried out during the Holocaust.[285]
In late 1942, German forces were defeated in the second battle of El Alamein,[286] thwarting Hitler's plans to seize the Suez Canal and the Middle East. Overconfident in his own military expertise following the earlier victories in 1940, Hitler became distrustful of his Army High Command and began to interfere in military and tactical planning, with damaging consequences.[287] In December 1942 and January 1943, Hitler's repeated refusal to allow their withdrawal at the Battle of Stalingrad led to the almost total destruction of the 6th Army. Over 200,000 Axis soldiers were killed and 235,000 were taken prisoner.[288] Thereafter came a decisive strategic defeat at the Battle of Kursk.[289] Hitler's military judgement became increasingly erratic, and Germany's military and economic position deteriorated, as did Hitler's health.[290]
Following the Allied invasion of Sicily in 1943, Mussolini was removed from power by Victor Emmanuel III after a vote of no confidence by the Grand Council. Marshal Pietro Badoglio, placed in charge of the government, soon surrendered to the Allies.[291] Throughout 1943 and 1944, the Soviet Union steadily forced Hitler's armies into retreat along the Eastern Front. On 6 June 1944, the Western Allied armies landed in northern France in one of the largest amphibious operations in history, Operation Overlord.[292] Many German officers concluded that defeat was inevitable and that continuing under Hitler's leadership would result in the complete destruction of the country.[293]
Between 1939 and 1945, there were many plans to assassinate Hitler, some of which proceeded to significant degrees.[294] The most well known, the 20 July plot of 1944, came from within Germany and was at least partly driven by the increasing prospect of a German defeat in the war.[295] Part of Operation Valkyrie, the plot involved Claus von Stauffenberg planting a bomb in one of Hitler's headquarters, the Wolf's Lair at Rastenburg. Hitler narrowly survived because staff officer Heinz Brandt moved the briefcase containing the bomb behind a leg of the heavy conference table, which deflected much of the blast. Later, Hitler ordered savage reprisals resulting in the execution of more than 4,900 people.[296]
By late 1944, both the Red Army and the Western Allies were advancing into Germany. Recognising the strength and determination of the Red Army, Hitler decided to use his remaining mobile reserves against the American and British troops, which he perceived as far weaker.[297] On 16 December, he launched the Ardennes Offensive to incite disunity among the Western Allies and perhaps convince them to join his fight against the Soviets.[298] The offensive failed after some temporary successes.[299] With much of Germany in ruins in January 1945, Hitler spoke on the radio: "However grave as the crisis may be at this moment, it will, despite everything, be mastered by our unalterable will."[300] Acting on his view that Germany's military failures meant it had forfeited its right to survive as a nation, Hitler ordered the destruction of all German industrial infrastructure before it could fall into Allied hands.[301] Minister for Armaments Albert Speer was entrusted with executing this scorched earth policy, but he secretly disobeyed the order.[301][302] Hitler's hope to negotiate peace with the United States and Britain was encouraged by the death of US President Franklin D. Roosevelt on 12 April 1945, but contrary to his expectations, this caused no rift among the Allies.[298][303]
On 20 April, his 56th birthday, Hitler made his last trip from the Führerbunker (Führer's shelter) to the surface. In the ruined garden of the Reich Chancellery, he awarded Iron Crosses to boy soldiers of the Hitler Youth, who were now fighting the Red Army at the front near Berlin.[304] By 21 April, Georgy Zhukov's 1st Belorussian Front had broken through the defences of General Gotthard Heinrici's Army Group Vistula during the Battle of the Seelow Heights and advanced to the outskirts of Berlin.[305] In denial about the dire situation, Hitler placed his hopes on the undermanned and under-equipped Armeeabteilung Steiner (Army Detachment Steiner), commanded by Waffen-SS General Felix Steiner. Hitler ordered Steiner to attack the northern flank of the salient, while the German Ninth Army was ordered to attack northward in a pincer attack.[306]
During a military conference on 22 April, Hitler asked about Steiner's offensive. He was told that the attack had not been launched and that the Soviets had entered Berlin. Hitler asked everyone except Wilhelm Keitel, Alfred Jodl, Hans Krebs, and Wilhelm Burgdorf to leave the room,[307] then launched into a tirade against the treachery and incompetence of his commanders, culminating in his declaration—for the first time—that "everything was lost".[308] He announced that he would stay in Berlin until the end and then shoot himself.[309]
By 23 April the Red Army had surrounded Berlin,[310] and Goebbels made a proclamation urging its citizens to defend the city.[307] That same day, Göring sent a telegram from Berchtesgaden, arguing that since Hitler was isolated in Berlin, Göring should assume leadership of Germany. Göring set a deadline, after which he would consider Hitler incapacitated.[311] Hitler responded by having Göring arrested, and in his last will and testament of 29 April, he removed Göring from all government positions.[312][313] On 28 April Hitler discovered that Himmler, who had left Berlin on 20 April, was trying to negotiate a surrender to the Western Allies.[314][315] He ordered Himmler's arrest and had Hermann Fegelein (Himmler's SS representative at Hitler's HQ in Berlin) shot.[316]
After midnight on the night of 28–29 April, Hitler married Eva Braun in a small civil ceremony in the Führerbunker.[317][d] Later that afternoon, Hitler was informed that Mussolini had been executed by the Italian resistance movement on the previous day; this presumably increased his determination to avoid capture.[318]
On 30 April 1945, Soviet troops were within a block or two of the Reich Chancellery when Hitler shot himself in the head and Braun bit into a cyanide capsule.[319][320] Their bodies were carried outside to the garden behind the Reich Chancellery, where they were placed in a bomb crater, doused with petrol,[321] and set on fire as the Red Army shelling continued.[322][323] Grand Admiral Karl Dönitz and Joseph Goebbels assumed Hitler's roles as head of state and chancellor respectively.[324]
Berlin surrendered on 2 May. Records in the Soviet archives obtained after the fall of the Soviet Union state that the remains of Hitler, Braun, Joseph and Magda Goebbels, the six Goebbels children, General Hans Krebs, and Hitler's dogs were repeatedly buried and exhumed.[325] On 4 April 1970, a Soviet KGB team used detailed burial charts to exhume five wooden boxes at the SMERSH facility in Magdeburg. The remains from the boxes were burned, crushed, and scattered into the Biederitz river, a tributary of the Elbe.[326] According to Kershaw, the corpses of Braun and Hitler were fully burned when the Red Army found them, and only a lower jaw with dental work could be identified as Hitler's remains.[327]
If the international Jewish financiers in and outside Europe should succeed in plunging the nations once more into a world war, then the result will not be the Bolshevisation of the earth, and thus the victory of Jewry, but the annihilation of the Jewish race in Europe![328]
The Holocaust and Germany's war in the East were based on Hitler's long-standing view that the Jews were the enemy of the German people and that Lebensraum was needed for Germany's expansion. He focused on Eastern Europe for this expansion, aiming to defeat Poland and the Soviet Union and then removing or killing the Jews and Slavs.[329] The Generalplan Ost (General Plan East) called for deporting the population of occupied Eastern Europe and the Soviet Union to West Siberia, for use as slave labour or to be murdered;[330] the conquered territories were to be colonised by German or "Germanised" settlers.[331] The goal was to implement this plan after the conquest of the Soviet Union, but when this failed, Hitler moved the plans forward.[330][332] By January 1942, he had decided that the Jews, Slavs, and other deportees considered undesirable should be killed.[333][e]
The genocide was organised and executed by Heinrich Himmler and Reinhard Heydrich. The records of the Wannsee Conference, held on 20 January 1942 and led by Heydrich, with fifteen senior Nazi officials participating, provide the clearest evidence of systematic planning for the Holocaust. On 22 February, Hitler was recorded saying, "we shall regain our health only by eliminating the Jews".[334] Similarly, at a meeting in July 1941 with leading functionaries of the Eastern territories, Hitler said that pacifying the areas would be accomplished most easily by "shooting everyone who even looks odd".[335] Although no direct order from Hitler authorising the mass killings has surfaced,[336] his public speeches, orders to his generals, and the diaries of Nazi officials demonstrate that he conceived and authorised the extermination of European Jewry.[337][338] During the war, Hitler repeatedly stated that his prophecy of 1939 was being fulfilled, namely, that a world war would bring about the annihilation of the Jewish race.[339] Hitler approved the Einsatzgruppen—killing squads that followed the German army through Poland, the Baltic, and the Soviet Union[340]—and was well informed about their activities.[337][341] By summer 1942, Auschwitz concentration camp was expanded to accommodate large numbers of deportees for killing or enslavement.[342] Scores of other concentration camps and satellite camps were set up throughout Europe, with several camps devoted exclusively to extermination.[343]
Between 1939 and 1945, the Schutzstaffel (SS), assisted by collaborationist governments and recruits from occupied countries, was responsible for the deaths of at least eleven million non-combatants,[344][330] including about 6 million Jews (representing two-thirds of the Jewish population of Europe),[345][f] and between 200,000 and 1,500,000 Romani people.[347][345] Deaths took place in concentration and extermination camps, ghettos, and through mass executions. Many victims of the Holocaust were gassed to death, while others died of starvation or disease or while working as slave labourers.[348] In addition to eliminating Jews, the Nazis planned to reduce the population of the conquered territories by 30 million people through starvation in an action called the Hunger Plan. Food supplies would be diverted to the German army and German civilians. Cities would be razed and the land allowed to return to forest or resettled by German colonists.[349] Together, the Hunger Plan and Generalplan Ost would have led to the starvation of 80 million people in the Soviet Union.[350] These partially fulfilled plans resulted in additional deaths, bringing the total number of civilians and prisoners of war who died in the democide to an estimated 19.3 million people.[351]
Hitler's policies resulted in the killing of nearly two million non-Jewish Polish civilians,[352] over three million Soviet prisoners of war,[353] communists and other political opponents, homosexuals, the physically and mentally disabled,[354][355] Jehovah's Witnesses, Adventists, and trade unionists. Hitler did not speak publicly about the killings, and seems never to have visited the concentration camps.[356]
The Nazis embraced the concept of racial hygiene. On 15 September 1935, Hitler presented two laws—known as the Nuremberg Laws—to the Reichstag. The laws banned sexual relations and marriages between Aryans and Jews and were later extended to include "Gypsies, Negroes or their bastard offspring".[357] The laws stripped all non-Aryans of their German citizenship and forbade the employment of non-Jewish women under the age of 45 in Jewish households.[358] Hitler's early eugenic policies targeted children with physical and developmental disabilities in a programme dubbed Action Brandt, and he later authorised a euthanasia programme for adults with serious mental and physical disabilities, now referred to as Aktion T4.[359]
Hitler ruled the NSDAP autocratically by asserting the Führerprinzip (leader principle). The principle relied on absolute obedience of all subordinates to their superiors; thus he viewed the government structure as a pyramid, with himself—the infallible leader—at the apex. Rank in the party was not determined by elections—positions were filled through appointment by those of higher rank, who demanded unquestioning obedience to the will of the leader.[360] Hitler's leadership style was to give contradictory orders to his subordinates and to place them into positions where their duties and responsibilities overlapped with those of others, to have "the stronger one [do] the job".[361] In this way, Hitler fostered distrust, competition, and infighting among his subordinates to consolidate and maximise his own power. His cabinet never met after 1938, and he discouraged his ministers from meeting independently.[362][363] Hitler typically did not give written orders; instead he communicated verbally, or had them conveyed through his close associate, Martin Bormann.[364] He entrusted Bormann with his paperwork, appointments, and personal finances; Bormann used his position to control the flow of information and access to Hitler.[365]
Hitler dominated his country's war effort during World War II to a greater extent than any other national leader. He strengthened his control of the armed forces in 1938, and subsequently made all major decisions regarding Germany's military strategy. His decision to mount a risky series of offensives against Norway, France, and the Low Countries in 1940 against the advice of the military proved successful, though the diplomatic and military strategies he employed in attempts to force the United Kingdom out of the war ended in failure.[366] Hitler deepened his involvement in the war effort by appointing himself commander-in-chief of the Army in December 1941; from this point forward he personally directed the war against the Soviet Union, while his military commanders facing the Western Allies retained a degree of autonomy.[367] Hitler's leadership became increasingly disconnected from reality as the war turned against Germany, with the military's defensive strategies often hindered by his slow decision making and frequent directives to hold untenable positions. Nevertheless, he continued to believe that only his leadership could deliver victory.[366] In the final months of the war Hitler refused to consider peace negotiations, regarding the destruction of Germany as preferable to surrender.[368] The military did not challenge Hitler's dominance of the war effort, and senior officers generally supported and enacted his decisions.[369]
Hitler created a public image as a celibate man without a domestic life, dedicated entirely to his political mission and the nation.[145][370] He met his lover, Eva Braun, in 1929,[371] and married her on 29 April 1945, one day before they both committed suicide.[372] In September 1931, his half-niece, Geli Raubal, took her own life with Hitler's gun in his Munich apartment. It was rumoured among contemporaries that Geli was in a romantic relationship with him, and her death was a source of deep, lasting pain.[373] Paula Hitler, the younger sister of Hitler and the last living member of his immediate family, died in June 1960.[15]
Hitler was born to a practising Catholic mother and an anticlerical father; after leaving home Hitler never again attended Mass or received the sacraments.[374][375][376] Speer states that Hitler railed against the church to his political associates and though he never officially left it, he had no attachment to it.[377] He adds that Hitler felt that in the absence of organised religion, people would turn to mysticism, which he considered regressive.[377] According to Speer, Hitler believed that Japanese religious beliefs or Islam would have been a more suitable religion for Germans than Christianity, with its "meekness and flabbiness".[378]
Historian John S. Conway states that Hitler was fundamentally opposed to the Christian churches.[379] According to Bullock, Hitler did not believe in God, was anticlerical, and held Christian ethics in contempt because they contravened his preferred view of "survival of the fittest".[380] He favoured aspects of Protestantism that suited his own views, and adopted some elements of the Catholic Church's hierarchical organisation, liturgy, and phraseology.[381]
Hitler viewed the church as an important politically conservative influence on society,[382] and he adopted a strategic relationship with it that "suited his immediate political purposes".[379] In public, Hitler often praised Christian heritage and German Christian culture, though professing a belief in an "Aryan Jesus" who fought against the Jews.[383] Any pro-Christian public rhetoric contradicted his private statements, which described Christianity as "absurdity"[384] and nonsense founded on lies.[385]
According to a US Office of Strategic Services (OSS) report, "The Nazi Master Plan", Hitler planned to destroy the influence of Christian churches within the Reich.[386][387] His eventual goal was the total elimination of Christianity.[388] This goal informed Hitler's movement early on, but he saw it as inexpedient to publicly express this extreme position.[389] According to Bullock, Hitler wanted to wait until after the war before executing this plan.[390]
Speer wrote that Hitler had a negative view of Himmler's and Alfred Rosenberg's mystical notions and Himmler's attempt to mythologise the SS. Hitler was more pragmatic, and his ambitions centred on more practical concerns.[391][392]
Researchers have variously suggested that Hitler suffered from irritable bowel syndrome, skin lesions, irregular heartbeat, coronary sclerosis,[393] Parkinson's disease,[290][394] syphilis,[394] giant-cell arteritis,[395] and tinnitus.[396] In a report prepared for the OSS in 1943, Walter C. Langer of Harvard University described Hitler as a "neurotic psychopath".[397] In his 1977 book The Psychopathic God: Adolf Hitler, historian Robert G. L. Waite proposes that he suffered from borderline personality disorder.[398] Historians Henrik Eberle and Hans-Joachim Neumann consider that while he suffered from a number of illnesses including Parkinson's disease, Hitler did not experience pathological delusions and was always fully aware of, and therefore responsible for, his decisions.[399][308] Theories about Hitler's medical condition are difficult to prove, and placing too much weight on them may have the effect of attributing many of the events and consequences of Nazi Germany to the possibly impaired physical health of one individual.[400] Kershaw feels that it is better to take a broader view of German history by examining what social forces led to the Nazi dictatorship and its policies rather than to pursue narrow explanations for the Holocaust and World War II based on only one person.[401]
Sometime in the 1930s Hitler adopted a mainly vegetarian diet,[402][403] avoiding all meat and fish from 1942 onwards. At social events he sometimes gave graphic accounts of the slaughter of animals in an effort to make his guests shun meat.[404] Bormann had a greenhouse constructed near the Berghof (near Berchtesgaden) to ensure a steady supply of fresh fruit and vegetables for Hitler.[405]
Hitler stopped drinking alcohol around the time he became vegetarian and thereafter only very occasionally drank beer or wine on social occasions.[406][407] He was a non-smoker for most of his adult life, but smoked heavily in his youth (25 to 40 cigarettes a day); he eventually quit, calling the habit "a waste of money".[408] He encouraged his close associates to quit by offering a gold watch to anyone able to break the habit.[409] Hitler began using amphetamine occasionally after 1937 and became addicted to it in late 1942.[410] Speer linked this use of amphetamine to Hitler's increasingly erratic behaviour and inflexible decision making (for example, rarely allowing military retreats).[411]
Prescribed 90 medications during the war years by his personal physician, Theodor Morell, Hitler took many pills each day for chronic stomach problems and other ailments.[412] He regularly consumed amphetamine, barbiturates, opiates, and cocaine,[413][414] as well as potassium bromide and Atropa belladonna (the latter in the form of Doktor Koster's Antigas pills).[415] He suffered ruptured eardrums as a result of the 20 July plot bomb blast in 1944, and 200 wood splinters had to be removed from his legs.[416] Newsreel footage of Hitler shows tremors in his left hand and a shuffling walk, which began before the war and worsened towards the end of his life.[412] Ernst-Günther Schenck and several other doctors who met Hitler in the last weeks of his life also formed a diagnosis of Parkinson's disease.[417]
[Inscription on the memorial stone outside Hitler's birthplace: "For peace, freedom and democracy / never again fascism / millions of dead warn [us]"]
Hitler's suicide was likened by contemporaries to a "spell" being broken.[419][420] Public support for Hitler had collapsed by the time of his death and few Germans mourned his passing; Kershaw argues that most civilians and military personnel were too busy adjusting to the collapse of the country or fleeing from the fighting to take any interest.[421] According to historian John Toland, National Socialism "burst like a bubble" without its leader.[422]
Hitler's actions and Nazi ideology are almost universally regarded as gravely immoral;[423] according to Kershaw, "Never in history has such ruination—physical and moral—been associated with the name of one man."[4] Hitler's political programme brought about a world war, leaving behind a devastated and impoverished Eastern and Central Europe. Germany suffered wholesale destruction, characterised as Stunde Null (Zero Hour).[424] Hitler's policies inflicted human suffering on an unprecedented scale;[425] according to R. J. Rummel, the Nazi regime was responsible for the democidal killing of an estimated 19.3 million civilians and prisoners of war.[344] In addition, 28.7 million soldiers and civilians died as a result of military action in the European Theatre of World War II.[344] The number of civilians killed during the Second World War was unprecedented in the history of warfare.[426] Historians, philosophers, and politicians often use the word "evil" to describe the Nazi regime.[427] Many European countries have criminalised both the promotion of Nazism and Holocaust denial.[428]
Historian Friedrich Meinecke described Hitler as "one of the great examples of the singular and incalculable power of personality in historical life".[429] English historian Hugh Trevor-Roper saw him as "among the 'terrible simplifiers' of history, the most systematic, the most historical, the most philosophical, and yet the coarsest, cruelest, least magnanimous conqueror the world has ever known".[430] For the historian John M. Roberts, Hitler's defeat marked the end of a phase of European history dominated by Germany.[431] In its place emerged the Cold War, a global confrontation between the Western Bloc, dominated by the United States and other NATO nations, and the Eastern Bloc, dominated by the Soviet Union.[432] Historian Sebastian Haffner asserts that without Hitler and the displacement of the Jews, the modern nation state of Israel would not exist. He contends that without Hitler, the de-colonisation of former European spheres of influence would have been postponed.[433] Further, Haffner claims that other than Alexander the Great, Hitler had a more significant impact than any other comparable historical figure, in that he too caused a wide range of worldwide changes in a relatively short time span.[434]
Hitler exploited documentary films and newsreels to inspire a cult of personality. He was involved and appeared in a series of propaganda films throughout his political career, many made by Leni Riefenstahl, regarded as a pioneer of modern filmmaking.[435] Hitler's propaganda film appearances include:
en/2588.html.txt
ADDED
@@ -0,0 +1,49 @@
Winter is the coldest season of the year in polar and temperate zones (winter does not occur in most of the tropical zone). It occurs after autumn and before spring in each year. Winter is caused by the axis of the Earth in that hemisphere being oriented away from the Sun. Different cultures define different dates as the start of winter, and some use a definition based on weather. When it is winter in the Northern Hemisphere, it is summer in the Southern Hemisphere, and vice versa. In many regions, winter is associated with snow and freezing temperatures. The moment of winter solstice is when the Sun's elevation with respect to the North or South Pole is at its most negative value (that is, the Sun is at its farthest below the horizon as measured from the pole). The day on which this occurs has the shortest day and the longest night, with day length increasing and night length decreasing as the season progresses after the solstice. The earliest sunset and latest sunrise dates outside the polar regions differ from the date of the winter solstice, however, and these depend on latitude, due to the variation in the solar day throughout the year caused by the Earth's elliptical orbit (see earliest and latest sunrise and sunset).
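The dependence of day length on latitude and on the Sun's declination can be made concrete with the standard sunrise equation. The following Python snippet is a simplified model only: it assumes a spherical Earth, treats the Sun as a point, and ignores atmospheric refraction, so real day lengths differ by several minutes.

import math

def day_length_hours(latitude_deg: float, declination_deg: float) -> float:
    """Approximate daylight duration from the sunrise equation:
    cos(omega0) = -tan(latitude) * tan(declination)."""
    phi = math.radians(latitude_deg)
    delta = math.radians(declination_deg)
    cos_omega = -math.tan(phi) * math.tan(delta)
    # Clamp the cosine for polar day (< -1) and polar night (> 1).
    cos_omega = max(-1.0, min(1.0, cos_omega))
    omega = math.acos(cos_omega)  # half the daylight arc, in radians
    return 24.0 * omega / math.pi

# Northern winter solstice: solar declination is about -23.44 degrees.
print(round(day_length_hours(52.5, -23.44), 1))  # roughly 7.4 h at 52.5 N
print(round(day_length_hours(70.0, -23.44), 1))  # 0.0 h: polar night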
The English word winter comes from the Proto-Germanic noun *wintru-, whose origin is unclear. Several proposals exist, a commonly mentioned one connecting it to the Proto-Indo-European root *wed- 'water' or a nasal infix variant *wend-.[1]
The tilt of the Earth's axis relative to its orbital plane plays a large role in the formation of weather. The Earth is tilted at an angle of 23.44° to the plane of its orbit, causing different latitudes to directly face the Sun as the Earth moves through its orbit. This variation brings about seasons. When it is winter in the Northern Hemisphere, the Southern Hemisphere faces the Sun more directly and thus experiences warmer temperatures than the Northern Hemisphere. Conversely, winter in the Southern Hemisphere occurs when the Northern Hemisphere is tilted more toward the Sun. From the perspective of an observer on the Earth, the winter Sun has a lower maximum altitude in the sky than the summer Sun.
During winter in either hemisphere, the lower altitude of the Sun causes the sunlight to hit the Earth at an oblique angle. Thus a lower amount of solar radiation strikes the Earth per unit of surface area. Furthermore, the light must travel a longer distance through the atmosphere, allowing the atmosphere to dissipate more heat. Compared with these effects, the effect of the changes in the distance of the Earth from the Sun (due to the Earth's elliptical orbit) is negligible.
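The reduction in solar radiation per unit of surface area follows Lambert's cosine law: the flux on a horizontal surface scales with the cosine of the solar zenith angle. A minimal Python sketch of the noon comparison follows; it deliberately ignores the atmospheric path-length losses and the shorter winter day noted above, so it understates the real winter deficit.

import math

def relative_noon_insolation(latitude_deg: float, declination_deg: float) -> float:
    """Solar power per unit of horizontal area at local noon, relative to an
    overhead Sun; returns 0 if the noon Sun is below the horizon."""
    zenith_deg = latitude_deg - declination_deg  # noon zenith angle (N hemisphere)
    return max(0.0, math.cos(math.radians(zenith_deg)))

# At 49 N, the latitude of Winnipeg and Vancouver discussed below:
summer = relative_noon_insolation(49.0, +23.44)  # about 0.90
winter = relative_noon_insolation(49.0, -23.44)  # about 0.30
print(round(summer / winter, 1))  # noon flux differs by roughly a factor of 3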
The manifestation of the meteorological winter (freezing temperatures) in the northerly snow-prone latitudes is highly variable depending on elevation, position versus marine winds and the amount of precipitation. For instance, within Canada (a country of cold winters), Winnipeg on the Great Plains, a long way from the ocean, has a January high of −11.3 °C (11.7 °F) and a low of −21.4 °C (−6.5 °F).[2] In comparison, Vancouver on the west coast with a marine influence from moderating Pacific winds has a January low of 1.4 °C (34.5 °F) with days well above freezing at 6.9 °C (44.4 °F).[3] Both places are at 49°N latitude, and in the same western half of the continent. A similar but less extreme effect is found in Europe: in spite of their northerly latitude, the British Isles have not a single non-mountain weather station with a below-freezing mean January temperature.[4]
Meteorological reckoning is the method of measuring the winter season used by meteorologists based on "sensible weather patterns" for record keeping purposes,[5] so the start of meteorological winter varies with latitude.[6] Winter is often defined by meteorologists to be the three calendar months with the lowest average temperatures. This corresponds to the months of December, January and February in the Northern Hemisphere, and June, July and August in the Southern Hemisphere. The coldest average temperatures of the season are typically experienced in January or February in the Northern Hemisphere and in June, July or August in the Southern Hemisphere. Nighttime predominates in the winter season, and in some regions winter has the highest rate of precipitation as well as prolonged dampness because of permanent snow cover or high precipitation rates coupled with low temperatures, precluding evaporation. Blizzards often develop and cause many transportation delays. Diamond dust, also known as ice needles or ice crystals, forms at temperatures approaching −40 °C (−40 °F) due to air with slightly higher moisture from above mixing with colder, surface-based air.[7] They are made of simple hexagonal ice crystals.[8] The Swedish meteorological institute (SMHI) defines thermal winter as when the daily mean temperatures are below 0 °C (32 °F) for five consecutive days.[9] According to the SMHI, winter in Scandinavia is more pronounced when Atlantic low-pressure systems take more southerly and northerly routes, leaving the path open for high-pressure systems to come in and cold temperatures to occur. As a result, the coldest January on record in Stockholm, in 1987, was also the sunniest.[10][11]
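The SMHI-style threshold ("daily mean below 0 °C for five consecutive days") is straightforward to express as a scan over a daily series. A minimal sketch in Python, assuming the input dates are consecutive; the function name and data layout are illustrative, not SMHI's actual tooling.

from datetime import date, timedelta

def thermal_winter_onset(daily_means: dict[date, float]) -> date | None:
    """Return the first day of the first run of five consecutive days whose
    daily mean temperature is below 0 C, mirroring the SMHI-style definition
    above; returns None if no such run exists."""
    run_start, run_length = None, 0
    for day in sorted(daily_means):
        if daily_means[day] < 0.0:
            if run_length == 0:
                run_start = day
            run_length += 1
            if run_length == 5:
                return run_start
        else:
            run_length = 0
    return None

# Illustrative data: thermal winter begins on the 11th here.
start = date(2023, 11, 10)
temps = [1.2, -0.5, -1.0, -2.3, -0.1, -4.0, 0.3]
series = {start + timedelta(days=i): t for i, t in enumerate(temps)}
print(thermal_winter_onset(series))  # 2023-11-11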
Accumulations of snow and ice are commonly associated with winter in the Northern Hemisphere, due to the large land masses there. In the Southern Hemisphere, the more maritime climate and the relative lack of land south of 40°S makes the winters milder; thus, snow and ice are less common in inhabited regions of the Southern Hemisphere. In this region, snow occurs every year in elevated regions such as the Andes, the Great Dividing Range in Australia, and the mountains of New Zealand, and also occurs in the southerly Patagonia region of South Argentina. Snow occurs year-round in Antarctica.
In the Northern Hemisphere, some authorities define the period of winter based on astronomical fixed points (i.e. based solely on the position of the Earth in its orbit around the Sun), regardless of weather conditions. In one version of this definition, winter begins at the winter solstice and ends at the March equinox.[12] These dates are somewhat later than those used to define the beginning and end of the meteorological winter – usually considered to span the entirety of December, January, and February in the Northern Hemisphere and June, July, and August in the Southern.[12]
Astronomically, the winter solstice, being the day of the year which has fewest hours of daylight, ought to be in the middle of the season,[13][14] but seasonal lag means that the coldest period normally follows the solstice by a few weeks. In some cultures, the season is regarded as beginning at the solstice and ending on the following equinox[15][16] – in the Northern Hemisphere, depending on the year, this corresponds to the period between 20, 21 or 22 December and 19, 20 or 21 March.[12]
In the United Kingdom, meteorologists consider winter to be the three coldest months of December, January and February.[17] In Scandinavia, winter in one tradition begins on 14 October and ends on the last day of February.[18] In Russia, calendar winter is widely regarded to start on 1 December[19] and end on 28 February.[20] In many countries in the Southern Hemisphere, including Australia,[21][22] New Zealand and South Africa, winter begins on 1 June and ends on 31 August. In Celtic nations such as Ireland (using the Irish calendar) and in Scandinavia, the winter solstice is traditionally considered as midwinter, with the winter season beginning 1 November, on All Hallows, or Samhain. Winter ends and spring begins on Imbolc, or Candlemas, which is 1 or 2 February. This system of seasons is based on the length of days exclusively. (The three-month period of the shortest days and weakest solar radiation occurs during November, December and January in the Northern Hemisphere and May, June and July in the Southern Hemisphere.)
Also, many mainland European countries tended to recognize Martinmas or St. Martin's Day (11 November) as the first calendar day of winter.[23] The day falls at the midpoint between the old Julian equinox and solstice dates. Similarly, Valentine's Day (14 February) is recognized by some countries as heralding the first rites of spring, such as flowers blooming.[24]
In Chinese astronomy and other East Asian calendars, winter is taken to commence on or around 7 November, with the Jiéqì (known as 立冬 lì dōng – literally, "establishment of winter").
The three-month period associated with the coldest average temperatures typically begins somewhere in late November or early December in the Northern Hemisphere and lasts through late February or early March. This "thermological winter" is earlier than the solstice delimited definition, but later than the daylight (Celtic) definition. Depending on seasonal lag, this period will vary between climatic regions.
Cultural influences such as Christmas creep may have led to the winter season being perceived as beginning earlier in recent years, although high latitude countries like Canada are usually well into their real winters before the December solstice.
Since by almost all definitions valid for the Northern Hemisphere, winter spans 31 December and 1 January, the season is split across years, just like summer in the Southern Hemisphere. Each calendar year includes parts of two winters. This causes ambiguity in associating a winter with a particular year, e.g. "Winter 2018". Solutions for this problem include naming both years, e.g. "Winter 18/19", or settling on the year the season starts in or on the year most of its days belong to, which is the later year for most definitions.
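The year-straddling naming convention can be captured in a few lines of Python. This is a sketch for Northern Hemisphere meteorological winter (December to February); the function name and the decision to reject non-winter dates are illustrative choices, not a standard.

from datetime import date

def winter_label(d: date) -> str:
    """Label a Northern Hemisphere meteorological-winter date (Dec to Feb)
    with both calendar years, e.g. "Winter 18/19". December belongs to the
    winter starting that year; January and February to the previous one."""
    if d.month == 12:
        first = d.year
    elif d.month in (1, 2):
        first = d.year - 1
    else:
        raise ValueError("not a meteorological-winter date")
    return f"Winter {first % 100:02d}/{(first + 1) % 100:02d}"

print(winter_label(date(2018, 12, 31)))  # Winter 18/19
print(winter_label(date(2019, 1, 1)))    # Winter 18/19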
Ecological reckoning of winter differs from calendar-based by avoiding the use of fixed dates. It is one of six seasons recognized by most ecologists who customarily use the term hibernal for this period of the year (the other ecological seasons being prevernal, vernal, estival, serotinal, and autumnal).[25] The hibernal season coincides with the main period of biological dormancy each year whose dates vary according to local and regional climates in temperate zones of the Earth. The appearance of flowering plants like the crocus can mark the change from ecological winter to the prevernal season as early as late January in mild temperate climates.
To survive the harshness of winter, many animals have developed different behavioral and morphological adaptations for overwintering:
Some annual plants never survive the winter. Other annual plants require winter cold to complete their life cycle; this is known as vernalization. As for perennials, many small ones profit from the insulating effects of snow by being buried in it. Larger plants, particularly deciduous trees, usually let their upper part go dormant, but their roots are still protected by the snow layer. Few plants bloom in the winter, one exception being the flowering plum, which flowers in time for Chinese New Year. The process by which plants become acclimated to cold weather is called hardening.
Humans are sensitive to cold; see hypothermia. Snow blindness, norovirus, and seasonal depression are further health concerns associated with winter, as are slipping on black ice and being struck by falling icicles. In the Northern Hemisphere, it is not unusual for homeless people to die from hypothermia in the winter.
One of the most common diseases associated with winter is influenza.[28]
In Persian culture, the winter solstice is called Yaldā (meaning: birth) and it has been celebrated for thousands of years. It is referred to as the eve of the birth of Mithra, who symbolised light, goodness and strength on earth.
In Greek mythology, Hades kidnapped Persephone to be his wife. Zeus ordered Hades to return her to Demeter, the goddess of the Earth and her mother. However, Hades tricked Persephone into eating the food of the dead, so Zeus decreed that Persephone would spend six months with Demeter and six months with Hades. During the time her daughter was with Hades, Demeter became depressed and caused winter.
In Welsh mythology, Gwyn ap Nudd abducted a maiden named Creiddylad. On May Day, her lover, Gwythr ap Greidawl, fought Gwyn to win her back. The battle between them represented the contest between summer and winter.
en/2589.html.txt
ADDED
@@ -0,0 +1,224 @@
Field hockey is a widely played team sport of the hockey family. The game can be played on grass, watered turf, artificial turf or synthetic field, as well as an indoor boarded surface. Each team plays with ten field players and a goalkeeper. Players commonly use sticks made out of wood, carbon fibre, fibreglass or a combination of carbon fibre and fibreglass in different quantities (with the higher carbon fibre stick being more expensive and less likely to break) to hit a round, hard, plastic hockey ball. The length of the hockey stick is based on the player's individual height: the top of the stick usually comes to the player's hip, and taller players typically have taller sticks.[1] The sticks have a round side and a flat side; only the flat face of the stick is allowed to be used, and using the other side results in a foul. Goalies often have a different kind of stick, but they can also use an ordinary field hockey stick. The specific goalkeeping sticks have another curve at the end, which gives them more surface area with which to save the ball. The uniform consists of shin guards, shoes, shorts or a skirt, a mouthguard and a jersey.
Today, the game is played globally, mainly in parts of Western Europe, South Asia, Southern Africa, Australia, New Zealand, Argentina, and parts of the United States (primarily New England and the Mid-Atlantic states).[2][3]
Known simply as "hockey" in most territories, the term "field hockey" is used primarily in Canada and the United States where ice hockey is more popular. In Sweden, the term landhockey is used and to some degree also in Norway where it is governed by Norway's Bandy Association.[4]
During play, goal keepers are the only players who are allowed to touch the ball with any part of their body (the player's hand is considered part of the stick if on the stick), while field players play the ball with the flat side of their stick. If the ball is touched with the rounded part of the stick, it will result in a penalty. Goal keepers also cannot play the ball with the back of their stick.
Whoever scores the most goals by the end of the match wins. If the score is tied at the end of the game, either a draw is declared or the game goes into extra time or a penalty shootout, depending on the competition's format. There are many variations to overtime play that depend on the league and tournament play. In college play, a seven-a-side overtime period consists of a 10-minute golden goal period with seven players for each team. If a tie still remains, the game enters a one-on-one competition where each team chooses five players to dribble from the 25-yard line down to the circle against the opposing goalkeeper. The player has 8 seconds to score on the goalkeeper while keeping the ball in bounds. The play ends after a goal is scored, the ball goes out of bounds, a foul is committed (ending in either a penalty stroke or flick, or the end of the one-on-one) or time expires. If the tie persists, more rounds are played until one team comes out ahead, as in the sketch below.
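To make the round structure of this shootout format concrete, here is a toy Python simulation. It is a sketch only: the scoring probabilities p_a and p_b are illustrative assumptions, not real statistics, and each attempt is reduced to a single success/failure draw.

import random

def shootout_winner(p_a: float = 0.4, p_b: float = 0.4) -> str:
    """Toy model of the one-on-one format described above: five attempts
    per side, then extra paired rounds until one side comes out ahead."""
    score_a = sum(random.random() < p_a for _ in range(5))
    score_b = sum(random.random() < p_b for _ in range(5))
    while score_a == score_b:
        # Continuation rounds: one further attempt for each team.
        score_a += random.random() < p_a
        score_b += random.random() < p_b
    return "A" if score_a > score_b else "B"

print(shootout_winner())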
The governing body of field hockey is the International Hockey Federation (FIH), which is called the Fédération Internationale de Hockey in French, with men and women being represented internationally in competitions including the Olympic Games, World Cup, World League, Champions Trophy and Junior World Cup, with many countries running extensive junior, senior, and masters club competitions. The FIH is also responsible for organizing the Hockey Rules Board and developing the rules for the game.
A popular variant of field hockey is indoor field hockey, which differs in a number of respects while embodying the primary principles of hockey. Indoor hockey is a 5-a-side variant, with a field which is reduced to approximately 40 m × 20 m (131 ft × 66 ft). With many of the rules remaining the same, including obstruction and feet, there are several key variations: Players may not raise the ball unless shooting on goal, players may not hit the ball (instead using pushes to transfer the ball), and the sidelines are replaced with solid barriers which the ball will rebound off.[5] In addition, the regulation guidelines for the indoor field hockey stick require a slightly thinner, lighter stick than an outdoor stick.[6]
There is a depiction of a field hockey-like game in Ancient Greece, dating to c. 510 BC, when the game may have been called Κερητίζειν (kerētízein) because it was played with a horn (κέρας, kéras, in Ancient Greek) and a ball.[7] Researchers disagree over how to interpret this image. It could have been a team or one-on-one activity (the depiction shows two active players, and other figures who may be teammates awaiting a face-off, or non-players waiting for their turn at play). Billiards historians Stein and Rubino believe it was among the games ancestral to lawn-and-field games like hockey and ground billiards, and near-identical depictions (but with only two figures) appear both in the Beni Hasan tomb of Ancient Egyptian administrator Khety of the 11th Dynasty (c. 2000 BCE), and in European illuminated manuscripts and other works of the 14th through 17th centuries, showing contemporary courtly and clerical life.[8] In East Asia, a similar game was entertained, using a carved wooden stick and ball prior, to 300 BC.[9] In Inner Mongolia, China, the Daur people have for about 1,000 years been playing beikou, a game with some similarities to field hockey.[10] A similar field hockey or ground billiards variant, called suigan, was played in China during the Ming dynasty (1368–1644, post-dating the Mongol-led Yuan dynasty).[8] A game similar to field hockey was played in the 17th century in Punjab state in India under name khido khundi (khido refers to the woolen ball, and khundi to the stick).[11]
In South America, more specifically in Chile, the local natives of the 16th century used to play a game called chueca, which also shares common elements with hockey.[12]
In Northern Europe, the games of hurling (Ireland) and Knattleikr (Iceland), both team ball games involving sticks to drive a ball to the opponents' goal, date at least as far back as the Early Middle Ages. By the 12th century, a team ball game called la soule or choule, akin to a chaotic and sometimes long-distance version of hockey or rugby football (depending on whether sticks were used in a particular local variant), was regularly played in France and southern Britain between villages or parishes. Throughout the Middle Ages to the Early Modern era, such games often involved the local clergy or secular aristocracy, and in some periods were limited to them by various anti-gaming edicts, or even banned altogether.[8] Stein and Rubino, among others, ultimately trace aspects of these games both to rituals in antiquity involving orbs and sceptres (on the aristocratic and clerical side), and to ancient military training exercises (on the popular side); polo (essentially hockey on horseback) was devised by the Ancient Persians for cavalry training, based on the local proto-hockey foot game of the region.[8]
The word hockey itself has no clear origin. One belief is that it was recorded in 1363 when Edward III of England issued the proclamation: "Moreover we ordain that you prohibit under penalty of imprisonment all and sundry from such stone, wood and iron throwing; handball, football, or hockey; coursing and cock-fighting, or other such idle games."[13] The belief is based on modern translations of the proclamation, which was originally in Latin and explicitly forbade the games "Pilam Manualem, Pedivam, & Bacularem: & ad Canibucam & Gallorum Pugnam". It may be recalled at this point that baculum is the Latin for 'stick', so the reference would appear to be to a game played with sticks. The English historian and biographer John Strype did not use the word "hockey" when he translated the proclamation in 1720, and the word 'hockey' remains of unknown origin.
The modern game grew from English public schools in the early 19th century. The first club was in 1849 at Blackheath in south-east London, but the modern rules grew out of a version played by Middlesex cricket clubs as a winter game.[citation needed] Teddington Hockey Club formed the modern game by introducing the striking circle and changing the ball from a rubber cube to a sphere.[14] The Hockey Association was founded in 1886. The first international competition took place in 1895 (Ireland 3, Wales 0), and the International Rules Board was founded in 1900.
Field hockey was played at the Summer Olympics in 1908 and 1920. It was dropped in 1924, leading to the foundation of the Fédération Internationale de Hockey sur Gazon (FIH) as an international governing body by seven continental European nations; and hockey was reinstated as an Olympic game in 1928. Men's hockey united under the FIH in 1970.
The two oldest trophies are the Irish Senior Cup, which dates back to 1894, and the Irish Junior Cup, a second XI-only competition[clarification needed] instituted in 1895.[15]
In India, the Beighton Cup and the Aga Khan tournament commenced within ten years.[clarification needed] Entering the Olympics in 1928, India won all five games without conceding a goal, and won from 1932 until 1956 and then in 1964 and 1980. Pakistan won in 1960, 1968 and 1984.
In the early 1970s, artificial turf began to be used. Synthetic pitches changed most aspects of field hockey, most notably increasing the speed of the game. New tactics and techniques such as the Indian dribble developed, followed by new rules to take account of these changes. The switch to synthetic surfaces ended Indian and Pakistani domination because artificial turf was too expensive in developing countries. Since the 1970s, Australia, the Netherlands, and Germany have dominated at the Olympics and World Cup stages.
Women's field hockey was first played at British universities and schools. The first club, the Molesey Ladies, was founded in 1887.[citation needed] The first national association was the Irish Ladies Hockey Union in 1894,[citation needed] and though rebuffed by the Hockey Association, women's field hockey grew rapidly around the world. This led to the International Federation of Women's Hockey Association (IFWHA) in 1927, though this did not include many continental European countries where women played as sections of men's associations and were affiliated to the FIH. The IFWHA held conferences every three years, and tournaments associated with these were the primary IFWHA competitions. These tournaments were non-competitive until 1975.
By the early 1970s, there were 22 associations with women's sections in the FIH and 36 associations in the IFWHA. Discussions started about a common rule book. The FIH introduced competitive tournaments in 1974, forcing the acceptance of the principle of competitive field hockey by the IFWHA in 1973. It took until 1982 for the two bodies to merge, but this allowed the introduction of women's field hockey to the Olympic games from 1980 where, as in the men's game, The Netherlands, Germany, and Australia have been consistently strong. Argentina has emerged as a team to be reckoned with since 2000, winning the world championship in 2002 and 2010 and medals at the last three Olympics.
In the United States field hockey is played predominantly by females. However, outside North America, participation is now fairly evenly balanced between men and women. For example, in England, England Hockey reports that as of the 2008–09 season there were 2488 registered men's teams, 1969 women's teams, 1042 boys' teams, 966 girls' teams and 274 mixed teams.[17] In 2006 the Irish Hockey Association reported that the gender split among its players was approximately 65% female and 35% male.[18] In its 2008 census, Hockey Australia reported 40,534 male club players and 41,542 female.[19] However, in the United States of America, there are few field hockey clubs, with most play taking place between high school or college sides consisting almost entirely of women. The strength of college field hockey reflects the impact of Title IX, which mandated that colleges should fund men's and women's games programmes comparably.
The game's roots in the English public girls' school mean that the game is associated in the UK with active or overachieving middle class and upper class women. For example, in Nineteen Eighty-Four, George Orwell's novel set in a totalitarian London, main character Winston Smith initially dislikes Julia, the woman he comes to love, because of "the atmosphere of hockey-fields and cold baths and community hikes and general clean-mindedness which she managed to carry about with her."[20]
The game of field hockey is also very present in the United States. Many high schools and colleges in the U.S. offer the sport, and in some areas it is even offered for youth athletes. It has been predominantly played on the East Coast, specifically the Mid-Atlantic in states such as New Jersey, New York, Pennsylvania, Maryland, and Virginia. In recent years, however, it has become increasingly present on the West Coast and in the Midwest. Field hockey is still not played much in the Southern states of the U.S., but it is offered at a few colleges in Tennessee and Kentucky.
Most hockey field dimensions were originally fixed using whole numbers of imperial measures. Nevertheless, metric measurements are now the official dimensions as laid down by the International Hockey Federation (FIH) in the "Rules of Hockey".[21] The pitch is a 91.4 m × 55 m (100.0 yd × 60.1 yd) rectangular field. At each end is a goal 2.14 m (7 ft) high and 3.66 m (12 ft) wide, as well as lines across the field 22.90 m (25 yd) from each end-line (generally referred to as the 23-metre lines or the 25-yard lines) and in the center of the field. A spot 0.15 m (6 in) in diameter, called the penalty spot or stroke mark, is placed with its centre 6.40 m (7 yd) from the centre of each goal. The shooting circle is 15 m (16 yd) from the base line.
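Since the official metric dimensions are rounded conversions of the original imperial measures, it can be handy to keep both side by side. A minimal Python sketch follows; the dictionary keys and helper function are illustrative, not from any official FIH source.

YARD_IN_METRES = 0.9144  # exact definition of the international yard

# Official metric pitch dimensions quoted above, with their imperial origins.
PITCH = {
    "length_m": 91.4,           # originally 100 yd
    "width_m": 55.0,            # roughly 60 yd
    "goal_height_m": 2.14,      # 7 ft
    "goal_width_m": 3.66,       # 12 ft
    "23_metre_line_m": 22.90,   # 25 yd from each end-line
    "penalty_spot_m": 6.40,     # 7 yd from the centre of each goal
    "shooting_circle_m": 15.0,  # about 16 yd from the base line
}

def yards_to_metres(yd: float) -> float:
    """Convert yards to metres using the exact international yard."""
    return yd * YARD_IN_METRES

print(round(yards_to_metres(100), 2))  # 91.44, rounded to 91.4 in the rules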
Field hockey goals are made of two upright posts, joined at the top by a horizontal crossbar, with a net positioned to catch the ball when it passes through the goalposts. The goalposts and crossbar must be white and rectangular in shape, and should be 2 in (51 mm) wide and 2–3 in (51–76 mm) deep.
Field hockey goals also include sideboards and a backboard, which stand 50 cm (20 in) from the ground. The backboard runs the full 3.66 m (12.0 ft) width of the goal, while the sideboards are 1.2 m (3 ft 11 in) deep.[22]
Historically the game developed on natural grass turf. In the early 1970s, "synthetic grass" fields began to be used for hockey, with the first Olympic Games on this surface being held at Montreal in 1976. Synthetic pitches are now mandatory for all international tournaments and for most national competitions. While hockey is still played on traditional grass fields at some local levels and lesser national divisions, it has been replaced by synthetic surfaces almost everywhere in the western world. There are three main types of artificial hockey surface:[23][24][25]
Since the 1970s, sand-based pitches have been favoured as they dramatically speed up the game. However, in recent years there has been a massive increase in the number of "water-based" artificial turfs. Water-based synthetic turfs enable the ball to be transferred more quickly than on sand-based surfaces. It is this characteristic that has made them the surface of choice for international and national league competitions. Water-based surfaces are also less abrasive than sand-based surfaces and reduce the level of injury to players when they come into contact with the surface. The FIH are now[when?] proposing that new surfaces being laid should be of a hybrid variety which require less watering. This is due to the negative ecological effects of the high water requirements of water-based synthetic fields. It has also been stated that the decision to make artificial surfaces mandatory greatly favoured more affluent countries who could afford these new pitches.[26]
The game is played between two teams of eleven: ten field players and one goalkeeper are permitted to be on the pitch at any one time. The remaining players may be substituted in any combination, and there is no limit on the number of substitutions a team can make. Substitutions are permitted at any point in the game, apart from between the award and the end of a penalty corner; the two exceptions to this rule are injury or suspension of the defending goalkeeper (which does not apply when a team is playing with a field player as keeper), and the case where a player exits the field, in which case the team must wait until after the inserter touches the ball before sending a replacement on.
Players are permitted to play the ball with the flat of the 'face side' and with the edges of the head and handle of the field hockey stick with the exception that, for reasons of safety, the ball may not be struck 'hard' with a forehand edge stroke, because of the difficulty of controlling the height and direction of the ball from that stroke.
The flat side is always on the "natural" side for a right-handed person swinging the stick at the ball from right to left. Left-handed sticks are rare, but available; however they are pointless as the rules forbid their use in a game. To make a strike at the ball with a left-to-right swing the player must present the flat of the 'face' of the stick to the ball by 'reversing' the stick head, i.e. by turning the handle through approximately 180° (while a reverse edge hit would turn the stick head through approximately 90° from the position of an upright forehand stroke with the 'face' of the stick head).
Edge hitting of the ball underwent a two-year "experimental period", twice the usual length of an "experimental trial" and is still a matter of some controversy within the game. Ric Charlesworth, the former Australian coach, has been a strong critic of the unrestricted use of the reverse edge hit. The 'hard' forehand edge hit was banned after similar concerns were expressed about the ability of players to direct the ball accurately, but the reverse edge hit does appear to be more predictable and controllable than its counterpart. This type of hit is now more commonly referred to as the "forehand sweep" where the ball is hit with the flat side or "natural" side of the stick and not the rounded edge.
Other rules include: no foot-to-ball contact, no use of hands, no obstructing other players, no high back swing, no hacking, and no third party. If a player is dribbling the ball and either loses control and kicks the ball or another player interferes, that player is not permitted to regain control and continue dribbling. The rules do not allow the person who kicked the ball to gain advantage from the kick, so the ball will automatically be passed on to the opposing team. Conversely, if no advantage is gained from kicking the ball, play should continue. Players may not obstruct another's chance of hitting the ball in any way, and may not shove or use their body or stick to prevent the other team's advancement; the penalty is that the opposing team receives the ball, and if the problem continues, the offending player can be carded. While a player is taking a free hit or starting a corner, the back swing of their hit cannot be too high, as this is considered dangerous. Finally, there may not be three players touching the ball at one time. Two players from opposing teams can battle for the ball; however, if another player interferes, it is considered third party and the ball automatically goes to the team that had only one player involved.
A match ordinarily consists of two periods of 35 minutes and a halftime interval of 5 minutes. Other period lengths and intervals may be agreed by both teams except as specified in Regulations for particular competitions.[27] Since 2014, some international games have had four 15-minute quarters with a 2-minute break between each quarter and a 15-minute break between the second and third quarters. At the 2018 Commonwealth Games, held on the Gold Coast in Queensland, Australia, the hockey games for both men and women had four 15-minute quarters.
In December 2018 the FIH announced rule changes that would make 15-minute quarters universal from January 2019. England Hockey confirmed that while no changes would be made to the domestic game mid-season, the new rules would be implemented at the start of the 2019-20 season. However, in July 2019 England Hockey announced that 17.5-minute quarters would only be implemented in elite domestic club games.[28]
The game begins with a pass back from the centre-forward, usually to the centre-half back, from the halfway line; the opposing team cannot attempt to tackle this play until the ball has been pushed back. The team consists of eleven players, usually set up as follows: Goalkeeper, Left Fullback, Right Fullback, 3 half-backs and 4 forwards consisting of Left Wing, Left Inner, Right Inner and Right Wing.[contradictory] These positions can change and adapt throughout the course of the game depending on the attacking and defensive style of the opposition.[29]
When hockey positions are discussed, notions of fluidity are very common. Each team can be fielded with a maximum of 11 players and will typically arrange themselves into forwards, midfielders, and defensive players (fullbacks) with players frequently moving between these lines with the flow of play. Each team may also play with:
* a goalkeeper who wears a different color shirt and full protective equipment comprising at least headgear, leg guards and kickers; this player is referred to in the rules as a goalkeeper; or
* only field players; no player has goalkeeping privileges or wears a different color shirt; no player may wear protective headgear except a face mask when defending a penalty corner or stroke.[5]
As hockey has a very dynamic style of play, it is difficult to simplify positions to the static formations which are common in football. Although positions will typically be categorized as either fullback, halfback, midfield/inner or striker, it is important for players to have an understanding of every position on the field. For example, it is not uncommon to see a halfback overlap and end up in either attacking position, with the midfield and strikers being responsible for re-adjusting to fill the space they left. Movement between lines like this is particularly common across all positions.
This fluid Australian culture[further explanation needed] of hockey has been responsible for developing an international trend towards players occupying spaces on the field, not having assigned positions. Although they may have particular spaces on the field which they are more comfortable and effective as players, they are responsible for occupying the space nearest them. This fluid approach to hockey and player movement has made it easy for teams to transition between formations such as; "3 at the back", "2 centre halves", "5 at the front", and more.
When the ball is inside the circle they are defending and they have their stick in their hand, goalkeepers wearing full protective equipment are permitted to use their stick, feet, kickers or leg guards to propel the ball and to use their stick, feet, kickers, leg guards or any other part of their body to stop the ball or deflect it in any direction including over the back line. Similarly, field players are permitted to use their stick. They are not allowed to use their feet and legs to propel the ball, stop the ball or deflect it in any direction including over the back line. However, neither goalkeepers, or players with goalkeeping privileges are permitted to conduct themselves in a manner which is dangerous to other players by taking advantage of the protective equipment they wear.[5]
Neither goalkeepers nor players with goalkeeping privileges may lie on the ball; however, they are permitted to use arms, hands and any other part of their body to push the ball away. Deliberately lying on the ball will result in a penalty stroke, whereas if an umpire deems that a goalkeeper has lain on the ball accidentally (e.g. it gets stuck in their protective equipment), a penalty corner is awarded.
* The action above is permitted only as part of a goal-saving action or to move the ball away from the possibility of a goal-scoring action by opponents. It does not permit a goalkeeper or player with goalkeeping privileges to propel the ball forcefully with arms, hands or body so that it travels a long distance.
When the ball is outside the circle they are defending, goalkeepers or players with goalkeeping privileges are only permitted to play the ball with their stick. Further, a goalkeeper, or player with goalkeeping privileges who is wearing a helmet, must not take part in the match outside the 23m area they are defending, except when taking a penalty stroke. A goalkeeper must wear protective headgear at all times, except when taking a penalty stroke.
For the purposes of the rules, all players on the team in possession of the ball are attackers, and those on the team without the ball are defenders; nonetheless, throughout the game each team is always defending its own goal and attacking the opposite goal.[30]
The match is officiated by two field umpires. Traditionally each umpire generally controls half of the field, divided roughly diagonally. These umpires are often assisted by a technical bench including a timekeeper and record keeper.
Prior to the start of the game, a coin is tossed and the winning captain can choose a starting end or whether to start with the ball. Since 2017 the game has consisted of four periods of 15 minutes, with a two-minute break after every period and a 15-minute half-time intermission, after which the teams change ends. At the start of each period, and after each goal is scored, play is started with a pass from the centre of the field: the team that conceded the goal takes the restart, all players must be in their defensive half apart from the player making the pass, and the ball may be played in any direction along the floor.
Field players may only play the ball with the face of the stick; use of the back side of the stick is a penalty, and the other team gets possession. Tackling is permitted as long as the tackler does not make contact with the attacker or the attacker's stick before playing the ball (contact after the tackle may also be penalized if the tackle was made from a position where contact was inevitable). Further, the player with the ball may not deliberately use their body to push a defender out of the way.
Field players may not play the ball with their feet, but if the ball accidentally hits the feet, and the player gains no benefit from the contact, then the contact is not penalized. Although there has been a change in the wording of this rule from 1 January 2007, the current FIH umpires' briefing instructs umpires not to change the way they interpret this rule.[31]
Obstruction typically occurs in three circumstances – when a defender comes between the player with possession and the ball in order to prevent them tackling; when a defender's stick comes between the attacker's stick and the ball or makes contact with the attacker's stick or body; and also when blocking the opposition's attempt to tackle a teammate with the ball (called third party obstruction).
When the ball passes completely over the sidelines (on the sideline is still in), it is returned to play with a sideline hit, taken by a member of the team whose players were not the last to touch the ball before crossing the sideline. The ball must be placed on the sideline, with the hit taken from as near the place the ball went out of play as possible. If it crosses the back line after last touched by an attacker, a 15 m (16 yd) hit is awarded. A 15 m hit is also awarded for offences committed by the attacking side within 15 m of the end of the pitch they are attacking.
Set plays are often utilized for specific situations such as a penalty corner or free hit. For instance, many teams have penalty corner variations designed to beat the defence; a coach may have plays that send the ball between two defenders and let a player attack the opposing team's goal.
Free hits are awarded when offences are committed outside the scoring circles (the term 'free hit' is standard usage, but the ball need not be hit). The ball may be hit, pushed or lifted in any direction by the team offended against, although it may only be lifted with a flick or scoop, not with a hit (previous versions of the rules prohibited lifting the ball directly from a free hit). Opponents must move 5 m (5.5 yd) from the ball when a free hit is awarded. A free hit must be taken from within playing distance of the place of the offence for which it was awarded, and the ball must be stationary when the free hit is taken.
As mentioned above, a 15 m hit is awarded if an attacking player commits a foul forward of that line, or if the ball passes over the back line off an attacker. These free hits are taken in line with where the foul was committed (taking a line parallel with the sideline between where the offence was committed, or where the ball went out of play). When an attacking free hit is awarded within 5 m of the circle, all players, including the taker, must be 5 m from the circle, and everyone apart from the taker must be 5 m from the ball. An attacking free hit taken within the attacking 23 m (25 yd) area may not be hit straight into the circle; the ball must travel 5 m before entering it.
In February 2009 the FIH introduced, as a "Mandatory Experiment" for international competition, an updated version of the free-hit rule. The change allows a player taking a free hit to pass the ball to themselves. Importantly, this is not a "play on" situation, although to the untrained eye it may appear to be: the player must play the ball any distance in two separate motions before continuing as if it were open play. They may raise an aerial or overhead immediately as the second action, or use any other stroke permitted by the rules of field hockey. At high-school level this is called a self pass, and it was adopted in Pennsylvania in 2010 as a legal technique for putting the ball in play.
Also, all players (from both teams) must be at least 5 m from any free hit awarded to the attack within the 23 m area. The ball may not travel directly into the circle from a free hit to the attack within the 23 m area without first being touched by another player or being dribbled at least 5 m by a player making a "self-pass". These experimental rules apply to all free-hit situations, including sideline and corner hits. National associations may also choose to introduce these rules for their domestic competitions.
A free hit from the quarter line is awarded to the attacking team if the ball goes over the back-line after last being touched by a defender, provided they do not play it over the back-line deliberately, in which case a penalty corner is awarded. This free hit is played by the attacking team from a spot on the quarter line closest to where the ball went out of play. All the parameters of an attacking free hit within the attacking quarter of the playing surface apply.
The short corner, or penalty corner, is awarded for certain offences by the defending team: most commonly a foul by a defender inside the circle that does not prevent a probable goal, a deliberate foul by a defender within the 23 m area, or deliberately playing the ball over the back line.
Short corners begin with five defenders (usually including the keeper) positioned behind the back line and the ball placed at least 10 yards from the nearest goal post.[32] All other players in the defending team must be beyond the centre line, that is, not in their 'own' half of the pitch, until the ball is in play. Attacking players begin the play standing outside the scoring circle, except for one attacker who starts the corner by playing the ball from a mark 10 m either side of the goal (the circle has a 14.63 m radius). This player puts the ball into play by pushing or hitting it to the other attackers outside the circle; the ball must pass outside the circle and then be played back into the circle before the attackers may take a shot at the goal from which a goal can be scored. FIH rules do not forbid a shot at goal before the ball leaves the circle after being 'inserted', nor is a shot from outside the circle prohibited, but a goal cannot be scored if the ball has not first left the circle, and a shot from outside the circle can only score if the ball is played again by an attacking player before it enters the goal.
For safety reasons, the first shot of a penalty corner must not exceed 460 mm high (the height of the "backboard" of the goal) at the point it crosses the goal line if it is hit. However, if the ball is deemed to be below backboard height, the ball can be subsequently deflected above this height by another player (defender or attacker), providing that this deflection does not lead to danger. Note that the "Slap" stroke (a sweeping motion towards the ball, where the stick is kept on or close to the ground when striking the ball) is classed as a hit, and so the first shot at goal must be below backboard height for this type of shot also.
If the first shot at goal in a short corner situation is a push, flick or scoop, in particular the drag flick (which has become popular at international and national league standards), the shot is permitted to rise above the height of the backboard, as long as the shot is not deemed dangerous to any opponent. This form of shooting was developed because it is not height restricted in the same way as the first hit shot at the goal and players with good technique are able to drag-flick with as much power as many others can hit a ball.
A penalty stroke is awarded when a defender commits a foul in the circle (accidental or otherwise) that prevents a probable goal or commits a deliberate foul in the circle or if defenders repeatedly run from the back line too early at a penalty corner. The penalty stroke is taken by a single attacker in the circle, against the goalkeeper, from a spot 6.4 m from goal. The ball is played only once at goal by the attacker using a push, flick or scoop stroke. If the shot is saved, play is restarted with a 15 m hit to the defenders. When a goal is scored, play is restarted in the normal way.
According to the current Rules of Hockey 2019[33] issued by the FIH there are only two criteria for a dangerously played ball. The first is legitimate evasive action by an opponent (what constitutes legitimate evasive action is an umpiring judgment). The second is specific to the rule concerning a shot at goal at a penalty corner but is generally, if somewhat inconsistently, applied throughout the game and in all parts of the pitch: it is that a ball lifted above knee height and at an opponent who is within 5m of the ball is certainly dangerous.
The velocity of the ball is not mentioned in the rules concerning a dangerously played ball. A ball that hits a player above the knee may on some occasions not be penalized, this is at the umpire's discretion. A jab tackle, for example, might accidentally lift the ball above knee height into an opponent from close range but at such low velocity as not to be, in the opinion of the umpire, dangerous play. In the same way a high-velocity hit at very close range into an opponent, but below knee height, could be considered to be dangerous or reckless play in the view of the umpire, especially when safer alternatives are open to the striker of the ball.
A ball that has been lifted high so that it will fall among close opponents may be deemed potentially dangerous, and play may be stopped for that reason. A lifted ball that is falling to a player in clear space may be made potentially dangerous by the actions of an opponent closing to within 5 m of the receiver before the ball has been controlled to ground. This rule is often only loosely applied: the distance allowed is often only what might be described as playing distance, 2–3 m, and opponents tend to be permitted to close on the ball as soon as the receiver plays it. These unofficial variations are often based on the umpire's perception of the skill of the players, i.e. on the level of the game, in order to maintain game flow; umpires are in general instructed, in both Rules and Briefing, not to penalise when it is unnecessary to do so, and this too is a matter at the umpire's discretion.
The term "falling ball" is important in what may be termed encroaching offences. It is generally only considered an offence to encroach on an opponent receiving a lifted ball that has been lifted to above head height (although the height is not specified in rule) and is falling. So, for example, a lifted shot at the goal which is still rising as it crosses the goal line (or would have been rising as it crossed the goal line) can be legitimately followed up by any of the attacking team looking for a rebound.
In general, even potentially dangerous play is not penalised if an opponent is not disadvantaged by it or, obviously, injured by it so that they cannot continue. A personal penalty, that is a caution or a suspension, rather than a team penalty such as a free ball or a penalty corner, may be issued to the guilty party (many would say should be, or even must be, but again this is at the umpire's discretion) after an advantage allowed by the umpire has been played out in any situation where an offence has occurred, including dangerous play. Once advantage has been allowed, however, the umpire cannot call play back and award a team penalty.
It is not an offence to lift the ball over an opponent's stick (or body on the ground), provided that it is done with consideration for the safety of the opponent and not dangerously. For example, a skillful attacker may lift the ball over a defender's stick or prone body and run past them; however, if the attacker lifts the ball into or at the defender's body, this would almost certainly be regarded as dangerous.
It is not against the rules to bounce the ball on the stick and even to run with it while doing so, as long as that does not lead to a potentially dangerous conflict with an opponent who is attempting to make a tackle. For example, two players trying to play at the ball in the air at the same time, would probably be considered a dangerous situation and it is likely that the player who first put the ball up or who was so 'carrying' it would be penalised.
Dangerous play rules also apply to the usage of the stick when approaching the ball, making a stroke at it (replacing what was at one time referred to as the "sticks" rule, which once forbade the raising of any part of the stick above the shoulder during any play. This last restriction has been removed but the stick should still not be used in a way that endangers an opponent) or attempting to tackle, (fouls relating to tripping, impeding and obstruction). The use of the stick to strike an opponent will usually be much more severely dealt with by the umpires than offences such as barging, impeding and obstruction with the body, although these are also dealt with firmly, especially when these fouls are intentional: field hockey is a non-contact game.
Players may not play or attempt to play at the ball above their shoulders unless trying to save a shot that could go into the goal, in which case they are permitted to stop the ball or deflect it safely away. A swing, as in a hit, at a high shot at the goal (or even wide of the goal) will probably be considered dangerous play if at opponents within 5 m and such a stroke would be contrary to rule in these circumstances anyway.
Within the English National League it is now a legal action to take a ball above shoulder height if completed using a controlled action.
Hockey uses a three-tier penalty card system of warnings and suspensions:
green card (warning with a 2-minute suspension)
yellow card (suspension of 5 or 10 minutes depending on the severity of the foul)
red card (permanent suspension)
If a coach is sent off, depending on local rules, a player may have to leave the field for the remaining length of the match.
In addition to their colours, field hockey penalty cards are often shaped differently, so they can be recognized easily. Green cards are normally triangular, yellow cards rectangular and red cards circular.
Unlike football, a player may receive more than one green or yellow card. However, they cannot receive the same card for the same offence (for example two yellows for dangerous play), and the second must always be a more serious card. In the case of a second yellow card for a different breach of the rules (for example a yellow for deliberate foot, and a second later in the game for dangerous play) the temporary suspension would be expected to be of considerably longer duration than the first. However, local playing conditions may mandate that cards are awarded only progressively, and not allow any second awards.
Umpires, if the free hit would have been in the attacking 23 m area, may upgrade the free hit to a penalty corner for dissent or other misconduct after the free hit has been awarded.
The object of the game is to play the ball into the attacking circle and, from there, hit, push or flick the ball into the goal to score. The team with more goals after 60 minutes wins the game. The playing time may be shortened, particularly when younger players are involved, or for some tournament play. If the game is played with a countdown clock, as in ice hockey, a goal only counts if the ball completely crosses the goal line and enters the goal before time expires, not if it merely leaves the stick in the act of shooting before time runs out.
In many competitions (such as regular club competition, or in pool games in FIH international tournaments such as the Olympics or the World Cup), a tied result stands and the overall competition standings are adjusted accordingly. Since March 2013, when tie breaking is required, the official FIH Tournament Regulations mandate no extra time: a classification match that ends in a tie goes directly to a penalty shoot-out.[34] However, many associations follow the previous procedure of two 7.5-minute periods of "golden goal" extra time, during which the game ends as soon as one team scores.
The FIH implemented a two-year rules cycle with the 2007–08 edition of the rules, with the intention that the rules be reviewed on a biennial basis. The 2009 rulebook was officially released in early March 2009 (effective 1 May 2009), however the FIH published the major changes in February. The current rule book is effective from 1 January 2019.
The FIH has adopted a policy of introducing major changes to the rules as "Mandatory Experiments": they must be played at international level, but are treated as experimental and are reviewed before the next rulebook is published, then either changed, approved as permanent rules, or deleted.
There are sometimes minor variations in rules from competition to competition; for instance, the duration of matches is often varied for junior competitions or for carnivals. Different national associations also have slightly differing rules on player equipment.
The new Euro Hockey League and the Olympics have made major alterations to the rules to aid television viewers, such as splitting the game into four quarters, and to try to improve player behavior, such as a two-minute suspension for green cards; the latter was also used in the 2010 World Cup and 2016 Olympics. In the United States, the NCAA has its own rules for inter-collegiate competitions; high school associations similarly play to different rules, usually using the rules published by the National Federation of State High School Associations (NFHS). This article assumes FIH rules unless otherwise stated. USA Field Hockey produces an annual summary of the differences.[35]
In the United States, the games at the junior high level consist of four 12-minute periods, while the high-school level consists of two 30-minute periods. Many private American schools play 12-minute quarters, and some have adopted FIH rules rather than NFHS rules.
Players are required to wear mouth guards and shin guards in order to play the game. There is also a newer rule requiring that certain types of sticks be used. In recent years, the NFHS rules have moved closer to FIH rules, but in 2011 a new rule requiring protective eyewear was introduced for the 2011 Fall season. A further clarification of the NFHS rule requiring protective eyewear states that "effective 1 January 2019, all eye protection shall be permanently labeled with the current ASTM 2713 standard for field hockey."[36] Metal 'cage style' goggles favored by US high school lacrosse and permitted in high school field hockey are prohibited under FIH rules.[37]
Each player carries a "stick" that normally measures between 80 and 95 cm (31–38 in); shorter or longer sticks are available. Sticks were traditionally made of wood but are now often made with fibreglass, kevlar or carbon fibre composites. Metal is forbidden in field hockey sticks, due to the risk of injury from sharp edges if the stick were to break. The stick has a rounded handle with a J-shaped hook at the bottom, and is flattened on the left side (when looking down the handle with the hook facing upwards). All sticks must be right-handed; left-handed ones are prohibited.
There was traditionally a slight curve (called the bow, or rake) from the top to bottom of the face side of the stick and another on the 'heel' edge to the top of the handle (usually made according to the angle at which the handle part was inserted into the splice of the head part of the stick), which assisted in the positioning of the stick head in relation to the ball and made striking the ball easier and more accurate.
The tight curve (Indian style) of the hook at the bottom of the stick is a relatively recent development. The older 'English' sticks had a longer bend, which made it very hard to use the stick on the reverse; for this reason players now use the tightly curved sticks.
The handle makes up about the top third of the stick. It is wrapped in a grip similar to that used on a tennis racket. The grip may be made of a variety of materials, including chamois leather, which improves grip in the wet and, when wrapped over a pre-existing grip, gives the stick a softer touch and different weighting.
It was discovered that increasing the depth of the face bow made it easier to achieve high speeds from the drag flick and made the stroke easier to execute. At first, after this feature was introduced, the Hockey Rules Board placed a limit of 50 mm on the maximum depth of bow over the length of the stick, but experience quickly demonstrated this to be excessive. The rules now limit this curve to 25 mm so as to limit the power with which the ball can be flicked.
Standard field hockey balls are hard spherical balls, made of solid plastic (sometimes over a cork core), and are usually white, although they can be any colour as long as they contrast with the playing surface. The balls have a diameter of 71.3–74.8 mm (2.81–2.94 in) and a mass of 156–163 g (5.5–5.7 oz). The ball is often covered with indentations to reduce aquaplaning that can cause an inconsistent ball speed on wet surfaces.
The 2007 rulebook saw major changes regarding goalkeepers. A fully equipped goalkeeper must wear a helmet, leg guards and kickers, and like all players, they must carry a stick. Goalkeepers may use either a field player's stick or a specialised goalkeeping stick provided always the stick is of legal dimensions. Usually field hockey goalkeepers also wear extensive additional protective equipment including chest guards, padded shorts, heavily padded hand protectors, groin protectors, neck protectors and arm guards. A goalie may not cross the 23 m line, the sole exception to this being if the goalkeeper is to take a penalty stroke at the other end of the field, when the clock is stopped. The goalkeeper can also remove their helmet for this action. While goalkeepers are allowed to use their feet and hands to clear the ball, like field players they may only use the one side of their stick. Slide tackling is permitted as long as it is with the intention of clearing the ball, not aimed at a player.
It is also possible for teams to field a full eleven outfield players and no goalkeeper at all. In that case no player may wear a helmet or other goalkeeping equipment, and no player may play the ball with any part of the body other than the stick. This may be used to gain a tactical advantage, for example if a team is trailing with only a short time to play, or to allow play to commence when no goalkeeper or kit is available.
The basic tactic in field hockey, as in association football and many other team games, is to outnumber the opponent in a particular area of the field at a moment in time. When in possession of the ball this temporary numerical superiority can be used to pass the ball around opponents so that they cannot effect a tackle because they cannot get within playing reach of the ball and to further use this numerical advantage to gain time and create clear space for making scoring shots on the opponent's goal. When not in possession of the ball numerical superiority is used to isolate and channel an opponent in possession and 'mark out' any passing options so that an interception or a tackle may be made to gain possession. Highly skillful players can sometimes get the better of more than one opponent and retain the ball and successfully pass or shoot but this tends to use more energy than quick early passing.
If the team communicates throughout play, every player has a role depending on their relationship to the ball: players on the ball (offensively, the ball carrier; defensively, the player applying pressure), support players, and movement players.
The main methods by which the ball is moved around the field are (a) passing, (b) pushing the ball and running with it controlled to the front or right of the body, and (c) "dribbling", where the player controls the ball with the stick and moves in various directions with it to elude opponents. To make a pass, the ball may be propelled with a pushing stroke, where the player uses their wrists to push the stick head through the ball while the stick head is in contact with it; with the "flick" or "scoop", similar to the push but with additional arm, leg and rotational actions to lift the ball off the ground; or with the "hit", where a swing is taken at the ball, often making very forceful contact and propelling the ball at velocities in excess of 70 mph (110 km/h). To produce a powerful hit, usually for travel over long distances or shooting at the goal, the stick is raised higher and swung with maximum power at the ball, a stroke sometimes known as a "drive".
Tackles are made by placing the stick into the path of the ball or playing the stick head or shaft directly at the ball. To increase the effectiveness of the tackle, players will often place the entire stick close to the ground horizontally, thus representing a wider barrier. To avoid the tackle, the ball carrier will either pass the ball to a teammate using any of the push, flick, or hit strokes, or attempt to maneuver or "drag" the ball around the tackle, trying to deceive the tackler.
In recent years, the penalty corner has gained importance as a goal-scoring opportunity, particularly with the technical development of the drag flick. Tactics at penalty corners to set up a shot with a drag flick or a hit involve various complex plays, including multiple passes before a deflection towards the goal is made, but the most common method of shooting is the direct flick or hit at the goal.
At the highest level, field hockey is a fast moving, highly skilled game, with players using fast moves with the stick, quick accurate passing, and hard hits, in attempts to keep possession and move the ball towards the goal. Tackling with physical contact and otherwise physically obstructing players is not permitted. Some of the tactics used resemble football (soccer), but with greater ball speed.
With the 2009 changes to the rules regarding free hits in the attacking 23m area, the common tactic of hitting the ball hard into the circle was forbidden. Although at higher levels this was considered tactically risky and low-percentage at creating scoring opportunities, it was used with some effect to 'win' penalty corners by forcing the ball onto a defender's foot or to deflect high (and dangerously) off a defender's stick. The FIH felt it was a dangerous practice that could easily lead to raised deflections and injuries in the circle, which is often crowded at a free-hit situation, and outlawed it.
The two biggest field hockey tournaments are the Olympic Games tournament and the Hockey World Cup, which is also held every four years. Apart from these, there is the Champions Trophy, held each year for the six top-ranked teams. Field hockey has also been played at the Commonwealth Games since 1998. Among the men, India leads in Olympic competition, having won eight golds (six of them successive). Among the women, Australia and the Netherlands each have three Olympic golds, while the Netherlands has won the World Cup six times. The Sultan Azlan Shah Hockey Tournament and the Sultan Ibrahim Ismail Hockey Tournament for junior teams, both held annually in Malaysia, are becoming prominent field hockey tournaments where teams from around the world participate.
India and Pakistan dominated men's hockey until the early 1980s, India winning eight Olympic golds and Pakistan three of the first five World Cups, but both have become less prominent with the ascendancy of Belgium, the Netherlands, Germany, New Zealand, Australia, and Spain since the late 1980s, as grass playing surfaces were replaced with artificial turf (which conferred increased importance on athleticism). Other notable men's nations include Argentina, England (who combine with other British "Home Nations" to form the Great Britain side at Olympic events) and South Korea. Despite their recent drop in the international rankings, Pakistan still holds the record of four World Cup wins.
The Netherlands, Australia and Argentina are the most successful national teams among women. The Netherlands was the predominant women's team before field hockey was added to the Olympic programme. In the early 1990s, Australia emerged as the strongest women's country, although the retirement of a number of players weakened the team. Argentina improved its play in the 2000s, heading the FIH rankings in 2003, 2010 and 2013. Other prominent women's teams are China, South Korea, Germany and India.
As of November 2017, Argentina's men's team and the Netherlands' women's team lead the FIH world rankings.
In recent years, Belgium has emerged as a leading nation, with a World Championship title (2018), a European Championship title (2019), an Olympic silver medal (2016) and first place in the FIH men's world ranking.
This is a list of the major international field hockey tournaments, in chronological order. Tournaments included are:
Although invitational or not open to all countries, the following are also considered international tournaments:
As the name suggests, Hockey5s is a hockey variant which features five players on each team (which must include a goalkeeper). The field of play is 55 m long and 41.70 m wide, approximately half the size of a regular pitch. Few additional markings are needed as there is no penalty circle nor penalty corners; shots can be taken from anywhere on the pitch. Penalty strokes are replaced by a "challenge", like the one-on-one method used in a penalty shoot-out. The duration of the match is three 12-minute periods with an interval of two minutes between periods; golden goal periods are multiple 5-minute periods. The rules are simpler, and it is intended that the game is faster, creating more shots on goal with less play in midfield, and more attractive to spectators.[38]
An Asian qualification tournament for two places at the 2014 Youth Olympic Games was the first time an FIH event used the Hockey5s format. Hockey5s was also used for the Youth Olympic hockey tournament, and at the Pacific Games in 2015.
Hockey features in F. J. Campbell's 2018 novel No Number Nine, the final chapters of which are set at the Sydney 2000 Olympics.[39] Field hockey has also featured prominently in Indian films such as Chak De! India and Gold.
en/259.html.txt
ADDED
@@ -0,0 +1,102 @@
A leap year (also known as an intercalary year or bissextile year) is a calendar year that contains an additional day (or, in the case of a lunisolar calendar, a month) added to keep the calendar year synchronized with the astronomical year or seasonal year.[1] Because astronomical events and seasons do not repeat in a whole number of days, calendars that have the same number of days in each year drift over time with respect to the event that the year is supposed to track. By inserting (called intercalating in technical terminology) an additional day or month into the year, the drift can be corrected. A year that is not a leap year is a common year.
For example, in the Gregorian calendar, each leap year has 366 days instead of 365, by extending February to 29 days rather than the common 28. These extra days occur in each year which is an integer multiple of 4 (except for years evenly divisible by 100, which are not leap years unless evenly divisible by 400). Similarly, in the lunisolar Hebrew calendar, Adar Aleph, a 13th lunar month, is added seven times every 19 years to the twelve lunar months in its common years to keep its calendar year from drifting through the seasons. In the Bahá'í Calendar, a leap day is added when needed to ensure that the following year begins on the March equinox.
The term leap year probably comes from the fact that a fixed date in the Gregorian calendar normally advances one day of the week from one year to the next, but the day of the week in the 12 months following the leap day (from March 1 through February 28 of the following year) will advance two days due to the extra day, thus leaping over one day in the week.[2][3] For example, Christmas Day (December 25) fell on a Sunday in 2016, Monday in 2017, Tuesday in 2018 and Wednesday in 2019, but then will leap over Thursday to fall on a Friday in 2020.
The length of a day is also occasionally corrected by inserting a leap second into Coordinated Universal Time (UTC) because of variations in Earth's rotation period. Unlike leap days, leap seconds are not introduced on a regular schedule because variations in the length of the day are not entirely predictable.
Leap years can present a problem in computing, known as the leap year bug, when a year is not correctly identified as a leap year or when February 29 is not handled correctly in logic that accepts or manipulates dates.
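A typical instance of the bug is testing only divisibility by four. The following minimal Python sketch (illustrative only; the function names are not from any particular codebase) contrasts the naive test with the full Gregorian rule:

def is_leap_naive(year):
    # Buggy: treats every multiple of 4 as a leap year, so centurial
    # years such as 1900 and 2100 are wrongly reported as leap years.
    return year % 4 == 0

def is_leap_gregorian(year):
    # Correct Gregorian rule: multiples of 4 are leap years, except
    # centurial years, unless they are also divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

for y in (1900, 2000, 2024, 2100):
    print(y, is_leap_naive(y), is_leap_gregorian(y))

The two functions disagree on 1900 and 2100, which is exactly where naive date-handling code misbehaves.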
In the Gregorian calendar, the standard calendar in most of the world, most years that are multiples of 4 are leap years. In each leap year, the month of February has 29 days instead of 28. Adding one extra day in the calendar every four years compensates for the fact that a period of 365 days is shorter than a tropical year by almost 6 hours.[4] Some exceptions to this basic rule are required since the duration of a tropical year is slightly less than 365.25 days. The Gregorian reform modified the Julian calendar's scheme of leap years as follows:
Every year that is exactly divisible by four is a leap year, except for years that are exactly divisible by 100, but these centurial years are leap years if they are exactly divisible by 400. For example, the years 1700, 1800, and 1900 are not leap years, but the years 1600 and 2000 are.[5]
Over a period of four centuries, the accumulated error of adding a leap day every four years amounts to about three extra days. The Gregorian calendar therefore drops three leap days every 400 years, which is the length of its leap cycle. This is done by dropping February 29 in the three century years (multiples of 100) that cannot be exactly divided by 400.[6][7] The years 1600, 2000 and 2400 are leap years, while 1700, 1800, 1900, 2100, 2200 and 2300 are not leap years. By this rule, the average number of days per year is 365 + 1⁄4 − 1⁄100 + 1⁄400 = 365.2425.[8] The rule can be applied to years before the Gregorian reform (the proleptic Gregorian calendar), if astronomical year numbering is used.[9]
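The average can be verified with a short worked calculation: in each 400-year cycle there are 100 multiples of 4, of which 4 are centurial years and only 1 of those (the multiple of 400) remains a leap year, giving 100 − 4 + 1 = 97 leap days. A full cycle therefore contains 400 × 365 + 97 = 146,097 days, and 146,097 / 400 = 365.2425 days per year, matching the figure above.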
The Gregorian calendar was designed to keep the vernal equinox on or close to March 21, so that the date of Easter (celebrated on the Sunday after the ecclesiastical full moon that falls on or after March 21) remains close to the vernal equinox.[10][8] The "Accuracy" section of the "Gregorian calendar" article discusses how well the Gregorian calendar achieves this design goal, and how well it approximates the tropical year.
The following pseudocode determines whether a year is a leap year or a common year in the Gregorian calendar (and in the proleptic Gregorian calendar before 1582). The year variable being tested is the integer representing the number of the year in the Gregorian calendar.
if (year is not divisible by 4) then (it is a common year)
else if (year is not divisible by 100) then (it is a leap year)
else if (year is not divisible by 400) then (it is a common year)
else (it is a leap year)
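Translated directly into Python (a sketch; the function name is chosen for illustration), the same cascade reads:

def is_leap_year(year):
    # Mirrors the pseudocode above for the (proleptic) Gregorian calendar.
    if year % 4 != 0:
        return False   # not divisible by 4: common year
    elif year % 100 != 0:
        return True    # divisible by 4 but not by 100: leap year
    elif year % 400 != 0:
        return False   # centurial year not divisible by 400: common year
    else:
        return True    # divisible by 400: leap year

For example, is_leap_year(1700) and is_leap_year(1900) return False, while is_leap_year(1600) and is_leap_year(2000) return True.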
The algorithm applies to proleptic Gregorian calendar years before 1, but only if the year is expressed with astronomical year numbering. It is not valid for the BC or BCE notation. The algorithm is not necessarily valid for years in the Julian calendar, such as years before 1752 in the British Empire. The year 1700 was a leap year in the Julian calendar, but not in the Gregorian calendar.
February 29 is a date that usually occurs every four years, and is called the leap day. This day is added to the calendar in leap years as a corrective measure, because the Earth does not orbit the sun in precisely 365 days.
The Gregorian calendar is a modification of the Julian calendar first used by the Romans. The Roman calendar originated as a lunisolar calendar and named many of its days after the syzygies of the moon: the new moon (Kalendae or calends, hence "calendar") and the full moon (Idus or ides). The Nonae or nones was not the first quarter moon but was exactly one nundina or Roman market week of nine days before the ides, inclusively counting the ides as the first of those nine days. This is what we would call a period of eight days. In 1825, Ideler believed that the lunisolar calendar was abandoned about 450 BC by the decemvirs, who implemented the Roman Republican calendar, used until 46 BC. The days of these calendars were counted down (inclusively) to the next named day, so February 24 was ante diem sextum Kalendas Martias ("the sixth day before the calends of March") often abbreviated a. d. VI Kal. Mart. The Romans counted days inclusively in their calendars, so this was actually the fifth day before March 1 when counted in the modern exclusive manner (not including the starting day).[11]
The Republican calendar's intercalary month was inserted on the first or second day after the Terminalia (a. d. VII Kal. Mar., February 23). The remaining days of Februarius were dropped. This intercalary month, named Intercalaris or Mercedonius, contained 27 days. The religious festivals that were normally celebrated in the last five days of February were moved to the last five days of Intercalaris. Because only 22 or 23 days were effectively added, not a full lunation, the calends and ides of the Roman Republican calendar were no longer associated with the new moon and full moon.
The Julian calendar, which was developed in 46 BC by Julius Caesar, and became effective in 45 BC, distributed an extra ten days among the months of the Roman Republican calendar. Caesar also replaced the intercalary month by a single intercalary day, located where the intercalary month used to be. To create the intercalary day, the existing ante diem sextum Kalendas Martias (February 24) was doubled, producing ante diem bis sextum Kalendas Martias. Hence, the year containing the doubled day was a bissextile (bis sextum, "twice sixth") year. For legal purposes, the two days of the bis sextum were considered to be a single day, with the second half being intercalated; but in common practice by 238, when Censorinus wrote, the intercalary day was followed by the last five days of February, a. d. VI, V, IV, III and pridie Kal. Mart. (the days numbered 24, 25, 26, 27, and 28 from the beginning of February in a common year), so that the intercalated day was the first half of the doubled day. Thus the intercalated day was effectively inserted between the 23rd and 24th days of February. All later writers, including Macrobius about 430, Bede in 725, and other medieval computists (calculators of Easter), continued to state that the bissextum (bissextile day) occurred before the last five days of February.
Until 1970, the Roman Catholic Church always celebrated the feast of Saint Matthias on a. d. VI Kal. Mart., so if the days were numbered from the beginning of the month, it was named February 24 in common years, but the presence of the bissextum in a bissextile year immediately before a. d. VI Kal. Mart. shifted the latter day to February 25 in leap years, with the Vigil of St. Matthias shifting from February 23 to the leap day of February 24. This shift did not take place in pre-Reformation Norway and Iceland; Pope Alexander III ruled that either practice was lawful (Liber Extra, 5. 40. 14. 1). Other feasts normally falling on February 25–28 in common years are also shifted to the following day in a leap year (although they would be on the same day according to the Roman notation). The practice is still observed by those who use the older calendars.
The Revised Bengali Calendar of Bangladesh and the Indian National Calendar organise their leap years so that every leap day is close to a February 29 in the Gregorian calendar and vice versa. This makes it easy to convert dates to or from Gregorian.
The Thai solar calendar uses the Buddhist Era (BE), but has been synchronized with the Gregorian since AD 1941.
The Julian calendar was instituted in 45 BC at the order of Julius Caesar, and the original intent was to make every fourth year a leap year, but this was not carried out correctly. Augustus ordered some leap years to be omitted to correct the problem, and by AD 8 the leap years were being observed every fourth year, and the observances were consistent up to and including modern times.
From AD 8 the Julian calendar received an extra day added to February in years that are multiples of 4 (although the AD year numbering system was not introduced until AD 525).
The Coptic calendar and Ethiopian calendar also add an extra day to the end of the year once every four years before a Julian 29-day February.
This rule gives an average year length of 365.25 days. However, it is 11 minutes longer than a tropical year. This means that the vernal equinox moves a day earlier in the calendar about every 131 years.
The Revised Julian calendar adds an extra day to February in years that are multiples of four, except for years that are multiples of 100 that do not leave a remainder of 200 or 600 when divided by 900. This rule agrees with the rule for the Gregorian calendar until 2799. The first year that dates in the Revised Julian calendar will not agree with those in the Gregorian calendar will be 2800, because it will be a leap year in the Gregorian calendar but not in the Revised Julian calendar.
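As a sketch (the function name is illustrative), the Revised Julian rule can be written:

def is_leap_revised_julian(year):
    # Multiples of 4 are leap years, except multiples of 100, unless
    # dividing the year by 900 leaves a remainder of 200 or 600.
    if year % 4 != 0:
        return False
    if year % 100 != 0:
        return True
    return year % 900 in (200, 600)

Since 2000 % 900 == 200, the year 2000 is a leap year in both calendars, while 2800 % 900 == 100, so 2800 is a common year here but a leap year in the Gregorian calendar, as noted above.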
This rule gives an average year length of 365.242222 days. This is a very good approximation to the mean tropical year, but because the vernal equinox year is slightly longer, the Revised Julian calendar for the time being does not do as good a job as the Gregorian calendar at keeping the vernal equinox on or close to March 21.
The Chinese calendar is lunisolar, so a leap year has an extra month, often called an embolismic month after the Greek word for it. In the Chinese calendar the leap month is added according to a rule which ensures that month 11 is always the month that contains the northern winter solstice. The intercalary month takes the same number as the preceding month; for example, if it follows the second month (二月) then it is simply called "leap second month" i.e. simplified Chinese: 闰二月; traditional Chinese: 閏二月; pinyin: rùn'èryuè.
The Hebrew calendar is lunisolar with an embolismic month. This extra month is called Adar Alef (first Adar) and is added before Adar, which then becomes Adar Bet (second Adar). According to the Metonic cycle, this is done seven times every nineteen years (specifically, in years 3, 6, 8, 11, 14, 17, and 19). This is to ensure that Passover (Pesah) is always in the spring as required by the Torah (Pentateuch) in many verses[12] relating to Passover.
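The pattern of years 3, 6, 8, 11, 14, 17 and 19 within the cycle has a well-known closed form; as an illustrative sketch (not an official algorithm of the calendar):

def is_hebrew_leap_year(year):
    # A Hebrew year is embolismic exactly when (7 * year + 1) mod 19 < 7,
    # which reproduces years 3, 6, 8, 11, 14, 17 and 19 of the Metonic cycle.
    return (7 * year + 1) % 19 < 7

Here year is the Anno Mundi year number; for example, 5784 satisfies the test and indeed contained an Adar Alef.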
In addition, the Hebrew calendar has postponement rules that postpone the start of the year by one or two days. These postponement rules reduce the number of different combinations of year length and starting days of the week from 28 to 14, and regulate the location of certain religious holidays in relation to the Sabbath. In particular, the first day of the Hebrew year can never be Sunday, Wednesday or Friday. This rule is known in Hebrew as "lo adu rosh" (לא אד״ו ראש), i.e., "Rosh [ha-Shanah, first day of the year] is not Sunday, Wednesday or Friday" (as the Hebrew word adu is written by three Hebrew letters signifying Sunday, Wednesday and Friday). Accordingly, the first day of Passover is never Monday, Wednesday or Friday. This rule is known in Hebrew as "lo badu Pesah" (לא בד״ו פסח), which has a double meaning — "Passover is not a legend", but also "Passover is not Monday, Wednesday or Friday" (as the Hebrew word badu is written by three Hebrew letters signifying Monday, Wednesday and Friday).
One reason for this rule is that Yom Kippur, the holiest day in the Hebrew calendar and the tenth day of the Hebrew year, now must never be adjacent to the weekly Sabbath (which is Saturday), i.e., it must never fall on Friday or Sunday, in order not to have two adjacent Sabbath days. However, Yom Kippur can still be on Saturday. A second reason is that Hoshana Rabbah, the 21st day of the Hebrew year, will never be on Saturday. These rules for the Feasts do not apply to the years from the Creation to the deliverance of the Hebrews from Egypt under Moses. It was at that time (cf. Exodus 13) that the God of Abraham, Isaac and Jacob gave the Hebrews their "Law" including the days to be kept holy and the feast days and Sabbaths.
Years consisting of 12 months have between 353 and 355 days. In a k'sidra ("in order") 354-day year, months have alternating 30 and 29 day lengths. In a chaser ("lacking") year, the month of Kislev is reduced to 29 days. In a malei ("filled") year, the month of Marcheshvan is increased to 30 days. 13-month years follow the same pattern, with the addition of the 30-day Adar Alef, giving them between 383 and 385 days.
The observed and calculated versions of the Islamic calendar do not have regular leap days, even though both have lunar months containing 29 or 30 days, generally in alternating order. However, the tabular Islamic calendar used by Islamic astronomers during the Middle Ages and still used by some Muslims does have a regular leap day added to the last month of the lunar year in 11 years of a 30-year cycle.[13] This additional day is found at the end of the last month, Dhu 'l-Hijja, which is also the month of the Hajj.[14]
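One widespread convention places the 11 leap years at positions 2, 5, 7, 10, 13, 16, 18, 21, 24, 26 and 29 of the 30-year cycle, which admits a compact test (a sketch; other tabular variants shift one or two of these positions):

def is_tabular_islamic_leap(year):
    # year is the Hijri year number; the leap day, when present,
    # is added to Dhu 'l-Hijja, the final month of the lunar year.
    return (11 * year + 14) % 30 < 11

Over any 30 consecutive years this returns True exactly 11 times, matching the cycle described above.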
The Hijri-Shamsi calendar, also adopted by the Ahmadiyya Community, is based on solar calculations and is similar to the Gregorian calendar in its structure with the exception that the first year starts with Hijra.[15]
The Bahá'í calendar is a solar calendar composed of 19 months of 19 days each (361 days). Years begin at Naw-Rúz, on the vernal equinox, on or about March 21. A period of "Intercalary Days", called Ayyam-i-Ha, are inserted before the 19th month. This period normally has 4 days, but an extra day is added when needed to ensure that the following year starts on the vernal equinox. This is calculated and known years in advance.
The Iranian calendar is an observational calendar that starts on the spring equinox and adds a single intercalated day to the last month (Esfand) once every four or five years; the first leap year occurs as the fifth year of the typical 33-year cycle and the remaining leap years occur every four years through the remainder of the 33-year cycle. This system has less periodic deviation or jitter from its mean year than the Gregorian calendar, and operates on the simple rule that the vernal equinox always falls in the 24-hour period ending at noon on New Year's Day.[16] The 33-year period is not completely regular; every so often the 33-year cycle will be broken by a cycle of 29 years.[17] A similar rule has been proposed to simplify the Gregorian calendar.[18] The centennial leap years would be spaced so that in years giving remainder 3 on division by 100 the dynamic mean sun[19] passes through the equinox in the 24-hour period ending at 1 PM GMT on 19 March. The system would be introduced when for the first time the dynamic mean sun is due to pass through the equinox before 1 PM GMT on 18 March in a year giving remainder 3 on division by 400. The immediately preceding centennial leap year will be cancelled. The first cancellation will probably be AD 8400 and the next two centennial leap years thereafter will probably be AD 8800 and AD 9700.[20]
In Ireland and Britain, it is a tradition that women may propose marriage only in leap years. While it has been claimed that the tradition was initiated by Saint Patrick or Brigid of Kildare in 5th century Ireland, this is dubious, as the tradition has not been attested before the 19th century.[21] Supposedly, a 1288 law by Queen Margaret of Scotland (then age five and living in Norway), required that fines be levied if a marriage proposal was refused by the man; compensation was deemed to be a pair of leather gloves, a single rose, £1 and a kiss.[22] In some places the tradition was tightened to restricting female proposals to the modern leap day, February 29, or to the medieval (bissextile) leap day, February 24.
According to Felten: "A play from the turn of the 17th century, 'The Maydes Metamorphosis,' has it that 'this is leape year/women wear breeches.' A few hundred years later, breeches wouldn't do at all: Women looking to take advantage of their opportunity to pitch woo were expected to wear a scarlet petticoat — fair warning, if you will."[23]
In Finland, the tradition is that if a man refuses a woman's proposal on leap day, he should buy her the fabrics for a skirt.[24]
In France, since 1980, a satirical newspaper entitled La Bougie du Sapeur has been published only in leap years, on February 29.
In Greece, marriage in a leap year is considered unlucky.[25] One in five engaged couples in Greece will plan to avoid getting married in a leap year.[26]
In February 1988 the town of Anthony in Texas declared itself "leap year capital of the world", and an international leapling birthday club was started.[27]
[Image captions: woman capturing man with butterfly-net; women anxiously awaiting January 1; histrionically preparing.]
A person born on February 29 may be called a "leapling" or a "leaper".[28] In common years, they usually celebrate their birthdays on February 28. In some situations, March 1 is used as the birthday in a non-leap year, since it is the day following February 28.
Technically, a leapling will have fewer birthday anniversaries than their age in years. This phenomenon is exploited when a person claims to be only a quarter of their actual age, by counting their leap-year birthday anniversaries only: for example, in Gilbert and Sullivan's 1879 comic opera The Pirates of Penzance, Frederic the pirate apprentice discovers that he is bound to serve the pirates until his 21st birthday (that is, when he turns 88 years old, since 1900 was not a leap year) rather than until his 21st year.
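Frederic's predicament can be checked in a few lines, assuming (as is usually inferred from the libretto) a birth date of 29 February 1852:

def is_leap_gregorian(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Leap-day birthdays between Frederic's birth and 1940 inclusive.
birthdays = [y for y in range(1853, 1941) if is_leap_gregorian(y)]
print(len(birthdays), birthdays[-1])   # prints: 21 1940

Because 1900 is skipped, his 21st birthday does not arrive until 1940, when he is 88 years old.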
For legal purposes, legal birthdays depend on how local laws count time intervals.
The Civil Code of Taiwan, in force since October 10, 1929,[29] implies that the legal birthday of a leapling is February 28 in common years:
If a period fixed by weeks, months, and years does not commence from the beginning of a week, month, or year, it ends with the ending of the day which precedes the day of the last week, month, or year which corresponds to that on which it began to commence. But if there is no corresponding day in the last month, the period ends with the ending of the last day of the last month.[30]
Since 1990, Hong Kong has (non-retroactively) considered the legal birthday of a leapling to be March 1 in common years.[31]
en/2590.html.txt
ADDED
@@ -0,0 +1,227 @@
Ice hockey is a contact team sport played on ice, usually in a rink, in which two teams of skaters use their sticks to shoot a vulcanized rubber puck into their opponent's net to score goals. The sport is known to be fast-paced and physical, with teams usually fielding six players at a time: one goaltender, and five players who skate the span of the ice trying to control the puck and score goals against the opposing team.
Ice hockey is most popular in Canada, central and eastern Europe, the Nordic countries, Russia, and the United States. Ice hockey is the official national winter sport of Canada.[1] In addition, ice hockey is the most popular winter sport in Belarus, Croatia, the Czech Republic, Finland, Latvia, Russia, Slovakia, Sweden, and Switzerland. North America's National Hockey League (NHL) is the highest level for men's ice hockey and the strongest professional ice hockey league in the world. The Kontinental Hockey League (KHL) is the highest league in Russia and much of Eastern Europe. The International Ice Hockey Federation (IIHF) is the formal governing body for international ice hockey, with the IIHF managing international tournaments and maintaining the IIHF World Ranking. Worldwide, there are ice hockey federations in 76 countries.[2]
In Canada, the United States, Nordic countries, and some other European countries the sport is known simply as hockey; the name "ice hockey" is used in places where "hockey" more often refers to the more established field hockey, such as countries in South America, Asia, Africa, Australasia, and some European countries including the United Kingdom, Ireland and the Netherlands.[3]
Ice hockey is believed to have evolved from simple stick and ball games played in the 18th and 19th centuries in the United Kingdom and elsewhere. These games were brought to North America and several similar winter games using informal rules were developed, such as shinny and ice polo. The contemporary sport of ice hockey was developed in Canada, most notably in Montreal, where the first indoor hockey game was played on March 3, 1875. Some characteristics of that game, such as the length of the ice rink and the use of a puck, have been retained to this day. Amateur ice hockey leagues began in the 1880s, and professional ice hockey originated around 1900. The Stanley Cup, emblematic of ice hockey club supremacy, was first awarded in 1893 to recognize the Canadian amateur champion and later became the championship trophy of the NHL. In the early 1900s, the Canadian rules were adopted by the Ligue Internationale de Hockey Sur Glace, the precursor of the IIHF and the sport was played for the first time at the Olympics during the 1920 Summer Olympics.
In international competitions, the national teams of six countries (the Big Six) predominate: Canada, Czech Republic, Finland, Russia, Sweden and the United States. Of the 69 medals awarded all-time in men's competition at the Olympics, only seven medals were not awarded to one of those countries (or two of their precursors, the Soviet Union for Russia, and Czechoslovakia for the Czech Republic). In the annual Ice Hockey World Championships, 177 of 201 medals have been awarded to the six nations. Teams outside the Big Six have won only five medals in either competition since 1953.[4][5] The World Cup of Hockey is organized by the National Hockey League and the National Hockey League Players' Association (NHLPA), unlike the annual World Championships and quadrennial Olympic tournament, both run by the International Ice Hockey Federation. World Cup games are played under NHL rules and not those of the IIHF, and the tournament occurs prior to the NHL pre-season, allowing for all NHL players to be available, unlike the World Championships, which overlaps with the NHL's Stanley Cup playoffs. Furthermore, all 12 Women's Olympic and 36 IIHF World Women's Championship medals were awarded to one of these six countries. The Canadian national team or the United States national team have between them won every gold medal of either series.[6][7]
In England, field hockey has historically been called simply "hockey", and it is what the word's first appearances in print referred to. The first known mention spelled as "hockey" occurred in the 1773 book Juvenile Sports and Pastimes, to Which Are Prefixed, Memoirs of the Author: Including a New Mode of Infant Education, by Richard Johnson (Pseud. Master Michel Angelo), whose chapter XI was titled "New Improvements on the Game of Hockey".[8] The 1527 Statute of Galway banned a sport called "'hokie'—the hurling of a little ball with sticks or staves". A form of this word was thus being used in the 16th century, though much removed from its current usage.[9]
The belief that hockey was mentioned in a 1363 proclamation by King Edward III of England[10] is based on modern translations of the proclamation, which was originally in Latin and explicitly forbade the games "Pilam Manualem, Pedivam, & Bacularem: & ad Canibucam & Gallorum Pugnam".[11][12] The English historian and biographer John Strype did not use the word "hockey" when he translated the proclamation in 1720, instead translating "Canibucam" as "Cambuck";[13] this may have referred to either an early form of hockey or a game more similar to golf or croquet.[14]
According to the Austin Hockey Association, the word "puck" derives from the Scottish Gaelic puc or the Irish poc (to poke, punch or deliver a blow). "...The blow given by a hurler to the ball with his camán or hurley is always called a puck."[15]
Stick-and-ball games date back to pre-Christian times. In Europe, these games included the Irish game of hurling, the closely related Scottish game of shinty and versions of field hockey (including bandy ball, played in England). IJscolf, a game resembling colf on an ice-covered surface, was popular in the Low Countries between the Middle Ages and the Dutch Golden Age. It was played with a wooden curved bat (called a colf or kolf), a wooden or leather ball and two poles (or nearby landmarks), with the objective to hit the chosen point using the least number of strokes. A similar game (knattleikr) had been played for a thousand years or more by the Scandinavian peoples, as documented in the Icelandic sagas. Polo has been referred to as "hockey on horseback".[16] In England, field hockey developed in the late 17th century, and there is evidence that some games of field hockey took place on the ice.[16] These games of "hockey on ice" were sometimes played with a bung (a plug of cork or oak used as a stopper on a barrel). William Pierre Le Cocq stated, in a 1799 letter written in Chesham, England:
I must now describe to you the game of Hockey; we have each a stick turning up at the end. We get a bung. There are two sides one of them knocks one way and the other side the other way. If any one of the sides makes the bung reach that end of the churchyard it is victorious.[17]
A 1797 engraving unearthed by Swedish sport historians Carl Gidén and Patrick Houda shows a person on skates with a stick and bung on the River Thames, probably in December 1796.[18]
British soldiers and immigrants to Canada and the United States brought their stick-and-ball games with them and played them on the ice and snow of winter.
To while away their boredom and to stay in shape they [European colonial soldiers in North America] would play on the frozen rivers and lakes. The British [English] played bandy, the Scots played shinty and golf, the Irish, hurling, while the Dutch soldiers probably pursued ken jaegen. Curiosity led some to try lacrosse. Each group learned the game from the others. The most daring ventured to play on skates.
All these contributions nourished a game that was evolving. Hockey was invented by all these people, all these cultures, all these individuals. Hockey is the conclusion of all these beginnings.
In 1825, John Franklin wrote "The game of hockey played on the ice was the morning sport" on Great Bear Lake during one of his Arctic expeditions. A mid-1830s watercolour portrays New Brunswick lieutenant-governor Archibald Campbell and his family with British soldiers on skates playing a stick-on-ice sport. Captain R.G.A. Levinge, a British Army officer in New Brunswick during Campbell's time, wrote about "hockey on ice" on Chippewa Creek (a tributary of the Niagara River) in 1839. In 1843 another British Army officer in Kingston, Ontario wrote, "Began to skate this year, improved quickly and had great fun at hockey on the ice."[19] An 1859 Boston Evening Gazette article referred to an early game of hockey on ice in Halifax that year.[20] An 1835 painting by John O'Toole depicts skaters with sticks and bung on a frozen stream in the American state of West Virginia, at that time still part of Virginia.[18]
In the same era, the Mi'kmaq, a First Nations people of the Canadian Maritimes, also had a stick-and-ball game. Canadian oral histories describe a traditional stick-and-ball game played by the Mi'kmaq, and Silas Tertius Rand (in his 1894 Legends of the Micmacs) describes a Mi'kmaq ball game known as tooadijik. Rand also describes a game played (probably after European contact) with hurleys, known as wolchamaadijik.[21] Sticks made by the Mi'kmaq were used by the British for their games.
Early 19th-century paintings depict shinney (or "shinny"), an early form of hockey with no standard rules which was played in Nova Scotia.[22] Many of these early games absorbed the physical aggression of what the Onondaga called dehuntshigwa'es (lacrosse).[23] Shinney was played on the St. Lawrence River at Montreal and Quebec City, and in Kingston[19] and Ottawa. The number of players was often large. To this day, shinney (derived from "shinty") is a popular Canadian[24] term for an informal type of hockey, either ice or street hockey.
Thomas Chandler Haliburton, in The Attache: Second Series (published in 1844), imagined a dialogue between two of the novel's characters which mentions playing "hurly on the long pond on the ice". This has been interpreted by some historians from Windsor, Nova Scotia as reminiscent of the days when the author was a student at King's College School in that town in 1810 and earlier.[20][21] Based on Haliburton's quote, claims were made that modern hockey was invented in Windsor, Nova Scotia, by King's College students and perhaps named after an individual ("Colonel Hockey's game").[25] Others claim that the origins of hockey come from games played in the area of Dartmouth and Halifax in Nova Scotia. However, several references have been found to hurling and shinty being played on the ice long before the earliest references from both Windsor and Dartmouth/Halifax,[26] and the word "hockey" was used to designate a stick-and-ball game at least as far back as 1773, as it was mentioned in the book Juvenile Sports and Pastimes, to Which Are Prefixed, Memoirs of the Author: Including a New Mode of Infant Education by Richard Johnson (Pseud. Master Michel Angelo), whose chapter XI was titled "New Improvements on the Game of Hockey".[27]
While the game's origins lie elsewhere, Montreal is at the centre of the development of the sport of contemporary ice hockey, and is recognized as the birthplace of organized ice hockey.[28] On March 3, 1875, the first organized indoor game was played at Montreal's Victoria Skating Rink between two nine-player teams, including James Creighton and several McGill University students. Instead of a ball or bung, the game featured a "flat circular piece of wood"[29] (to keep it in the rink and to protect spectators). The goal posts were 8 feet (2.4 m) apart[29] (today's goals are six feet wide).
In 1876, games played in Montreal were "conducted under the 'Hockey Association' rules";[30] the Hockey Association was England's field hockey organization. In 1877, The Gazette (Montreal) published a list of seven rules, six of which were largely based on six of the Hockey Association's twelve rules, with only minor differences (even the word "ball" was kept); the one added rule explained how disputes should be settled.[31] The McGill University Hockey Club, the first ice hockey club, was founded in 1877[32] (followed by the Quebec Hockey Club in 1878 and the Montreal Victorias in 1881).[33] In 1880, the number of players per side was reduced from nine to seven.[8]
The number of teams grew, enough to hold the first "world championship" of ice hockey at Montreal's annual Winter Carnival in 1883. The McGill team won the tournament and was awarded the Carnival Cup.[34] The game was divided into thirty-minute halves. The positions were now named: left and right wing, centre, rover, point and cover-point, and goaltender. In 1886, the teams competing at the Winter Carnival organized the Amateur Hockey Association of Canada (AHAC), and played a season comprising "challenges" to the existing champion.[35]
In Europe, it is believed that in 1885 the Oxford University Ice Hockey Club was formed to play the first Ice Hockey Varsity Match against traditional rival Cambridge in St. Moritz, Switzerland; however, this is undocumented. The match was won by the Oxford Dark Blues, 6–0;[36][37] the first photographs and team lists date from 1895.[38] This rivalry continues, claiming to be the oldest hockey rivalry in history; a similar claim is made about the rivalry between Queen's University at Kingston and Royal Military College of Kingston, Ontario. Since 1986, considered the 100th anniversary of the rivalry, teams of the two colleges play for the Carr-Harris Cup.[39]
In 1888, the Governor General of Canada, Lord Stanley of Preston (whose sons and daughter were hockey enthusiasts), first attended the Montreal Winter Carnival tournament and was impressed with the game. In 1892, realizing that there was no recognition for the best team in Canada (although a number of leagues had championship trophies), he purchased a silver bowl for use as a trophy. The Dominion Hockey Challenge Cup (which later became known as the Stanley Cup) was first awarded in 1893 to the Montreal Hockey Club, champions of the AHAC; it continues to be awarded annually to the National Hockey League's championship team.[40] Stanley's son Arthur helped organize the Ontario Hockey Association, and Stanley's daughter Isobel was one of the first women to play ice hockey.
By 1893, there were almost a hundred teams in Montreal alone; in addition, there were leagues throughout Canada. Winnipeg hockey players used cricket pads to better protect the goaltender's legs; they also introduced the "scoop" shot, or what is now known as the wrist shot. William Fairbrother, from Ontario, Canada is credited with inventing the ice hockey net in the 1890s.[41] Goal nets became a standard feature of the Canadian Amateur Hockey League (CAHL) in 1900. Left and right defence began to replace the point and cover-point positions in the OHA in 1906.[42]
In the United States, ice polo, played with a ball rather than a puck, was popular during this period; however, by 1893 Yale University and Johns Hopkins University held their first ice hockey matches.[43] American financier Malcolm Greene Chace is credited with being the father of hockey in the United States.[44] In 1892, as an amateur tennis player, Chace visited Niagara Falls, New York for a tennis match, where he met some Canadian hockey players. Soon afterwards, Chace put together a team of men from Yale, Brown, and Harvard, and toured across Canada as captain of this team.[44] The first collegiate hockey match in the United States was played between Yale University and Johns Hopkins in Baltimore. Yale, led by captain Chace, beat Hopkins, 2–1.[45] In 1896, the first ice hockey league in the US was formed. The US Amateur Hockey League was founded in New York City, shortly after the opening of the artificial-ice St. Nicholas Rink.
Lord Stanley's five sons were instrumental in bringing ice hockey to Europe, defeating a court team (which included the future Edward VII and George V) at Buckingham Palace in 1895.[46] By 1903, a five-team league had been founded. The Ligue Internationale de Hockey sur Glace was founded in 1908 to govern international competition, and the first European championship was won by Great Britain in 1910. The sport grew further in Europe in the 1920s, after ice hockey became an Olympic sport. Many bandy players switched to hockey so as to be able to compete in the Olympics.[47][48] Bandy remained popular in the Soviet Union, which only started its ice hockey program in the 1950s. In the mid-20th century, the Ligue became the International Ice Hockey Federation.[49]
As the popularity of ice hockey as a spectator sport grew, earlier rinks were replaced by larger rinks. Most of the early indoor ice rinks have been demolished; Montreal's Victoria Rink, built in 1862, was demolished in 1925.[50] Many older rinks succumbed to fire, such as Denman Arena, Dey's Arena, Quebec Skating Rink and Montreal Arena, a hazard of the buildings' wood construction. The Stannus Street Rink in Windsor, Nova Scotia (built in 1897) may be the oldest still in existence; however, it is no longer used for hockey. The Aberdeen Pavilion (built in 1898) in Ottawa was used for hockey in 1904 and is the oldest existing facility that has hosted Stanley Cup games.
The oldest indoor ice hockey arena still in use today for hockey is Boston's Matthews Arena, which was built in 1910. It has been modified extensively several times in its history and is used today by Northeastern University for hockey and other sports. It was the original home rink of the Boston Bruins professional team,[51] itself the oldest United States-based team in the NHL, starting play in the league in today's Matthews Arena on December 1, 1924. Madison Square Garden in New York City, built in 1968, is the oldest continuously-operating arena in the NHL.[52]
Professional hockey has existed since the early 20th century. By 1902, the Western Pennsylvania Hockey League was the first to employ professionals. The league joined with teams in Michigan and Ontario to form the first fully professional league—the International Professional Hockey League (IPHL)—in 1904. The WPHL and IPHL hired players from Canada; in response, Canadian leagues began to pay players (who played with amateurs). The IPHL, cut off from its largest source of players, disbanded in 1907. By then, several professional hockey leagues were operating in Canada (with leagues in Manitoba, Ontario and Quebec).
In 1910, the National Hockey Association (NHA) was formed in Montreal. The NHA would further refine the rules: dropping the rover position, dividing the game into three 20-minute periods and introducing minor and major penalties. After re-organizing as the National Hockey League in 1917, the league expanded into the United States, starting with the Boston Bruins in 1924.
Professional hockey leagues developed later in Europe, but amateur leagues leading to national championships were in place. One of the first was the Swiss National League A, founded in 1916. Today, professional leagues have been introduced in most countries of Europe. Top European leagues include the Kontinental Hockey League, the Czech Extraliga, the Finnish Liiga and the Swedish Hockey League.
While the general characteristics of the game stay the same wherever it is played, the exact rules depend on the particular code of play being used. The two most important codes are those of the IIHF[53] and the NHL.[54] Both of these codes, and others, originated from the Canadian rules of ice hockey of the early 20th century.
Ice hockey is played on a hockey rink. During normal play, there are six players per side on the ice at any time, one of them being the goaltender, each of whom is on ice skates. The objective of the game is to score goals by shooting a hard vulcanized rubber disc, the puck, into the opponent's goal net, which is placed at the opposite end of the rink. The players use their sticks to pass or shoot the puck.
Within certain restrictions, players may redirect the puck with any part of their body. Players may not hold the puck in their hand and are prohibited from using their hands to pass the puck to their teammates unless they are in the defensive zone. Players are also prohibited from kicking the puck into the opponent's goal, though unintentional redirections off the skate are permitted. Players may not intentionally bat the puck into the net with their hands.
Hockey is an off-side game, meaning that forward passes are allowed, unlike in rugby. Before the 1930s, hockey was an on-side game, meaning that only backward passes were allowed. Those rules favoured individual stick-handling as a key means of driving the puck forward. With the arrival of offside rules, the forward pass transformed hockey into a true team sport, where individual performance diminished in importance relative to team play, which could now be coordinated over the entire surface of the ice as opposed to merely rearward players.[55]
The six players on each team are typically divided into three forwards, two defencemen, and a goaltender. The term skaters is typically used to describe all players who are not goaltenders. The forward positions consist of a centre and two wingers: a left wing and a right wing. Forwards often play together as units or lines, with the same three forwards always playing together. The defencemen usually stay together as a pair, generally divided between left and right. Left and right side wingers or defencemen are generally positioned as such, based on the side on which they carry their stick. A substitution of an entire unit at once is called a line change. Teams typically employ alternate sets of forward lines and defensive pairings when short-handed or on a power play. The goaltender stands in a semi-circle called the crease (usually painted blue) in the defensive zone, keeping pucks from going in. Substitutions are permitted at any time during the game, although during a stoppage of play the home team is permitted the final change. When players are substituted during play, it is called changing on the fly. A new NHL rule added in the 2005–06 season prevents a team from changing its line after it ices the puck.
The boards surrounding the ice help keep the puck in play and they can also be used as tools to play the puck. Players are permitted to bodycheck opponents into the boards as a means of stopping progress. The referees, linesmen and the outsides of the goal are "in play" and do not cause a stoppage of the game when the puck or players are influenced (by either bouncing or colliding) into them. Play can be stopped if the goal is knocked out of position. Play often proceeds for minutes without interruption. When play is stopped, it is restarted with a faceoff. Two players face each other and an official drops the puck to the ice, where the two players attempt to gain control of the puck. Markings (circles) on the ice indicate the locations for the faceoff and guide the positioning of players.
Three major rules of play in ice hockey limit the movement of the puck: offside, icing, and the puck going out of play. A player is offside if he enters his opponent's zone before the puck itself. Under many situations, a player may not "ice the puck", that is, shoot the puck all the way across both the centre line and the opponent's goal line. The puck goes out of play whenever it goes past the perimeter of the ice rink (onto the player benches, over the glass, or onto the protective netting above the glass), and a stoppage of play is called by the officials using whistles. It does not matter if the puck comes back onto the ice surface from those areas, as the puck is considered dead once it leaves the perimeter of the rink.
Under IIHF rules, each team may carry a maximum of 20 players and two goaltenders on their roster. NHL rules restrict the total number of players per game to 18, plus two goaltenders. In the NHL, the players are usually divided into four lines of three forwards, and into three pairs of defencemen. On occasion, teams may elect to substitute an extra defenceman for a forward. The seventh defenceman may play as a substitute defenceman, spend the game on the bench, or if a team chooses to play four lines then this seventh defenceman may see ice-time on the fourth line as a forward.
A professional game consists of three periods of twenty minutes, the clock running only when the puck is in play. The teams change ends after each period of play, including overtime. Recreational leagues and children's leagues often play shorter games, generally with three shorter periods of play.
Various procedures are used if a tie occurs. In tournament play, as well as in the NHL playoffs, North Americans favour sudden death overtime, in which the teams continue to play twenty-minute periods until a goal is scored. Up until the 1999–2000 season, regular-season NHL games were settled with a single five-minute sudden death period with five players (plus a goalie) per side, with both teams awarded one point in the standings in the event of a tie. With a goal, the winning team would be awarded two points and the losing team none (just as if they had lost in regulation).
From the 1999–2000 until the 2003–04 seasons, the National Hockey League decided ties by playing a single five-minute sudden death overtime period with each team having four skaters per side (plus the goalie). In the event of a tie, each team would still receive one point in the standings, but in the event of a victory the winning team would be awarded two points in the standings and the losing team one point. The idea was to discourage teams from playing for a tie, since previously some teams might have preferred a tie and one point to risking a loss and zero points. The only exception to this rule is if a team opts to pull its goalie in exchange for an extra skater during overtime and is subsequently scored upon (an empty net goal), in which case the losing team receives no points for the overtime loss. Since the 2015–16 season, the single five-minute sudden death overtime session involves three skaters on each side. Since three skaters must always be on the ice in an NHL game, the consequences of penalties are slightly different from those during regulation play. If a team is on a power play when overtime begins, that team will play with more than three skaters (usually four, very rarely five) until the expiration of the penalty. Any penalty during overtime that would result in a team losing a skater during regulation instead causes the non-penalized team to add a skater. Once the penalized team's penalty ends, the number of skaters on each side is adjusted accordingly: the penalized team adds a skater in regulation, while in overtime the non-penalized team subtracts its extra skater. This holds until the next stoppage of play.[56]
International play and several North American professional leagues, including the NHL (in the regular season), now use an overtime period identical to that from 1999–2000 to 2003–04 followed by a penalty shootout. If the score remains tied after an extra overtime period, the subsequent shootout consists of three players from each team taking penalty shots. After these six total shots, the team with the most goals is awarded the victory. If the score is still tied, the shootout then proceeds to a sudden death format. Regardless of the number of goals scored during the shootout by either team, the final score recorded will award the winning team one more goal than the score at the end of regulation time. In the NHL if a game is decided in overtime or by a shootout the winning team is awarded two points in the standings and the losing team is awarded one point. Ties no longer occur in the NHL.
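The point allocations described above reduce to a simple rule: two points for any win, one point for a loss in overtime or a shootout, and none for a regulation loss. A minimal Python sketch of the current system (the function and parameter names are invented for this example):

    def standings_points(won: bool, decided_past_regulation: bool) -> int:
        # Any win is worth two points.
        if won:
            return 2
        # A loss still earns one point if the game was tied after regulation.
        return 1 if decided_past_regulation else 0

    assert standings_points(won=True, decided_past_regulation=True) == 2
    assert standings_points(won=False, decided_past_regulation=True) == 1   # OT/SO loss
    assert standings_points(won=False, decided_past_regulation=False) == 0  # regulation loss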
The overtime mode for the NHL playoffs differs from the regular season. In the playoffs there are no shootouts and no ties. If a game is tied after regulation, an additional 20 minutes of five-on-five sudden death overtime is played. If the game remains tied after that overtime, further 20-minute overtimes are played until a team scores, winning the match. Since 2019, the IIHF World Championships and the medal games in the Olympics have used the same format, but with three skaters a side.
In ice hockey, infractions of the rules lead to play stoppages whereby the play is restarted at a face off. Some infractions result in the imposition of a penalty to a player or team. In the simplest case, the offending player is sent to the penalty box and their team has to play with one less player on the ice for a designated amount of time. Minor penalties last for two minutes, major penalties last for five minutes, and a double minor penalty is two consecutive penalties of two minutes duration. A single minor penalty may be extended by a further two minutes for causing visible injury to the victimized player. This is usually when blood is drawn during high sticking. Players may be also assessed personal extended penalties or game expulsions for misconduct in addition to the penalty or penalties their team must serve. The team that has been given a penalty is said to be playing short-handed while the opposing team is on a power play.
A two-minute minor penalty is often charged for lesser infractions such as tripping, elbowing, roughing, high-sticking, delay of the game, too many players on the ice, boarding, illegal equipment, charging (leaping into an opponent or body-checking him after taking more than two strides), holding, holding the stick (grabbing an opponent's stick), interference, hooking, slashing, kneeing, unsportsmanlike conduct (arguing a penalty call with referee, extremely vulgar or inappropriate verbal comments), "butt-ending" (striking an opponent with the knob of the stick—a very rare penalty), "spearing", or cross-checking. As of the 2005–2006 season, a minor penalty is also assessed for diving, where a player embellishes or simulates an offence. More egregious fouls may be penalized by a four-minute double-minor penalty, particularly those that injure the victimized player. These penalties end either when the time runs out or when the other team scores during the power play. In the case of a goal scored during the first two minutes of a double-minor, the penalty clock is set down to two minutes upon a score, effectively expiring the first minor penalty. Five-minute major penalties are called for especially violent instances of most minor infractions that result in intentional injury to an opponent, or when a minor penalty results in visible injury (such as bleeding), as well as for fighting. Major penalties are always served in full; they do not terminate on a goal scored by the other team. Major penalties assessed for fighting are typically offsetting, meaning neither team is short-handed and the players exit the penalty box upon a stoppage of play following the expiration of their respective penalties. The foul of boarding (defined as "check[ing] an opponent in such a manner that causes the opponent to be thrown violently in the boards")[57] is penalized either by a minor or major penalty at the discretion of the referee, based on the violent state of the hit. A minor or major penalty for boarding is often assessed when a player checks an opponent from behind and into the boards.
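The double-minor clock behaviour described above can be stated precisely. A sketch under the simplifying assumption that only a power-play goal by the opposing team affects the clock (the function name is invented):

    def double_minor_remaining(minutes_served: float, powerplay_goal: bool) -> float:
        # No goal: the full four minutes simply count down.
        if not powerplay_goal:
            return max(0.0, 4.0 - minutes_served)
        # A goal during the first two minutes wipes out the first minor,
        # leaving the full second two-minute minor on the clock.
        if minutes_served < 2.0:
            return 2.0
        # A goal during the second minor ends the penalty outright.
        return 0.0

    print(double_minor_remaining(1.5, powerplay_goal=True))  # 2.0
    print(double_minor_remaining(3.0, powerplay_goal=True))  # 0.0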
Some varieties of penalties do not always require the offending team to play a man short. Concurrent five-minute major penalties in the NHL usually result from fighting. In the case of two players being assessed five-minute fighting majors, both the players serve five minutes without their team incurring a loss of player (both teams still have a full complement of players on the ice). This differs from the case of two players from opposing sides receiving minor penalties, at the same time or at any intersecting moment, resulting from more common infractions. In this case, both teams will have only four skating players (not counting the goaltender) until one or both penalties expire (if one penalty expires before the other, the opposing team gets a power play for the remainder of the time); this applies regardless of current pending penalties. However, in the NHL, a team always has at least three skaters on the ice. Thus, ten-minute misconduct penalties are served in full by the penalized player, but his team may immediately substitute another player on the ice unless a minor or major penalty is assessed in conjunction with the misconduct (a two-and-ten or five-and-ten). In this case, the team designates another player to serve the minor or major; both players go to the penalty box, but only the designee may not be replaced, and he is released upon the expiration of the two or five minutes, at which point the ten-minute misconduct begins. In addition, game misconducts are assessed for deliberate intent to inflict severe injury on an opponent (at the officials' discretion), or for a major penalty for a stick infraction or repeated major penalties. The offending player is ejected from the game and must immediately leave the playing surface (he does not sit in the penalty box); meanwhile, if an additional minor or major penalty is assessed, a designated player must serve out that segment of the penalty in the box (similar to the above-mentioned "two-and-ten"). In some rare cases, a player may receive up to nineteen minutes in penalties for one string of plays. This could involve receiving a four-minute double minor penalty, getting in a fight with an opposing player who retaliates, and then receiving a game misconduct after the fight. In this case, the player is ejected and two teammates must serve the double-minor and major penalties.
A penalty shot is awarded to a player when the illegal actions of another player stop a clear scoring opportunity, most commonly when the player is on a breakaway. A penalty shot allows the obstructed player to pick up the puck on the centre red-line and attempt to score on the goalie with no other players on the ice, to compensate for the earlier missed scoring opportunity. A penalty shot is also awarded for a defender other than the goaltender covering the puck in the goal crease, a goaltender intentionally displacing his own goal posts during a breakaway to avoid a goal, a defender intentionally displacing his own goal posts when there is less than two minutes to play in regulation time or at any point during overtime, or a player or coach intentionally throwing a stick or other object at the puck or the puck carrier and the throwing action disrupts a shot or pass play.
Officials also stop play for puck movement violations, such as using one's hands to pass the puck in the offensive end, but no players are penalized for these offences. The sole exceptions are deliberately falling on or gathering the puck to the body, carrying the puck in the hand, and shooting the puck out of play in one's defensive zone (all penalized two minutes for delay of game).
In the NHL, a unique penalty applies to goaltenders. Goaltenders are now forbidden to play the puck in the "corners" of the rink near their own net; doing so results in a two-minute penalty against the goaltender's team. The goaltender may play the puck only in the area in front of the goal line and in the area immediately behind the net (marked by two red lines on either side of the net).
An additional rule that has never been a penalty, but was an infraction in the NHL before recent rules changes, is the two-line offside pass. Prior to the 2005–06 NHL season, play was stopped when a pass from inside a team's defending zone crossed the centre line, with a face-off held in the defending zone of the offending team. Now, the centre line is no longer used in the NHL to determine a two-line pass infraction, a change that the IIHF had adopted in 1998. Players may now pass to teammates who are beyond both the blue line and the centre red line.
The NHL has taken steps to speed up the game of hockey and create a game of finesse, by retreating from the past when illegal hits, fights, and "clutching and grabbing" among players were commonplace. Rules are now more strictly enforced, resulting in more penalties, which in turn provides more protection to the players and facilitates more goals being scored. The governing body for United States' amateur hockey has implemented many new rules to reduce the number of stick-on-body occurrences, as well as other detrimental and illegal facets of the game ("zero tolerance").
In men's hockey, but not in women's, a player may use his hip or shoulder to hit another player if the player has the puck or is the last to have touched it. This use of the hip and shoulder is called body checking. Not all physical contact is legal—in particular, hits from behind, hits to the head and most types of forceful stick-on-body contact are illegal.
A delayed penalty call occurs when a penalty offence is committed by the team that does not have possession of the puck. In this circumstance the team with possession of the puck is allowed to complete the play; that is, play continues until a goal is scored, a player on the opposing team gains control of the puck, or the team in possession commits an infraction or penalty of their own. Because the team on which the penalty was called cannot control the puck without stopping play, it is impossible for them to score a goal. In these cases, the team in possession of the puck can pull the goalie for an extra attacker without fear of being scored on. However, it is possible for the controlling team to mishandle the puck into their own net. If a delayed penalty is signalled and the team in possession scores, the penalty is still assessed to the offending player, but not served. In 2012, this rule was changed by the United States' National Collegiate Athletic Association (NCAA) for college level hockey. In college games, the penalty is still enforced even if the team in possession scores.[58]
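The delayed-penalty outcome therefore hinges on a single question once play stops: did the non-offending team score first? A tiny sketch of the NHL-style rule and the post-2012 NCAA variant (the function name is invented):

    def delayed_minor_is_served(goal_by_non_offending_team: bool,
                                ncaa_rules: bool = False) -> bool:
        # NHL-style: a goal cancels the delayed minor (assessed, not served).
        # NCAA (since 2012): the minor is served even after a goal.
        return ncaa_rules or not goal_by_non_offending_team

    print(delayed_minor_is_served(True))                   # False (NHL-style)
    print(delayed_minor_is_served(True, ncaa_rules=True))  # True  (NCAA)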
A typical game of hockey is governed by two to four officials on the ice, charged with enforcing the rules of the game. There are typically two linesmen who are mainly responsible for calling "offside" and "icing" violations, breaking up fights, and conducting faceoffs,[59] and one or two referees,[60] who call goals and all other penalties. Linesmen can, however, report to the referee(s) that a penalty should be assessed against an offending player in some situations.[61] The restrictions on this practice vary depending on the governing rules. On-ice officials are assisted by off-ice officials who act as goal judges, time keepers, and official scorers.
The most widespread system in use today is the "three-man system", which uses one referee and two linesmen. A less common system uses two referees and one linesman; it is very close to the regular three-man system except for a few procedural changes. Beginning with the National Hockey League, a number of leagues have implemented the "four-official system", in which an additional referee is added to aid in the calling of penalties normally difficult for a single referee to assess. The system has been used in every NHL game since 2001, at IIHF World Championships, the Olympics and in many professional and high-level amateur leagues in North America and Europe.
Officials are selected by the league they work for. Amateur hockey leagues use guidelines established by national organizing bodies as a basis for choosing their officiating staffs. In North America, the national organizing bodies Hockey Canada and USA Hockey approve officials according to their experience level as well as their ability to pass rules knowledge and skating ability tests. Hockey Canada has officiating levels I through VI.[62] USA Hockey has officiating levels 1 through 4.[63]
Since men's ice hockey is a full contact sport, body checks are allowed, so injuries are a common occurrence. Protective equipment is mandatory and is enforced in all competitive situations. This includes a helmet with either a visor or a full face mask, shoulder pads, elbow pads, mouth guard, protective gloves, heavily padded shorts (also known as hockey pants) or a girdle, athletic cup (also known as a jock, for males; and jill, for females), shin pads, skates, and (optionally) a neck protector.
Goaltenders use different equipment. With hockey pucks approaching them at speeds of up to 100 mph (160 km/h), they must wear equipment with more protection. Goaltenders wear specialized goalie skates (built more for side-to-side movement than for skating forwards and backwards), a jock or jill, large leg pads (there are size restrictions in certain leagues), a blocking glove, a catching glove, a chest protector, a goalie mask, and a large jersey. Goaltenders' equipment has continually become larger, leading to fewer goals per game and many official rule changes.
Hockey skates are optimized for physical acceleration, speed and manoeuvrability. This includes rapid starts, stops, turns, and changes in skating direction. In addition, they must be rigid and tough to protect the skater's feet from contact with other skaters, sticks, pucks, the boards, and the ice itself. Rigidity also improves the overall manoeuvrability of the skate. Blade length, thickness (width), curvature (rocker/radius, front to back) and radius of hollow (across the blade width) are quite different from speed or figure skates. Hockey players usually adjust these parameters based on their skill level, position, and body type. The blades of most skates are about 1⁄8 inch (3.2 mm) wide.
The hockey stick consists of a long, relatively wide, and slightly curved flat blade, attached to a shaft. The curve itself has a big impact on performance: a deep curve makes it easier to lift the puck, while a shallow curve makes backhand shots easier. The flex of the stick also affects performance. Typically, a stiffer stick suits a stronger player, who looks for a flex that bends easily under load yet has a strong "whip-back" that sends the puck flying at high speed. The stick is quite distinct from sticks in other sports games and is most suited to hitting and controlling the flat puck. Its unique shape contributed to the early development of the game.
Ice hockey is a full contact sport and carries a high risk of injury. Players move at speeds of approximately 20–30 mph (30–50 km/h), and much of the game revolves around physical contact between the players. Skate blades, hockey sticks, shoulder contact, hip contact, and hockey pucks can all potentially cause injuries. The types of injuries associated with hockey include: lacerations, concussions, contusions, ligament tears, broken bones, hyperextensions, and muscle strains. Women's ice hockey players are allowed to contact other players but are not allowed to body check.
Compared to athletes who play other sports, ice hockey players are at higher risk of overuse injuries and injuries caused by early sports specialization by teenagers.[64]
According to the Hughston Health Alert, "Lacerations to the head, scalp, and face are the most frequent types of injury [in hockey]."[65] Even a shallow cut to the head results in a loss of a large amount of blood. Direct trauma to the head is estimated to account for 80% of all hockey injuries as a result of player contact with other players or hockey equipment.[65]
One of the leading causes of head injury is body checking from behind. Due to the danger of delivering a check from behind, many leagues, including the NHL, have made this a major and game misconduct penalty (called "boarding"). Another type of check that accounts for many of the player-to-player contact concussions is a check to the head, resulting in a misconduct penalty (called "head contact"). A check to the head can be defined as delivering a hit while the receiving player's head is down and their waist is bent, with the aggressor targeting the opponent's head.
The most dangerous result of a head injury in hockey is a concussion. Most concussions occur during player-to-player contact rather than when a player is checked into the boards. Checks to the head have accounted for nearly 50% of concussions that players in the National Hockey League have suffered. In recent years, the NHL has implemented new rules which penalize and suspend players for illegal checks to the head, as well as checks to unsuspecting players. Concussions that players suffer may go unreported because there are no obvious physical signs if a player is not knocked unconscious. This can prove dangerous if a player decides to return to play without receiving proper medical attention. Studies show that ice hockey causes 44.3% of all traumatic brain injuries among Canadian children.[66] In severe cases, these traumatic brain injuries can result in death, although such occurrences are rare.
An important defensive tactic is checking—attempting to take the puck from an opponent or to remove the opponent from play. Stick checking, sweep checking, and poke checking are legal uses of the stick to obtain possession of the puck. The neutral zone trap is designed to isolate the puck carrier in the neutral zone preventing him from entering the offensive zone. Body checking is using one's shoulder or hip to strike an opponent who has the puck or who is the last to have touched it (the last person to have touched the puck is still legally "in possession" of it, although a penalty is generally called if he is checked more than two seconds after his last touch). Body checking is also a penalty in certain leagues in order to reduce the chance of injury to players. Often the term checking is used to refer to body checking, with its true definition generally only propagated among fans of the game.
Offensive tactics include improving a team's position on the ice by advancing the puck out of one's zone towards the opponent's zone, progressively gaining lines: first the team's own blue line, then the red line, and finally the opponent's blue line. NHL rules introduced for the 2006 season redefined the offside rule to make the two-line pass legal; a player may pass the puck from behind his own blue line, past both that blue line and the centre red line, to a player on the near side of the opponents' blue line. Offensive tactics are designed ultimately to score a goal by taking a shot. When a player purposely directs the puck towards the opponent's goal, he or she is said to "shoot" the puck.
A deflection is a shot that redirects a shot or a pass towards the goal from another player, by allowing the puck to strike the stick and carom towards the goal. A one-timer is a shot struck directly off a pass, without receiving the pass and shooting in two separate actions. Headmanning the puck, also known as breaking out, is the tactic of rapidly passing to the player farthest down the ice. Loafing, also known as cherry-picking, is when a player, usually a forward, skates behind an attacking team, instead of playing defence, in an attempt to create an easy scoring chance.
A team that is losing by one or two goals in the last few minutes of play will often elect to pull the goalie; that is, remove the goaltender and replace him or her with an extra attacker on the ice in the hope of gaining enough advantage to score a goal. However, it is an act of desperation, as it sometimes leads to the opposing team extending their lead by scoring a goal in the empty net.
One of the most important strategies for a team is their forecheck. Forechecking is the act of attacking the opposition in their defensive zone. Forechecking is an important part of the dump and chase strategy (i.e. shooting the puck into the offensive zone and then chasing after it). Each team will use its own unique system, but the main ones are the 2–1–2, the 1–2–2, and the 1–4. The 2–1–2 is the most basic forecheck system: two forwards go in deep and pressure the opposition's defencemen, the third forward stays high, and the two defencemen stay at the blue line. The 1–2–2 is a more conservative system in which one forward pressures the puck carrier and the other two forwards cover the opposition's wingers, with the two defencemen staying at the blue line. The 1–4 is the most defensive forecheck system, referred to as the neutral zone trap, in which one forward applies pressure to the puck carrier around the opposition's blue line and the other four players stand roughly in a line by their own blue line in the hope that the opposition will skate into one of them. Another strategy is the left wing lock, in which two forwards pressure the puck while the left wing and the two defencemen stay at the blue line.
There are many other little tactics used in the game of hockey. Cycling moves the puck along the boards in the offensive zone to create a scoring chance by making defenders tired or moving them out of position. Pinching is when a defenceman pressures the opposition's winger in the offensive zone when they are breaking out, attempting to stop their attack and keep the puck in the offensive zone. A saucer pass is a pass used when an opposition's stick or body is in the passing lane. It is the act of raising the puck over the obstruction and having it land on a teammate's stick.
A deke, short for "decoy", is a feint with the body or stick to fool a defender or the goalie. Many modern players, such as Pavel Datsyuk, Sidney Crosby and Patrick Kane, have picked up the skill of "dangling", which is fancier deking and requires more stick handling skills.
Although fighting is officially prohibited in the rules, it is not an uncommon occurrence at the professional level, and its prevalence has been both a target of criticism and a considerable draw for the sport. At the professional level in North America fights are unofficially condoned. Enforcers and other players fight to demoralize the opposing players while exciting their own, as well as settling personal scores. A fight will also break out if one of the team's skilled players gets hit hard or someone receives what the team perceives as a dirty hit. The amateur game penalizes fisticuffs more harshly, as a player who receives a fighting major is also assessed at least a 10-minute misconduct penalty (NCAA and some Junior leagues) or a game misconduct penalty and suspension (high school and younger, as well as some casual adult leagues).[67] Crowds seem to like fighting in ice hockey and cheer when fighting erupts.[68]
Ice hockey is one of the fastest growing women's sports in the world, with the number of participants increasing by 400 percent from 1995 to 2005.[69] In 2011, Canada had 85,827 women players,[70] United States had 65,609,[71] Finland 4,760,[72] Sweden 3,075[73] and Switzerland 1,172.[74] While there are not as many organized leagues for women as there are for men, there exist leagues of all levels, including the Canadian Women's Hockey League (CWHL), Western Women's Hockey League, National Women's Hockey League (NWHL), Mid-Atlantic Women's Hockey League, and various European leagues; as well as university teams, national and Olympic teams, and recreational teams. The IIHF holds IIHF World Women's Championships tournaments in several divisions; championships are held annually, except that the top flight does not play in Olympic years.[75]
The chief difference between women's and men's ice hockey is that body checking is prohibited in women's hockey. After the 1990 Women's World Championship, body checking was eliminated in women's hockey. In current IIHF women's competition, body checking is either a minor or major penalty, decided at the referee's discretion.[76] In addition, players in women's competition are required to wear protective full-face masks.[76]
In Canada, to some extent ringette has served as the female counterpart to ice hockey, in the sense that traditionally, boys have played hockey while girls have played ringette.[77]
Women are known to have played the game in the 19th century. Several games were recorded in the 1890s in Ottawa, Ontario, Canada. The women of Lord Stanley's family were known to participate in the game of ice hockey on the outdoor ice rink at Rideau Hall, the residence of Canada's Governor-General.
The game developed at first without an organizing body. A tournament in 1902 between Montreal and Trois-Rivieres was billed as the first championship tournament. Several tournaments, such as at the Banff Winter Carnival, were held in the early 20th century, and numerous women's teams, such as the Seattle Vamps and Vancouver Amazons, existed. Organizations started to develop in the 1920s, such as the Ladies Ontario Hockey Association and, later, the Dominion Women's Amateur Hockey Association. Starting in the 1960s, the game spread to universities. Today, the sport is played from youth through adult leagues, and in the universities of North America and internationally. There have been two major professional women's hockey leagues that paid their players: the National Women's Hockey League with teams in the United States and the Canadian Women's Hockey League with teams in Canada, China, and the United States.
The first women's world championship tournament, albeit unofficial, was held in 1987 in Toronto, Ontario, Canada. This was followed by the first IIHF World Championship in 1990 in Ottawa. Women's ice hockey was added as a medal sport at the 1998 Winter Olympics in Nagano, Japan. The United States won the gold, Canada won the silver and Finland won the bronze medal.[78] Canada won in 2002, 2006, 2010, and 2014, and also reached the gold medal game in 2018, where it lost in a shootout to the United States, their first loss in a competitive Olympic game since 2002.[79]
The United States Hockey League (USHL) welcomed the first female professional ice hockey player in 1969–70, when the Marquette Iron Rangers signed Karen Koch.[80] One woman, Manon Rhéaume, has played in an NHL pre-season game as a goaltender for the Tampa Bay Lightning against the St. Louis Blues. In 2003, Hayley Wickenheiser played with the Kirkkonummi Salamat in the Finnish men's Suomi-sarja league. Several women have competed in North American minor leagues, including Rhéaume, goaltenders Kelly Dyer and Erin Whitten and defenceman Angela Ruggiero.
With interest in women's ice hockey growing, between 2007 and 2010 the number of registered female players worldwide grew from 153,665 to 170,872. Women's hockey is on the rise in almost every part of the world and there are teams in North America, Europe, Asia, Oceania, Africa and Latin America.[81]
The future of international women's ice hockey, including how IIHF member associations could work together, was discussed at the World Hockey Summit in 2010.[82] International Olympic Committee president Jacques Rogge stated that the women's hockey tournament might be eliminated from the Olympics since the event was not competitively balanced, being dominated by Canada and the United States.[83] Team Canada captain Hayley Wickenheiser explained that the talent gap between the North American and European countries was due to the presence of women's professional leagues in North America, along with year-round training facilities. She stated that the European players were talented, but their respective national team programs were not given the same level of support as the European men's national teams or the North American women's national teams.[84] She stressed the need for women to have their own professional league, which would be for the benefit of international hockey.[85]
The primary women's professional hockey league in North America is the National Women's Hockey League (NWHL) with five teams located in the United States and one in Canada.[86] From 2007 until 2019 the Canadian Women's Hockey League (CWHL) operated with teams in Canada, the United States and China.[87]
The CWHL was founded in 2007 and originally consisted of seven teams in Canada, but underwent several membership changes, including the addition of a team in the United States in 2010. When the league launched, its players were compensated only for travel and equipment. The league began paying its players a stipend in the 2017–18 season, when it launched its first teams in China. For the league's 2018–19 season, there were six teams: the Calgary Inferno, Les Canadiennes de Montreal, Markham Thunder, Shenzhen KRS Vanke Rays, Toronto Furies, and the Worcester Blades.[87] The CWHL ceased operations in 2019, citing unsustainable business operations.
The NWHL was founded in 2015 with four teams in the Northeast United States and was the first North American women's league to pay its players. The league expanded to five teams in 2018 with the addition of the formerly independent Minnesota Whitecaps.[88] The league had conditionally approved Canadian expansion teams in Montreal and Toronto following the dissolution of the CWHL, but a lack of investors postponed further expansion. On April 22, 2020, the NWHL officially announced that Toronto was awarded an expansion team for the 2020–21 season, growing the league to six teams: the Boston Pride, Buffalo Beauts, Connecticut Whale, Metropolitan Riveters, Minnesota Whitecaps, and the Toronto Six.
The following is a list of professional ice hockey leagues by attendance:
The NHL is by far the best attended and most popular ice hockey league in the world, and is among the major professional sports leagues in the United States and Canada. The league's history began after Canada's National Hockey Association decided to disband in 1917; the result was the creation of the National Hockey League with four teams. The league expanded to the United States beginning in 1924 and had as many as 10 teams before contracting to six teams by 1942–43. In 1967, the NHL doubled in size to 12 teams, undertaking one of the greatest expansions in professional sports history. A few years later, in 1972, a new 12-team league, the World Hockey Association (WHA), was formed; its ensuing rivalry with the NHL caused an escalation in player salaries. In 1979, the 17-team NHL merged with the WHA, creating a 21-team league.[93] By 2017, the NHL had expanded to 31 teams, and after a realignment in 2013, these teams were divided into two conferences and four divisions.[94] The league is expected to expand to 32 teams by 2021.
The American Hockey League (AHL), sometimes referred to as "The A",[95] is the primary developmental professional league for players aspiring to enter the NHL. It comprises 31 teams from the United States and Canada. It is run as a "farm league" to the NHL, with the vast majority of AHL players under contract to an NHL team. The ECHL (called the East Coast Hockey League before the 2003–04 season) is a mid-level minor league in the United States with a few players under contract to NHL or AHL teams.
As of 2019, there are three minor professional leagues with no NHL affiliations: the Federal Prospects Hockey League (FPHL), Ligue Nord-Américaine de Hockey (LNAH), and the Southern Professional Hockey League (SPHL).
U Sports ice hockey is the highest level of play at the Canadian university level under the auspices of U Sports, Canada's governing body for university sports. As these players compete at the university level, they must follow the standard eligibility rule of five years.
In the United States especially, college hockey is popular and the best university teams compete in the annual NCAA Men's Ice Hockey Championship. The American Collegiate Hockey Association is composed of college teams at the club level.
In Canada, the Canadian Hockey League is an umbrella organization comprising three major junior leagues: the Ontario Hockey League, the Western Hockey League, and the Quebec Major Junior Hockey League. It attracts players from Canada, the United States and Europe. Major junior players are considered amateurs, as they are under 21 years old and are not paid a salary; however, they receive a stipend and play a schedule similar to that of a professional league. Typically, the NHL drafts many players directly from the major junior leagues.
In the United States, the United States Hockey League (USHL) is the highest junior league. Players in this league are also amateurs and must be under 21 years old, but they do not receive a stipend, which allows them to retain their eligibility for NCAA ice hockey.
The Kontinental Hockey League (KHL) is the largest and most popular ice hockey league in Eurasia. The league is the direct successor to the Russian Super League, which in turn was the successor to the Soviet League, the history of which dates back to the Soviet adoption of ice hockey in the 1940s. The KHL was launched in 2008 with clubs predominantly from Russia, but featuring teams from other post-Soviet states. The league expanded beyond the former Soviet countries beginning in the 2011–12 season, with clubs in Croatia and Slovakia. The KHL currently comprises member clubs based in Belarus (1), China (1), Finland (1), Latvia (1), Kazakhstan (1) and Russia (19) for a total of 24.
The second division of hockey in Eurasia is the Supreme Hockey League (VHL). This league features 24 teams from Russia and 2 from Kazakhstan. This league is currently being converted to a farm league for the KHL, similarly to the AHL's function in relation to the NHL. The third division is the Russian Hockey League, which features only teams from Russia. The Asia League, an international ice hockey league featuring clubs from China, Japan, South Korea, and the Russian Far East, is the successor to the Japan Ice Hockey League.
The highest junior league in Eurasia is the Junior Hockey League (MHL). It features 32 teams from post-Soviet states, predominantly Russia. The second tier to this league is the Junior Hockey League Championships (MHL-B).
Several countries in Europe have their own top professional senior leagues. Many future KHL and NHL players start or end their professional careers in these leagues. The National League A in Switzerland, Swedish Hockey League in Sweden, Liiga in Finland, and Czech Extraliga in the Czech Republic are all very popular in their respective countries.
Beginning in the 2014–15 season, the Champions Hockey League was launched, a league consisting of first-tier teams from several European countries, running parallel to the teams' domestic leagues. The competition is meant to serve as a Europe-wide ice hockey club championship. The competition is a direct successor to the European Trophy and is related to the 2008–09 tournament of the same name.
There are also several annual tournaments for clubs, held outside of league play. Pre-season tournaments include the European Trophy, Tampere Cup and the Pajulahti Cup. One of the oldest international ice hockey competitions for clubs is the Spengler Cup, held every year in Davos, Switzerland, between Christmas and New Year's Day. It was first awarded in 1923 to the Oxford University Ice Hockey Club. The Memorial Cup, a competition for junior-level (age 20 and under) clubs, is held annually from a pool of junior championship teams in Canada and the United States.
International club competitions organized by the IIHF include the Continental Cup, the Victoria Cup and the European Women's Champions Cup. The World Junior Club Cup is an annual tournament of junior ice hockey clubs representing each of the top junior leagues.
The Australian Ice Hockey League and New Zealand Ice Hockey League are represented by nine and five teams respectively. As of 2012, the two top teams of the previous season from each league compete in the Trans-Tasman Champions League.
Ice hockey in Africa is a small but growing sport; while no African ice hockey playing nation has a domestic national league, there are several regional leagues in South Africa.
Ice hockey has been played at the Winter Olympics since 1924 (and was played at the summer games in 1920). Hockey is Canada's national winter sport, and Canadians are extremely passionate about the game. The nation has traditionally done very well at the Olympic games, winning 6 of the first 7 gold medals. However, by 1956 its amateur club teams and national teams could not compete with the teams of government-supported players from the Soviet Union. The USSR won all but two gold medals from 1956 to 1988. The United States won its first gold medal in 1960. On the way to winning the gold medal at the 1980 Lake Placid Olympics, amateur US college players defeated the heavily favoured Soviet squad—an event known as the "Miracle on Ice" in the United States. Restrictions on professional players were fully dropped at the 1988 games in Calgary. The NHL agreed to participate ten years later: the 1998 Games saw the full participation of NHL players, with the league suspending operations during the Games, as it did for subsequent Games up until 2018. The 2010 games in Vancouver were the first played in an NHL city since the inclusion of NHL players, and the first played on NHL-sized ice rinks, which are narrower than the IIHF standard.
National teams representing the member federations of the IIHF compete annually in the IIHF Ice Hockey World Championships. Teams are selected from the available players by the individual federations, without restriction on amateur or professional status. Since it is held in the spring, the tournament coincides with the annual NHL Stanley Cup playoffs and many of the top players are hence not available to participate in the tournament. Many of the NHL players who do play in the IIHF tournament come from teams eliminated before the playoffs or in the first round, and federations often hold open spots until the tournament to allow for players to join the tournament after their club team is eliminated. For many years, the tournament was an amateur-only tournament, but this restriction was removed beginning in 1977.
The 1972 Summit Series and 1974 Summit Series, two series pitting the best Canadian and Soviet players against each other without IIHF restrictions, were major successes and established a rivalry between Canada and the USSR. In the spirit of best-versus-best without restrictions on amateur or professional status, the series were followed by five Canada Cup tournaments, played in North America. Two NHL versus USSR series were also held: the 1979 Challenge Cup and Rendez-vous '87. The Canada Cup tournament later became the World Cup of Hockey, played in 1996, 2004 and 2016. The United States won in 1996 and Canada won in 2004 and 2016.
Since the initial women's world championships in 1990, there have been fifteen tournaments.[75] Women's hockey has been played at the Olympics since 1998.[78] The only finals in the women's world championship or Olympics that did not involve both Canada and the United States were the 2006 Winter Olympic final between Canada and Sweden and the 2019 World Championship final between the US and Finland.
Other ice hockey tournaments featuring national teams include the World U20 Championship, the World U18 Championships, the World U-17 Hockey Challenge, the World Junior A Challenge, the Ivan Hlinka Memorial Tournament, the World Women's U18 Championships and the 4 Nations Cup. The annual Euro Hockey Tour, an unofficial European championship between the national men's teams of the Czech Republic, Finland, Russia and Sweden, has been played since 1996–97.
The attendance record for an ice hockey game was set on December 11, 2010, when the University of Michigan's men's ice hockey team faced cross-state rival Michigan State in an event billed as "The Big Chill at the Big House". The game was played at Michigan's (American) football venue, Michigan Stadium in Ann Arbor, with a capacity of 109,901 as of the 2010 football season. When UM stopped sales to the public on May 6, 2010, with plans to reserve remaining tickets for students, over 100,000 tickets had been sold for the event.[96] Ultimately, a crowd announced by UM as 113,411, the largest in the stadium's history (including football), saw the homestanding Wolverines win 5–0. Guinness World Records, using a count of ticketed fans who actually entered the stadium instead of UM's figure of tickets sold, announced a final figure of 104,173.[97][98]
The record was approached but not broken at the 2014 NHL Winter Classic, which was also held at Michigan Stadium, with the Detroit Red Wings as the home team and the Toronto Maple Leafs as the opposing team, drawing an announced crowd of 105,491. The record for an NHL Stanley Cup playoff game is 28,183, set on April 23, 1996, at the Thunderdome during a Tampa Bay Lightning – Philadelphia Flyers game.[99]
Number of registered hockey players, including male, female and junior, provided by the respective countries' federations. Note that this list only includes the 38 of 81 IIHF member countries with more than 1,000 registered players as of October 2019.[100][101]
Pond hockey is a form of ice hockey played generally as pick-up hockey on lakes, ponds and artificial outdoor rinks during the winter. Pond hockey is commonly referred to in hockey circles as shinny. Its rules differ from traditional hockey because there is no hitting and very little shooting, placing a greater emphasis on skating, stickhandling and passing abilities. Since 2002, the World Pond Hockey Championship has been played on Roulston Lake in Plaster Rock, New Brunswick, Canada.[102] Since 2006, the US Pond Hockey Championships have been played in Minneapolis, Minnesota, and the Canadian National Pond Hockey Championships have been played in Huntsville, Ontario.
Sledge hockey is an adaptation of ice hockey designed for players who have a physical disability. Players are seated in sleds and use a specialized hockey stick that also helps the player navigate on the ice. The sport was created in Sweden in the early 1960s and is played under rules similar to those of ice hockey.
Ice hockey is the official winter sport of Canada. Ice hockey, partially because of its popularity as a major professional sport, has been a source of inspiration for numerous films, television episodes and songs in North American popular culture.[103][104]
en/2591.html.txt
ADDED
@@ -0,0 +1,105 @@
Katsushika Hokusai (葛飾 北斎, c. 31 October 1760 – 10 May 1849), known simply as Hokusai, was a Japanese artist, ukiyo-e painter and printmaker of the Edo period.[1] Born in Edo (now Tokyo), Hokusai is best known as author of the woodblock print series Thirty-six Views of Mount Fuji (富嶽三十六景, Fugaku Sanjūroku-kei, c. 1831) which includes the internationally iconic print, The Great Wave off Kanagawa.
Hokusai created the Thirty-Six Views both as a response to a domestic travel boom and as part of a personal obsession with Mount Fuji.[2] It was this series, specifically The Great Wave print and Fine Wind, Clear Morning, that secured Hokusai's fame both in Japan and overseas. As historian Richard Lane concludes, "if there is one work that made Hokusai's name, both in Japan and abroad, it must be this monumental print-series".[3] While Hokusai's work prior to this series is certainly important, it was not until this series that he gained broad recognition.[4]
Hokusai's date of birth is unclear, but is often stated as the 23rd day of the 9th month of the 10th year of the Hōreki era (in the old calendar, or 31 October 1760) to an artisan family, in the Katsushika district of Edo, Japan.[5] His childhood name was Tokitarō.[6] It is believed his father was the mirror-maker Nakajima Ise, who produced mirrors for the shōgun.[6] His father never made Hokusai an heir, so it is possible that his mother was a concubine.[5] Hokusai began painting around the age of six, perhaps learning from his father, whose work on mirrors included the painting of designs around them.[5]
Hokusai was known by at least thirty names during his lifetime. While the use of multiple names was a common practice of Japanese artists of the time, his number of pseudonyms exceeds that of any other major Japanese artist. Hokusai's name changes are so frequent, and so often related to changes in his artistic production and style, that they are used for breaking his life up into periods.[5]
At the age of 12, Hokusai was sent by his father to work in a bookshop and lending library, a popular institution in Japanese cities, where reading books made from wood-cut blocks was a popular entertainment of the middle and upper classes.[7] At 14, he worked as an apprentice to a wood-carver, until the age of 18, when he entered the studio of Katsukawa Shunshō. Shunshō was an artist of ukiyo-e, a style of woodblock prints and paintings that Hokusai would master, and head of the so-called Katsukawa school.[6] Ukiyo-e, as practised by artists like Shunshō, focused on images of the courtesans and Kabuki actors who were popular in Japan's cities at the time.[8]
After a year, Hokusai's name changed for the first time, when he was dubbed Shunrō by his master. It was under this name that he published his first prints, a series of pictures of Kabuki actors published in 1779. During the decade he worked in Shunshō's studio, Hokusai was married to his first wife, about whom very little is known except that she died in the early 1790s. He married again in 1797, although this second wife also died after a short time. He fathered two sons and three daughters with these two wives, and his youngest daughter Ei, also known as Ōi, eventually became an artist.[8][9]
Upon the death of Shunshō in 1793, Hokusai began exploring other styles of art, including European styles he was exposed to through French and Dutch copper engravings he was able to acquire.[8] He was soon expelled from the Katsukawa school by Shunkō, the chief disciple of Shunshō, possibly due to studies at the rival Kanō school. This event was, in his own words, inspirational: "What really motivated the development of my artistic style was the embarrassment I suffered at Shunkō's hands."[3]
Hokusai also changed the subjects of his works, moving away from the images of courtesans and actors that were the traditional subjects of ukiyo-e. Instead, his work became focused on landscapes and images of the daily life of Japanese people from a variety of social levels. This change of subject was a breakthrough in ukiyo-e and in Hokusai's career.[8] Fireworks in the Cool of Evening at Ryogoku Bridge in Edo (1790) dates from this period of Hokusai's life.[10]
The next period saw Hokusai's association with the Tawaraya School and the adoption of the name "Tawaraya Sōri". He produced many privately commissioned prints, called surimono, and illustrations for kyōka ehon (illustrated books of humorous poems) during this time. In 1798, Hokusai passed his name on to a pupil and set out as an independent artist, free from ties to a school for the first time, adopting the name Hokusai Tomisa.
By 1800, Hokusai was further developing his use of ukiyo-e for purposes other than portraiture. He had also adopted the name he would most widely be known by, Katsushika Hokusai, the former name referring to the part of Edo where he was born and the latter meaning 'north studio'. That year, he published two collections of landscapes, Famous Sights of the Eastern Capital and Eight Views of Edo (modern Tokyo). He also began to attract students of his own, eventually teaching 50 pupils over the course of his life.[8]
He became increasingly famous over the next decade, both due to his artwork and his talent for self-promotion. During an Edo festival in 1804, he created a portrait of the Buddhist prelate Daruma, founder of Zen, said to be 600 feet (180 m) long using a broom and buckets full of ink.[citation needed] Another story places him in the court of the shogun Tokugawa Ienari, invited there to compete with another artist who practised more traditional brush stroke painting. Hokusai's painting, created in front of the Shōgun, consisted of painting a blue curve on paper, then chasing across it a chicken whose feet had been dipped in red paint. He described the painting to the Shōgun as a landscape showing the Tatsuta River with red maple leaves floating in it, winning the competition.[11]
In 1807, Hokusai collaborated with the popular novelist Takizawa Bakin on a series of illustrated books. The two did not get along due to artistic differences, and their collaboration ended during work on their fourth book. The publisher, given the choice between keeping Hokusai or Bakin on the project, opted to keep Hokusai, emphasizing the importance of illustrations in printed works of the period.[12]
Hokusai paid close attention to the production of his work in books. Two instances are documented in letters he wrote to the publishers and block cutters involved in the production of his designs in Toshisen Ehon, a Japanese edition of an anthology of Chinese poetry. Hokusai writes to the book's publisher that the blockcutter Egawa Tomekichi, with whom Hokusai had previously worked and whom he respected, had strayed from Hokusai's style in the cutting of certain heads. Hokusai also wrote directly to another block cutter involved in the project, Sugita Kinsuke, stating that he disliked the Utagawa-school style in which Kinsuke had cut the figure's eyes and noses and that amendments would need to be made for the final prints to be true to Hokusai's style. In his letter, Hokusai includes illustrated examples of both his style of illustrating eyes and noses and the Utagawa–school style. The publisher agreed to make these alterations, even with hundreds of copies of the book already printed. To correct these details the already existing cut blocks would be corrected by use of the Umeki technique. The sections to be corrected would be removed and a prepared piece of wood inserted, into which the blockcutter would cut the revised design. Use of the Umeki technique can be detected by fine break marks bordering the inserted block. Copies in print of both the original woodblock and those made of Hokusai's requested revisions survive, printed in 1833 and 1836 respectively.[13]
In 1811, at the age of 51, Hokusai changed his name to Taito and entered the period in which he created the Hokusai Manga and various etehon, or art manuals.[6] These etehon, beginning in 1812 with Quick Lessons in Simplified Drawing, served as a convenient way to make money and attract more students. Manga (meaning random drawings) included studies in perspective.[14] The first book of Hokusai's manga, sketches or caricatures that influenced the modern form of comics known by the same name, was published in 1814. Together, his 12 volumes of manga published before 1820 and three more published posthumously include thousands of drawings of animals, religious figures, and everyday people. They often have humorous overtones and were very popular at the time.[12] Later, Hokusai created four-frame cartoons. Many of his manga also illustrate the lives of rich people in a humorous way.[15]
On 5 October 1817, at the Hongan-ji Nagoya Betsuin in Nagoya, he painted the Big Daruma on paper,[citation needed] measuring 18 × 10.8 metres,[citation needed] impressing many onlookers. For this feat he received the name "Darusen" (a shortened form of Daruma Sensei).[16][17] Although the original was destroyed in 1945, promotional handbills from that time survived and are preserved at the Nagoya City Museum. Based on studies of these, a reproduction of the painting was made at a large public event on 23 November 2017 to commemorate the 200-year anniversary of the original, using the same size, techniques and materials.[18][19]
In 1820, Hokusai changed his name yet again, this time to "Iitsu," a change which marked the start of a period in which he secured fame as an artist throughout Japan. His most famous work, Thirty-six Views of Mount Fuji, including the famous Great Wave off Kanagawa, was produced in the early 1830s. The results of Hokusai's perspectival studies in Manga can be seen here in The Great Wave off Kanagawa where he uses what would have been seen as a western perspective to represent depth and volume.[14] It proved so popular that Hokusai later added ten more prints to the series. Among the other popular series of prints he published during this time are A Tour of the Waterfalls of the Provinces, Oceans of Wisdom and Unusual Views of Celebrated Bridges in the Provinces.[20] He also began producing a number of detailed individual images of flowers and birds, including the extraordinarily detailed Poppies and Flock of Chickens.[21]
The next period, beginning in 1834, saw Hokusai working under the name "Gakyō Rōjin" (画狂老人; "The Old Man Mad About Art").[22] It was at this time that Hokusai produced One Hundred Views of Mount Fuji, another significant landscape series.[23]
In the postscript to this work, Hokusai writes:
From the age of six, I had a passion for copying the form of things and since the age of fifty I have published many drawings, yet of all I drew by my seventieth year there is nothing worth taking into account. At seventy-three years I partly understood the structure of animals, birds, insects and fishes, and the life of grasses and plants. And so, at eighty-six I shall progress further; at ninety I shall even further penetrate their secret meaning, and by one hundred I shall perhaps truly have reached the level of the marvellous and divine. When I am one hundred and ten, each dot, each line will possess a life of its own.[24]
In 1839, a fire destroyed Hokusai's studio and much of his work. By this time, his career was beginning to fade as younger artists such as Andō Hiroshige became increasingly popular. At the age of 83, Hokusai traveled to Obuse in Shinano Province (now Nagano Prefecture) at the invitation of a wealthy farmer, Takai Kozan, where he stayed for several years.[25] During his time in Obuse, he created several masterpieces, including the Masculine Wave and the Feminine Wave.[25] Hokusai never stopped painting and completed Ducks in a Stream at the age of 87.[26]
Constantly seeking to produce better work, he apparently exclaimed on his deathbed, "If only Heaven will give me just another ten years ... Just another five more years, then I could become a real painter." He died on 10 May 1849 (18th day of the 4th month of the 2nd year of the Kaei era by the old calendar), and was buried at the Seikyō-ji in Tokyo (Taito Ward).[6]
Dragon on the Higashimachi Festival Float, Obuse, 1844.
Feminine Wave, painted while living in Obuse, 1845.
The Dragon of Smoke Escaping from Mt Fuji
Hokusai also created erotic art, called shunga in Japanese. Most shunga are a type of ukiyo-e, usually executed in woodblock print format.[27] Translated literally, the Japanese word shunga means "picture of spring"; "spring" is a common euphemism for sex.
Hokusai had a long career, but he produced most of his important work after age 60. His most popular work is the ukiyo-e series Thirty-six Views of Mount Fuji, which was created between 1826 and 1833. It actually consists of 46 prints (10 of them added after initial publication).[3]
In addition, Hokusai is responsible for the 1834 One Hundred Views of Mount Fuji (富嶽百景, Fugaku Hyakkei), a work which "is generally considered the masterpiece among his landscape picture books".[3] His ukiyo-e transformed the art form from a style of portraiture focused on the courtesans and actors popular during the Edo period in Japan's cities into a much broader style of art that focused on landscapes, plants, and animals.[8]
Both Hokusai's choice of art name and frequent depiction of Mount Fuji stem from his religious beliefs. The name Hokusai (北斎) means "North Studio (room)", an abbreviation of Hokushinsai (北辰際) or "North Star Studio". Hokusai was a member of the Nichiren sect of Buddhism, who see the North Star as associated with the deity Myōken (妙見菩薩).[3]
Mount Fuji has traditionally been linked with eternal life. This belief can be traced to The Tale of the Bamboo Cutter, where a goddess deposits the elixir of life on the peak. As Henry Smith expounds:
Thus from an early time, Mt. Fuji was seen as the source of the secret of immortality, a tradition that was at the heart of Hokusai's own obsession with the mountain.[2]
The largest of Hokusai's works is the 15-volume collection Hokusai Manga (北斎漫画), published in 1814. The work is crammed with nearly 4,000 sketches of animals, people, objects, etc.[3] Despite the name, the sketches cannot really be considered the precursor to modern manga, as the style and content of Hokusai's Manga differ from the story-based comic-book style of modern manga.[3]
Fine Wind, Clear Morning, from Thirty-six Views of Mount Fuji
Kirifuri waterfall at Kurokami Mountain in Shimotsuke, from A Tour of Japanese Waterfalls
Cuckoo and Azaleas, 1834, from the Small Flower series
The Ghost of Oiwa, from One Hundred Ghost Tales
Courtesan asleep
Still Life
The Yodo River [Moon], from Snow, Moon, Blossoms
Tenma Bridge in Setsu Province, from Rare Views of Famous Japanese Bridges
Chōshi in Shimosha, from One Thousand Images of the Sea
Fuji From Goten Yama[28]
Lake Suwa in Shinano Province, from Rare Views of Famous Landscapes
Carp Leaping up a Cascade
The Strong Oi Pouring Sake
Yoshino Waterfalls, where Yoshitsune Washed his Horse, from A Tour of Japanese Waterfalls
Hokusai had achievements in various fields as an artist. He made designs for book illustrations, woodblock prints, sketches, and paintings for over 70 years.[29] Hokusai was an early experimenter with western linear perspective among Japanese artists.[30] Hokusai himself was influenced by Sesshū Tōyō and other styles of Chinese painting.[31] His influence stretched across the globe to his western contemporaries in nineteenth-century Europe through Japonism, which started with a craze for collecting Japanese art, particularly ukiyo-e. Some of the first samples were to be seen in Paris when, in about 1856, the French artist Félix Bracquemond first came across a copy of the sketchbook Hokusai Manga at the workshop of his printer.[citation needed] Hokusai influenced Art Nouveau, or Jugendstil in Germany, and the larger Impressionism movement, with themes echoing his work appearing in the work of Claude Monet and Pierre-Auguste Renoir. According to the Brooklyn Rail, "many artists collected his woodcuts: Degas, Gauguin, Klimt, Franz Marc, August Macke, Manet, and van Gogh."[32] Hermann Obrist's whiplash motif, or Peitschenhieb, which came to exemplify the new movement, is visibly influenced by Hokusai's work.
Even after his death, exhibitions of his artworks continue to grow in number. In 2005, the Tokyo National Museum held a Hokusai exhibition that drew the largest number of visitors of any exhibit there that year.[33] Several paintings from the Tokyo exhibition were also exhibited in the United Kingdom. In 2017, the British Museum held the first exhibition of artworks from Hokusai's later years, including 'The Great Wave'.[34]
Hokusai inspired the Hugo Award–winning short story by science fiction author Roger Zelazny, "24 Views of Mt. Fuji, by Hokusai", in which the protagonist tours the area surrounding Mount Fuji, stopping at locations painted by Hokusai. A 2011 book on mindfulness closes with the poem "Hokusai Says" by Roger Keyes, preceded with the explanation that "[s]ometimes poetry captures the soul of an idea better than anything else."[35]
In the 1985 Encyclopaedia Britannica, Richard Lane characterizes Hokusai as "since the later 19th century [having] impressed Western artists, critics and art lovers alike, more, possibly, than any other single Asian artist".[36]
'Store Selling Picture Books and Ukiyo-e' by Hokusai shows how ukiyo-e of the time were actually sold: prints were sold at local shops, where ordinary people could buy them. Unusually for this image, Hokusai used a hand-colored approach instead of several separate woodblocks.[1]
For readers who want more information on specific works of art by Hokusai, these particular works are recommended.
Monographs dedicated to Hokusai's artworks:
en/2592.html.txt
ADDED
@@ -0,0 +1,99 @@
Hollywood is a neighborhood in the central region of Los Angeles, California. Its name has come to be a shorthand reference for the U.S. film industry and the people associated with it. Many of its studios such as Paramount Pictures, Warner Bros., and Universal Pictures were founded there and Paramount still has its studios there.
Hollywood was incorporated as a municipality in 1903.[2][3] It was consolidated with the city of Los Angeles in 1910, and soon thereafter a prominent film industry emerged, eventually becoming the most recognizable in the world.[4][5]
H.J. Whitley, a real estate developer, had previously started more than a hundred towns across the western United States.[6][7] Whitley arranged to buy the 480-acre (190 ha) E.C. Hurd ranch; the two men agreed on a price and shook hands on the deal. Whitley shared his plans for the new town with General Harrison Gray Otis, publisher of the Los Angeles Times, and Ivar Weid, a prominent businessman in the area.[citation needed]
Daeida Wilcox, who donated land to help in the development of Hollywood, learned of the name Hollywood from Ivar Weid, her neighbor in Holly Canyon (now Lake Hollywood) and a prominent investor and friend of Whitley's.[8][9] She recommended the same name to her husband, Harvey H. Wilcox, who had purchased 120 acres on February 1, 1887. It was not until August 1887 that Wilcox decided to use the name, filing it with the Los Angeles County Recorder's office on a deed and parcel map of the property.
By 1900, the region had a post office, newspaper, hotel, and two markets. Los Angeles, with a population of 102,479, lay 10 miles (16 km) east through the vineyards, barley fields, and citrus groves. A single-track streetcar line ran down the middle of Prospect Avenue from it, but service was infrequent and the trip took two hours. The old citrus fruit-packing house was converted into a livery stable, improving transportation for the inhabitants of Hollywood.
The Hollywood Hotel was opened in 1902 by Whitley, who was a president of the Los Pacific Boulevard and Development Company. Having finally acquired the Hurd ranch and subdivided it, Whitley built the hotel to attract land buyers. Flanking the west side of Highland Avenue, the structure fronted on Prospect Avenue, which, still a dusty, unpaved road, was regularly graded and graveled. The hotel was to become internationally known and was the center of the civic and social life and home of the stars for many years.[10]
Whitley's company developed and sold one of the early residential areas, the Ocean View Tract.[11] Whitley did much to promote the area. He paid thousands of dollars to bring electric lighting, which ran for several blocks down Prospect Avenue, and to build a bank as well as a road into the Cahuenga Pass. Whitley's land was centered on Highland Avenue.[12][13] His 1918 development, Whitley Heights, was named for him.
Hollywood was incorporated as a municipality on November 14, 1903, by a vote of 88 for and 77 against. On January 30, 1904, the voters in Hollywood decided, by a vote of 113 to 96, to banish the sale of liquor within the city, except for medicinal purposes. Neither hotels nor restaurants were allowed to serve wine or liquor before or after meals.[14]
In 1910, the city voted for a merger with Los Angeles in order to secure an adequate water supply and to gain access to the L.A. sewer system. With annexation, the name of Prospect Avenue was changed to Hollywood Boulevard and all the street numbers were also changed.[15]
By 1912, major motion-picture companies had set up production near or in Los Angeles.[16] In the early 1900s, most motion picture patents were held by Thomas Edison's Motion Picture Patents Company in New Jersey, and filmmakers were often sued to stop their productions. To escape this, filmmakers began moving out west to Los Angeles, where attempts to enforce Edison's patents were easier to evade.[17] Also, the weather was ideal and there was quick access to various settings. Los Angeles became the capital of the film industry in the United States.[18] The mountains, plains and low land prices made Hollywood a good place to establish film studios.[19]
Director D. W. Griffith was the first to make a motion picture in Hollywood. His 17-minute short film In Old California (1910) was filmed for the Biograph Company.[20][21][22] Although Hollywood banned movie theaters—of which it had none—before annexation that year, Los Angeles had no such restriction.[23] The first film by a Hollywood studio, Nestor Motion Picture Company, was shot on October 26, 1911.[24] The H. J. Whitley home was used as its set, and the unnamed movie was filmed in the middle of their groves at the corner of Whitley Avenue and Hollywood Boulevard.[25][26]
The first studio in Hollywood, the Nestor Company, was established by the New Jersey–based Centaur Company in a roadhouse at 6121 Sunset Boulevard (the corner of Gower) in October 1911.[27] Four major film companies – Paramount, Warner Bros., RKO, and Columbia – had studios in Hollywood, as did several minor companies and rental studios. In the 1920s, Hollywood's film industry was the fifth-largest industry in the nation.[18] By the 1930s, Hollywood studios had become fully vertically integrated, controlling production, distribution and exhibition, which enabled Hollywood to produce 600 films per year.[19]
Hollywood became known as Tinseltown[28] and the "dream factory"[19] because of the glittering image of the movie industry. Hollywood has since become a major center for film study in the United States.
A large sign reading HOLLYWOODLAND was built in the Hollywood Hills in 1923 to advertise a housing development. In 1949, the Hollywood Chamber of Commerce entered a contract with the City of Los Angeles to repair and rebuild the sign. The agreement stipulated that LAND be removed to spell HOLLYWOOD so the sign would now refer to the district, rather than the housing development.[29]
During the early 1950s, the Hollywood Freeway was constructed through the northeast corner of Hollywood.
The Capitol Records Building on Vine Street, just north of Hollywood Boulevard, was built in 1956, and the Hollywood Walk of Fame was created in 1958 as a tribute to artists and other significant contributors to the entertainment industry. The official opening was on February 8, 1960.[30][31][32]
The Hollywood Boulevard Commercial and Entertainment District was listed in the National Register of Historic Places in 1985.
In June 1999, the Hollywood extension of the Los Angeles County Metro Rail Red Line subway opened from Downtown Los Angeles to the San Fernando Valley, with stops along Hollywood Boulevard at Western Avenue (Hollywood/Western Metro station), Vine Street (Hollywood/Vine Metro station), and Highland Avenue (Hollywood/Highland Metro station).
The Dolby Theatre, which opened in 2001 as the Kodak Theatre at the Hollywood & Highland Center mall, is the home of the Oscars. The mall is located where the historic Hollywood Hotel once stood.
After the neighborhood underwent years of serious decline in the 1980s, many landmarks were threatened with demolition.[33] Columbia Square, at the northwest corner of Sunset Boulevard and Gower Street, is part of the ongoing rebirth of Hollywood. The Art Deco-style studio complex completed in 1938, which was once the Hollywood headquarters for CBS, became home to a new generation of broadcasters when cable television networks MTV, Comedy Central, BET and Spike TV consolidated their offices here in 2014 as part of a $420-million office, residential and retail complex.[34] Since 2000, Hollywood has been increasingly gentrified due to revitalization by private enterprise and public planners.[35][36][37] Over 1,200 hotel rooms were added in the Hollywood area between 2001 and 2016. Four thousand new apartments and over thirty low- to mid-rise development projects were approved in 2019.[38]
In 2002, some Hollywood voters began a campaign for the area to secede from Los Angeles and become a separate municipality. In June of that year, the Los Angeles County Board of Supervisors placed secession referendums for both Hollywood and the San Fernando Valley on the ballot. To pass, they required the approval of a majority of voters in the proposed new municipality as well as a majority of voters in all of Los Angeles. In the November election, both measures failed by wide margins in the citywide vote.[39]
In 2020, businesses in Hollywood's Gower Gulch, including the strip mall, suffered from damage and looting following the death of George Floyd.[40]
According to the Mapping L.A. project of the Los Angeles Times, Hollywood is flanked by Hollywood Hills to the north, Los Feliz to the northeast, East Hollywood or Virgil Village to the east, Larchmont and Hancock Park to the south, Fairfax to the southwest, West Hollywood to the west and Hollywood Hills West to the northwest.[41]
Street limits of the Hollywood neighborhood are: north, Hollywood Boulevard from La Brea Avenue to the east boundary of Wattles Garden Park and Franklin Avenue between Bonita and Western avenues; east, Western Avenue; south, Melrose Avenue, and west, La Brea Avenue or the West Hollywood city line.[42][43]
In 1918, H. J. Whitley commissioned architect A. S. Barnes to design Whitley Heights as a Mediterranean-style village on the hills above Hollywood Boulevard, and it became the first celebrity community.[44][45][46]
Other areas within Hollywood are Franklin Village, Little Armenia, Spaulding Square, Thai Town,[42] and Yucca Corridor.[47][48]
In 1994, Hollywood, Alabama, and 10 other towns named Hollywood successfully fought an attempt by the Hollywood Chamber of Commerce to trademark the name and force same-named communities to pay royalties to it.[49] A key point was that Alabama's Hollywood was the first incorporated Hollywood in the nation, whereas the one in California did not incorporate until 1903, six years after Alabama's. In addition, it merged with Los Angeles in 1910 and was no longer identified as a separate city.
The 2000 U.S. census counted 77,818 residents in the 3.51-square-mile (9.1 km2) Hollywood neighborhood—an average of 22,193 people per square mile (8,569 per km2), the seventh-densest neighborhood in all of Los Angeles County. In 2008 the city estimated that the population had increased to 85,489. The median age for residents was 31, about the city's average.[42]
Hollywood was said to be "highly diverse" when compared to the city at large. The ethnic breakdown in 2000 was 42.2% Latino or Hispanic, 41% Non-Hispanic White, 7.1% Asian, 5.2% black, and 4.5% other.[42] Mexico (21.3%) and Guatemala (13%) were the most common places of birth for the 53.8% of the residents who were born abroad, a figure that was considered high for the city as a whole.[42]
The median household income in 2008 dollars was $33,694, considered low for Los Angeles. The average household size of 2.1 people was also lower than the city norm. Renters occupied 92.4% of the housing units, and home- or apartment owners the rest.[42]
The percentages of never-married men (55.1%), never-married women (39.8%) and widows (9.6%) were among the county's highest. There were 2,640 families headed by single parents, about average for Los Angeles.[42]
In 2000, there were 2,828 military veterans, or 4.5%, a low rate for the city as a whole.[42]
These were the ten neighborhoods or cities in Los Angeles County with the highest population densities, according to the 2000 census, with the population per square mile:[51]
KNX was the last radio station to broadcast from Hollywood before it left CBS Columbia Square for a studio in the Miracle Mile in 2005.[52]
On January 22, 1947, the first commercial television station west of the Mississippi River, KTLA, began operating in Hollywood. In December of that year, The Public Prosecutor became the first network television series to be filmed in Hollywood. Television stations KTLA and KCET, both on Sunset Boulevard, are the last broadcasters (television or radio) with Hollywood addresses, but KCET has since sold its studios on Sunset and plans to move to another location. KNBC moved in 1962 from the former NBC Radio City Studios at the northeast corner of Sunset Boulevard and Vine Street to NBC Studios in Burbank. KTTV moved in 1996 from its former home at Metromedia Square on Sunset Boulevard to West Los Angeles, and KCOP left its home on La Brea Avenue to join KTTV on the Fox lot. KCBS-TV and KCAL-TV moved from their longtime home at CBS Columbia Square on Sunset Boulevard to a new facility at CBS Studio Center in Studio City.
As a neighborhood within the Los Angeles city limits, Hollywood does not have its own municipal government. There was an official, appointed by the Hollywood Chamber of Commerce, who served as an honorary "Mayor of Hollywood" for ceremonial purposes only. Johnny Grant held this position from 1980 until his death on January 9, 2008.[53]
The Los Angeles Police Department is responsible for police services. The Hollywood police station is at 1358 N. Wilcox Ave.
The Los Angeles Fire Department operates four fire stations – Stations 27, 41, 52, and 82 – in the area.
The Los Angeles County Department of Health Services operates the Hollywood-Wilshire Health Center in Hollywood.[54]
The United States Postal Service operates the Hollywood Post Office,[55] the Hollywood Pavilion Post Office,[56] and the Sunset Post Office.[57]
Hollywood is included within the Hollywood United Neighborhood Council (HUNC),[58] the Hollywood Hills West Neighborhood Council,[59][60] and the Hollywood Studio District Neighborhood Council.[61][62] Neighborhood Councils cast advisory votes on such issues as zoning, planning, and other community issues. The council members are voted in by stakeholders, generally defined as anyone living, working, owning property, or belonging to an organization within the boundaries of the council.[63]
Hollywood residents aged 25 and older holding a four-year degree amounted to 28% of the population in 2000, about the same as in the county at large.[42]
The Will and Ariel Durant Branch, John C. Fremont Branch, and the Frances Howard Goldwyn – Hollywood Regional Branch of the Los Angeles Public Library are in Hollywood.
Public schools are operated by the Los Angeles Unified School District (LAUSD).
Schools in Hollywood include:
The Grauman's Chinese Theatre before 2007
Hollywood Walk of Fame
Dolby Theatre
Crossroads of the World
en/2593.html.txt
ADDED
@@ -0,0 +1,248 @@
The Holocaust, also known as the Shoah,[c] was the World War II genocide of the European Jews. Between 1941 and 1945, across German-occupied Europe, Nazi Germany and its collaborators systematically murdered some six million Jews, around two-thirds of Europe's Jewish population.[a][d] The murders were carried out in pogroms and mass shootings; by a policy of extermination through work in concentration camps; and in gas chambers and gas vans in German extermination camps, chiefly Auschwitz, Bełżec, Chełmno, Majdanek, Sobibór, and Treblinka in occupied Poland.[4]
Germany implemented the persecution in stages. Following Adolf Hitler's appointment as Chancellor on 30 January 1933, the regime built a network of concentration camps in Germany for political opponents and those deemed "undesirable", starting with Dachau on 22 March 1933.[5] After the passing of the Enabling Act on 24 March,[6] which gave Hitler plenary powers, the government began isolating Jews from civil society; this included boycotting Jewish businesses in April 1933 and enacting the Nuremberg Laws in September 1935. On 9–10 November 1938, eight months after Germany annexed Austria, Jewish businesses and other buildings were ransacked or set on fire throughout Germany and Austria during what became known as Kristallnacht (the "Night of Broken Glass"). After Germany invaded Poland in September 1939, triggering World War II, the regime set up ghettos to segregate Jews. Eventually thousands of camps and other detention sites were established across German-occupied Europe.
The segregation of Jews in ghettos culminated in the policy of extermination the Nazis called the "Final Solution to the Jewish Question", discussed by senior Nazi officials at the Wannsee Conference in Berlin in January 1942. As German forces captured territories in the East, all anti-Jewish measures were radicalized. Under the coordination of the SS, with directions from the highest leadership of the Nazi Party, killings were committed within Germany itself, throughout occupied Europe, and within territories controlled by Germany's allies. Paramilitary death squads called Einsatzgruppen, in cooperation with the German Army and local collaborators, murdered around 1.3 million Jews in mass shootings and pogroms between 1941 and 1945. By mid-1942, victims were being deported from ghettos across Europe in sealed freight trains to extermination camps where, if they survived the journey, they were gassed, worked or beaten to death, or killed by disease or during death marches. The killing continued until the end of World War II in Europe in May 1945.
The European Jews were targeted for extermination as part of a larger event during the Holocaust era (1933–1945),[7][8] in which Germany and its collaborators persecuted and murdered other groups, including ethnic Poles, Soviet civilians and prisoners of war, the Roma, the handicapped, political and religious dissidents, and gay men.[e] The death toll of these other groups is thought to be over 11 million.[b]
The term holocaust, first used in 1895 by the New York Times to describe the massacre of Armenian Christians by Ottoman Muslims,[9] comes from the Greek: ὁλόκαυστος, romanized: holókaustos; ὅλος hólos, "whole" + καυστός kaustós, "burnt offering".[f] The biblical term shoah (Hebrew: שׁוֹאָה), meaning "destruction", became the standard Hebrew term for the murder of the European Jews. According to Haaretz, the writer Yehuda Erez may have been the first to describe events in Germany as the shoah. Davar and later Haaretz both used the term in September 1939.[12][g] Yom HaShoah became Israel's Holocaust Remembrance Day in 1951.[14]
On 3 October 1941 the American Hebrew used the phrase "before the Holocaust", apparently to refer to the situation in France,[15] and in May 1943 the New York Times, discussing the Bermuda Conference, referred to the "hundreds of thousands of European Jews still surviving the Nazi Holocaust".[16] In 1968 the Library of Congress created a new category, "Holocaust, Jewish (1939–1945)".[17] The term was popularized in the United States by the NBC mini-series Holocaust (1978), about a fictional family of German Jews,[18] and in November that year the President's Commission on the Holocaust was established.[19] As non-Jewish groups began to include themselves as Holocaust victims, many Jews chose to use the Hebrew terms Shoah or Churban.[20][h] The Nazis used the phrase "Final Solution to the Jewish Question" (German: die Endlösung der Judenfrage).[22]
Most Holocaust historians define the Holocaust as the genocide of the European Jews by Nazi Germany and its collaborators between 1941 and 1945.[a]
Michael Gray, a specialist in Holocaust education,[30] offers three definitions: (a) "the persecution and murder of Jews by the Nazis and their collaborators between 1933 and 1945", which views Kristallnacht in 1938 as an early phase of the Holocaust; (b) "the systematic mass murder of the Jews by the Nazi regime and its collaborators between 1941 and 1945", which recognizes the policy shift in 1941 toward extermination; and (c) "the persecution and murder of various groups by the Nazi regime and its collaborators between 1933 and 1945", which includes all the Nazis' victims, a definition that fails, Gray writes, to acknowledge that only the Jews were singled out for annihilation.[31] Donald Niewyk and Francis Nicosia, in The Columbia Guide to the Holocaust (2000), favor a definition that focuses on the Jews, Roma and handicapped: "the systematic, state-sponsored murder of entire groups determined by heredity".[32]
The United States Holocaust Memorial Museum distinguishes between the Holocaust (the murder of six million Jews) and "the era of the Holocaust", which began when Hitler became Chancellor of Germany in January 1933.[33] Victims of the era of the Holocaust include those the Nazis viewed as inherently inferior (chiefly Slavs, the Roma and the handicapped), and those targeted because of their beliefs or behavior (such as Jehovah's Witnesses, communists and homosexuals).[34] Peter Hayes writes that the persecution of these groups was less consistent than that of the Jews; the Nazis' treatment of the Slavs, for example, consisted of "enslavement and gradual attrition", while some Slavs (Hayes lists Bulgarians, Croats, Slovaks and some Ukrainians) were favored.[24] Against this, Hitler regarded the Jews as what Dan Stone calls "a Gegenrasse: a 'counter-race' ... not really human at all".[e]
The logistics of the mass murder turned Germany into what Michael Berenbaum called a "genocidal state".[36] Eberhard Jäckel wrote in 1986 during the German Historikerstreit—a dispute among historians about the uniqueness of the Holocaust and its relationship with the crimes of the Soviet Union—that it was the first time a state had thrown its power behind the idea that an entire people should be wiped out.[i] Anyone with three or four Jewish grandparents was to be exterminated,[38] and complex rules were devised to deal with Mischlinge ("mixed breeds").[39] Bureaucrats identified who was a Jew, confiscated property, and scheduled trains to deport them. Companies fired Jews and later used them as slave labor. Universities dismissed Jewish faculty and students. German pharmaceutical companies tested drugs on camp prisoners; other companies built the crematoria.[36] As prisoners entered the death camps, they surrendered all personal property,[40] which was catalogued and tagged before being sent to Germany for reuse or recycling.[41] Through a concealed account, the German National Bank helped launder valuables stolen from the victims.[42]
Dan Stone writes that since the opening of archives following the fall of the communist states in Eastern Europe, it has become increasingly clear that the Holocaust was a pan-European phenomenon, a series of "Holocausts" impossible to conduct without local collaborators; without them, the Germans could not have extended the killing across most of the continent.[43][j][45] According to Donald Bloxham, in many parts of Europe "extreme collective violence was becoming an accepted measure of resolving identity crises".[46] Christian Gerlach writes that non-Germans "not under German command" killed 5–6 percent of the six million, but that their involvement was crucial in other ways.[47]
The industrialization and scale of the murder was unprecedented. Killings were systematically conducted in virtually all areas of occupied Europe—more than 20 occupied countries.[48] Nearly three million Jews in occupied Poland and between 700,000 and 2.5 million Jews in the Soviet Union were killed. Hundreds of thousands more died in the rest of Europe.[49] Some Christian churches defended converted Jews, but otherwise, Saul Friedländer wrote in 2007: "Not one social group, not one religious community, not one scholarly institution or professional association in Germany and throughout Europe declared its solidarity with the Jews ..."[50]
Medical experiments conducted on camp inmates by the SS were another distinctive feature.[51] At least 7,000 prisoners were subjected to experiments; most died as a result, during the experiments or later.[52] Twenty-three senior physicians and other medical personnel were charged at Nuremberg, after the war, with crimes against humanity. They included the head of the German Red Cross, tenured professors, clinic directors, and biomedical researchers.[53] Experiments took place at Auschwitz, Buchenwald, Dachau, Natzweiler-Struthof, Neuengamme, Ravensbrück, Sachsenhausen, and elsewhere. Some dealt with sterilization of men and women, the treatment of war wounds, ways to counteract chemical weapons, research into new vaccines and drugs, and the survival of harsh conditions.[52]
The most notorious physician was Josef Mengele, an SS officer who became the Auschwitz camp doctor on 30 May 1943.[54] Interested in genetics[54] and keen to experiment on twins, he would pick out subjects from the new arrivals during "selection" on the ramp, shouting "Zwillinge heraus!" (twins step forward!).[55] They would be measured, killed, and dissected. One of Mengele's assistants said in 1946 that he was told to send organs of interest to the directors of the "Anthropological Institute in Berlin-Dahlem". This is thought to refer to Mengele's academic supervisor, Otmar Freiherr von Verschuer, director from October 1942 of the Kaiser Wilhelm Institute of Anthropology, Human Heredity, and Eugenics in Berlin-Dahlem.[56][k]
There were around 9.5 million Jews in Europe in 1933.[60] They were most heavily concentrated in the east: the pre-war population was 3.5 million in Poland, 3 million in the Soviet Union, nearly 800,000 in Romania, and 700,000 in Hungary. Germany had over 500,000.[49]
Throughout the Middle Ages in Europe, Jews were subjected to antisemitism based on Christian theology, which blamed them for killing Jesus. Even after the Reformation, Catholicism and Lutheranism continued to persecute Jews, accusing them of blood libels and subjecting them to pogroms and expulsions.[61] The second half of the 19th century saw the emergence in the German empire and Austria-Hungary of the völkisch movement, developed by such thinkers as Houston Stewart Chamberlain and Paul de Lagarde. The movement embraced a pseudo-scientific racism that viewed Jews as a race whose members were locked in mortal combat with the Aryan race for world domination.[62] These ideas became commonplace throughout Germany; the professional classes adopted an ideology that did not see humans as racial equals with equal hereditary value.[63] The Nazi Party (the Nationalsozialistische Deutsche Arbeiterpartei or National Socialist German Workers' Party) originated as an offshoot of the völkisch movement, and it adopted that movement's antisemitism.[64]
After World War I (1914–1918), many Germans did not accept that their country had been defeated, which gave birth to the stab-in-the-back myth. This insinuated that it was disloyal politicians, chiefly Jews and communists, who had orchestrated Germany's surrender. Inflaming the anti-Jewish sentiment was the apparent over-representation of Jews in the leadership of communist revolutionary governments in Europe, such as Ernst Toller, head of a short-lived revolutionary government in Bavaria. This perception contributed to the canard of Jewish Bolshevism.[65]
Early antisemites in the Nazi Party included Dietrich Eckart, publisher of the Völkischer Beobachter, the party's newspaper, and Alfred Rosenberg, who wrote antisemitic articles for it in the 1920s. Rosenberg's vision of a secretive Jewish conspiracy ruling the world would influence Hitler's views of Jews by making them the driving force behind communism.[66] Central to Hitler's world view was the idea of expansion and Lebensraum (living space) in Eastern Europe for German Aryans, a policy of what Doris Bergen called "race and space". Open about his hatred of Jews, he subscribed to common antisemitic stereotypes.[67] From the early 1920s onwards, he compared the Jews to germs and said they should be dealt with in the same way. He viewed Marxism as a Jewish doctrine, said he was fighting against "Jewish Marxism", and believed that Jews had created communism as part of a conspiracy to destroy Germany.[68]
With the appointment in January 1933 of Adolf Hitler as Chancellor of Germany and the Nazis' seizure of power, German leaders proclaimed the rebirth of the Volksgemeinschaft ("people's community").[70] Nazi policies divided the population into two groups: the Volksgenossen ("national comrades") who belonged to the Volksgemeinschaft, and the Gemeinschaftsfremde ("community aliens") who did not. Enemies were divided into three groups: the "racial" or "blood" enemies, such as the Jews and Roma; political opponents of Nazism, such as Marxists, liberals, Christians, and the "reactionaries" viewed as wayward "national comrades"; and moral opponents, such as gay men, the work-shy, and habitual criminals. The latter two groups were to be sent to concentration camps for "re-education", with the aim of eventual absorption into the Volksgemeinschaft. "Racial" enemies could never belong to the Volksgemeinschaft; they were to be removed from society.[71]
Before and after the March 1933 Reichstag elections, the Nazis intensified their campaign of violence against opponents,[72] setting up concentration camps for extrajudicial imprisonment.[73] One of the first, at Dachau, opened on 22 March 1933.[74] Initially the camp contained mostly Communists and Social Democrats.[75] Other early prisons were consolidated by mid-1934 into purpose-built camps outside the cities, run exclusively by the SS.[76] The camps served as a deterrent by terrorizing Germans who did not support the regime.[77]
Throughout the 1930s, the legal, economic, and social rights of Jews were steadily restricted.[78] On 1 April 1933, there was a boycott of Jewish businesses.[79] On 7 April 1933, the Law for the Restoration of the Professional Civil Service was passed, which excluded Jews and other "non-Aryans" from the civil service.[80] Jews were barred from practicing law, serving as editors or proprietors of newspapers, joining the Journalists' Association, and owning farms.[81] In Silesia, in March 1933, a group of men entered the courthouse and beat up Jewish lawyers; Friedländer writes that, in Dresden, Jewish lawyers and judges were dragged out of courtrooms during trials.[82] Jewish students were restricted by quotas from attending schools and universities.[80] Jewish businesses were targeted for closure or "Aryanization", the forcible sale to Germans; of the approximately 50,000 Jewish-owned businesses in Germany in 1933, about 7,000 were still Jewish-owned in April 1939. Works by Jewish composers,[83] authors, and artists were excluded from publications, performances, and exhibitions.[84] Jewish doctors were dismissed or urged to resign. The Deutsches Ärzteblatt (a medical journal) reported on 6 April 1933: "Germans are to be treated by Germans only."[85]
The economic strain of the Great Depression led Protestant charities and some members of the German medical establishment to advocate compulsory sterilization of the "incurable" mentally and physically handicapped,[87] people the Nazis called Lebensunwertes Leben (life unworthy of life).[88] On 14 July 1933, the Law for the Prevention of Hereditarily Diseased Offspring (Gesetz zur Verhütung erbkranken Nachwuchses), the Sterilization Law, was passed.[89][90] The New York Times reported on 21 December that year: "400,000 Germans to be sterilized".[91] There were 84,525 applications from doctors in the first year. The courts reached a decision in 64,499 of those cases; 56,244 were in favor of sterilization.[92] Estimates for the number of involuntary sterilizations during the whole of the Third Reich range from 300,000 to 400,000.[93]
In October 1939 Hitler signed a "euthanasia decree" backdated to 1 September 1939 that authorized Reichsleiter Philipp Bouhler, the chief of Hitler's Chancellery, and Karl Brandt, Hitler's personal physician, to carry out a program of involuntary euthanasia. After the war this program came to be known as Aktion T4,[94] named after Tiergartenstraße 4, the address of a villa in the Berlin borough of Tiergarten, where the various organizations involved were headquartered.[95] T4 was mainly directed at adults, but the euthanasia of children was also carried out.[96] Between 1939 and 1941, 80,000 to 100,000 mentally ill adults in institutions were killed, as were 5,000 children and 1,000 Jews, also in institutions. There were also dedicated killing centers, where the deaths were estimated at 20,000, according to Georg Renno, deputy director of Schloss Hartheim, one of the euthanasia centers, or 400,000, according to Franz Ziereis, commandant of the Mauthausen concentration camp.[97] Overall, the number of mentally and physically handicapped murdered was about 150,000.[98]
Although not ordered to take part, psychiatrists and many psychiatric institutions were involved in the planning and carrying out of Aktion T4.[99] In August 1941, after protests from Germany's Catholic and Protestant churches, Hitler cancelled the T4 program,[100] although the handicapped continued to be killed until the end of the war.[98] The medical community regularly received bodies for research; for example, the University of Tübingen received 1,077 bodies from executions between 1933 and 1945. The German neuroscientist Julius Hallervorden received 697 brains from one hospital between 1940 and 1944: "I accepted these brains of course. Where they came from and how they came to me was really none of my business."[101]
On 15 September 1935, the Reichstag passed the Reich Citizenship Law and the Law for the Protection of German Blood and German Honor, known as the Nuremberg Laws. The former said that only those of "German or kindred blood" could be citizens. Anyone with three or more Jewish grandparents was classified as a Jew.[103] The second law said: "Marriages between Jews and subjects of the state of German or related blood are forbidden." Sexual relationships between them were also criminalized; Jews were not allowed to employ German women under the age of 45 in their homes.[104][103] The laws referred to Jews but applied equally to the Roma and black Germans. Although other European countries—Bulgaria, Croatia, Hungary, Italy, Romania, Slovakia, and Vichy France—passed similar legislation,[103] Gerlach notes that "Nazi Germany adopted more nationwide anti-Jewish laws and regulations (about 1,500) than any other state."[105]
By the end of 1934, 50,000 German Jews had left Germany,[106] and by the end of 1938, approximately half the German Jewish population had left,[107] among them the conductor Bruno Walter, who fled after being told that the hall of the Berlin Philharmonic would be burned down if he conducted a concert there.[108] Albert Einstein, who was in the United States when Hitler came to power, never returned to Germany; his citizenship was revoked and he was expelled from the Kaiser Wilhelm Society and Prussian Academy of Sciences.[109] Other Jewish scientists, including Gustav Hertz, lost their teaching positions and left the country.[110]
On 12 March 1938, Germany annexed Austria. Austrian Nazis broke into Jewish shops, stole from Jewish homes and businesses, and forced Jews to perform humiliating acts such as scrubbing the streets or cleaning toilets.[111] Jewish businesses were "Aryanized", and all the legal restrictions on Jews in Germany were imposed.[112] In August that year, Adolf Eichmann was put in charge of the Central Agency for Jewish Emigration in Vienna (Zentralstelle für jüdische Auswanderung in Wien). About 100,000 Austrian Jews had left the country by May 1939, including Sigmund Freud and his family, who moved to London.[113] The Évian Conference, attended by 32 countries, was held in France in July 1938 in an attempt to help the growing number of refugees from Germany, but aside from establishing the largely ineffectual Intergovernmental Committee on Refugees, little was accomplished, and most participating countries did not increase the number of refugees they would accept.[114]
On 7 November 1938, Herschel Grynszpan, a Polish Jew, shot the German diplomat Ernst vom Rath in the German Embassy in Paris, in retaliation for the expulsion of his parents and siblings from Germany.[115][l] When vom Rath died on 9 November, the government used his death as a pretext to instigate a pogrom against the Jews. The government claimed it was spontaneous, but in fact it had been ordered and planned by Adolf Hitler and Joseph Goebbels, although with no clear goals, according to David Cesarani. The result, he writes, was "murder, rape, looting, destruction of property, and terror on an unprecedented scale".[117]
Known as Kristallnacht (or "Night of Broken Glass"), the attacks on 9–10 November 1938 were partly carried out by the SS and SA,[118] but ordinary Germans joined in; in some areas, the violence began before the SS or SA arrived.[119] Over 7,500 Jewish shops (out of 9,000) were looted and attacked, and over 1,000 synagogues damaged or destroyed. Groups of Jews were forced by the crowd to watch their synagogues burn; in Bensheim they were made to dance around it, and in Laupheim to kneel before it.[120] At least 90 Jews died. The damage was estimated at 39 million Reichsmarks.[121] Cesarani writes that "[t]he extent of the desolation stunned the population and rocked the regime."[122] It also shocked the rest of the world. The Times of London wrote on 11 November 1938: "No foreign propagandist bent upon blackening Germany before the world could outdo the tale of burnings and beatings, of blackguardly assaults upon defenseless and innocent people, which disgraced that country yesterday. Either the German authorities were a party to this outbreak or their powers over public order and a hooligan minority are not what they are proudly claimed to be."[123]
Between 9 and 16 November, 30,000 Jews were sent to the Buchenwald, Dachau, and Sachsenhausen concentration camps.[124] Many were released within weeks; by early 1939, 2,000 remained in the camps.[125] German Jews were held collectively responsible for restitution of the damage and also had to pay an "atonement tax" of over a billion Reichsmarks. Insurance payments for damage to their property were confiscated by the government. A decree on 12 November 1938 barred Jews from most remaining occupations.[126] Kristallnacht marked the end of any sort of public Jewish activity and culture, and Jews stepped up their efforts to leave the country.[127]
Before World War II, Germany considered mass deportation from Europe of German, and later European, Jewry.[128] Among the areas considered for possible resettlement were British Palestine and, after the war began, French Madagascar,[129] Siberia, and two reservations in Poland.[130][m] Palestine was the only location to which any German resettlement plan produced results, via the Haavara Agreement between the Zionist Federation of Germany and the German government. Between November 1933 and December 1939, the agreement resulted in the emigration of about 53,000 German Jews, who were allowed to transfer RM 100 million of their assets to Palestine by buying German goods, in violation of the Jewish-led anti-Nazi boycott of 1933.[132]
When Germany invaded Poland on 1 September 1939, triggering a declaration of war from France and the UK, it gained control of an additional two million Jews, reduced to around 1.7–1.8 million in the German zone when the Soviet Union invaded from the east on 17 September.[133] The German army, the Wehrmacht, was accompanied by seven SS Einsatzgruppen ("special task forces") and an Einsatzkommando, numbering altogether 3,000 men, whose role was to deal with "all anti-German elements in hostile country behind the troops in combat". Most of the Einsatzgruppen commanders were professionals; 15 of the 25 leaders had PhDs. By 29 August, days before the invasion, they had already drawn up a list of 30,000 people to send to concentration camps. By the first week of the invasion, 200 people were being executed every day.[134]
The Germans began sending Jews from territories they had recently annexed (Austria, Czechoslovakia, and western Poland) to the central section of Poland, which they called the General Government.[135] To make it easier to control and deport them, the Jews were concentrated in ghettos in major cities.[136] The Germans planned to set up a Jewish reservation in southeast Poland around the transit camp in Nisko, but the "Nisko plan" failed, in part because it was opposed by Hans Frank, the new Governor-General of the General Government.[137] In mid-October 1940 the idea was revived, this time to be located in Lublin. Resettlement continued until January 1941 under SS officer Odilo Globocnik, but further plans for the Lublin reservation failed for logistical and political reasons.[138]
There had been anti-Jewish pogroms in Poland before the war, including in around 100 towns between 1935 and 1937,[139] and again in 1938.[140] In June and July 1941, during the Lviv pogroms in Lwów (now Lviv, Ukraine), around 6,000 Polish Jews were murdered in the streets by the Ukrainian People's Militia and local people.[141][n] Another 2,500–3,500 Jews died in mass shootings by Einsatzgruppe C.[142] During the Jedwabne pogrom on 10 July 1941, a group of Poles in Jedwabne killed the town's Jewish community, many of whom were burned alive in a barn.[143] The attack may have been engineered by the German Security Police.[144][o]
The Germans established ghettos in Poland, in the incorporated territories and General Government area, to confine Jews.[135] These were closed off from the outside world at different times and for different reasons.[146] In early 1941, the Warsaw ghetto contained 445,000 people, including 130,000 from elsewhere,[147] while the second largest, the Łódź ghetto, held 160,000.[148] Although the Warsaw ghetto contained 30 percent of the city's population, it occupied only 2.5 percent of its area, averaging over nine people per room.[149] The massive overcrowding, poor hygiene facilities and lack of food killed thousands. Over 43,000 residents died in 1941.[150]
According to a letter dated 21 September 1939 from SS-Obergruppenführer Reinhard Heydrich, head of the Reichssicherheitshauptamt (RSHA or Reich Security Head Office), to the Einsatzgruppen, each ghetto had to be run by a Judenrat, or "Jewish Council of Elders", to consist of 24 male Jews with local influence.[151] Judenräte were responsible for the ghetto's day-to-day operations, including distributing food, water, heat, medical care, and shelter. The Germans also required them to confiscate property, organize forced labor, and, finally, facilitate deportations to extermination camps.[152] The Jewish councils' basic strategy was one of trying to minimize losses by cooperating with German authorities, bribing officials, and petitioning for better conditions.[153]
Germany invaded Norway and Denmark on 9 April 1940, during Operation Weserübung. Denmark was overrun so quickly that there was no time for a resistance to form. Consequently, the Danish government stayed in power and the Germans found it easier to work through it. Because of this, few measures were taken against the Danish Jews before 1942.[154] By June 1940 Norway was completely occupied.[155] In late 1940, the country's 1,800 Jews were banned from certain occupations, and in 1941 all Jews had to register their property with the government.[156] On 26 November 1942, 532 Jews were taken by police officers, at four o'clock in the morning, to Oslo harbor, where they boarded a German ship. From Germany they were sent by freight train to Auschwitz. According to Dan Stone, only nine survived the war.[157]
In May 1940, Germany invaded the Netherlands, Luxembourg, Belgium, and France. After Belgium's surrender, the country was ruled by a German military governor, Alexander von Falkenhausen, who enacted anti-Jewish measures against its 90,000 Jews, many of them refugees from Germany or Eastern Europe.[158] In the Netherlands, the Germans installed Arthur Seyss-Inquart as Reichskommissar, who began to persecute the country's 140,000 Jews. Jews were forced out of their jobs and had to register with the government. In February 1941, non-Jewish Dutch citizens staged a protest strike, which was quickly crushed.[159] From July 1942, over 107,000 Dutch Jews were deported; only 5,000 survived the war. Most were sent to Auschwitz; the first transport of 1,135 Jews left Holland for Auschwitz on 15 July 1942. Between 2 March and 20 July 1943, 34,313 Jews were sent in 19 transports to the Sobibór extermination camp, where all but 18 are thought to have been gassed on arrival.[160]
France had approximately 300,000 Jews, divided between the German-occupied north and the unoccupied collaborationist southern areas in Vichy France (named after the town Vichy). The occupied regions were under the control of a military governor, and there, anti-Jewish measures were not enacted as quickly as they were in the Vichy-controlled areas.[161] In July 1940, the Jews in the parts of Alsace-Lorraine that had been annexed to Germany were expelled into Vichy France.[162] Vichy France's government implemented anti-Jewish measures in French Algeria and the two French Protectorates of Tunisia and Morocco.[163] Tunisia had 85,000 Jews when the Germans and Italians arrived in November 1942; an estimated 5,000 Jews were subjected to forced labor.[164]
The fall of France gave rise to the Madagascar Plan in the summer of 1940, when French Madagascar in Southeast Africa became the focus of discussions about deporting all European Jews there; it was thought that the area's harsh living conditions would hasten deaths.[165] Several Polish, French and British leaders had discussed the idea in the 1930s, as did German leaders from 1938.[166] Adolf Eichmann's office was ordered to investigate the option, but no evidence of planning exists until after the defeat of France in June 1940.[167] Germany's inability to defeat Britain, something that was obvious to the Germans by September 1940, prevented the movement of Jews across the seas,[168] and the Foreign Ministry abandoned the plan in February 1942.[169]
Yugoslavia and Greece were invaded in April 1941 and surrendered before the end of the month. Germany and Italy divided Greece into occupation zones but did not eliminate it as a country. Yugoslavia, home to around 80,000 Jews, was dismembered; regions in the north were annexed by Germany and regions along the coast made part of Italy. The rest of the country was divided into the Independent State of Croatia, nominally an ally of Germany, and Serbia, which was governed by a combination of military and police administrators.[170] Serbia was declared free of Jews (Judenfrei) in August 1942.[171][172] Croatia's ruling party, the Ustashe, killed the majority of the country's Jews and massacred, expelled or forcibly converted to Catholicism the area's local Orthodox Christian Serb population.[173] On 18 April 1944 Croatia was declared Judenfrei.[174][175] Jews and Serbs alike were "hacked to death and burned in barns", writes historian Jeremy Black. One difference between the Germans and Croatians was that the Ustashe allowed its Jewish and Serbian victims to convert to Catholicism so they could escape death.[171] According to Jozo Tomasevich, of the 115 Jewish religious communities that existed in Yugoslavia in 1939 and 1940, only the Jewish communities of Zagreb managed to survive the war.[176]
Germany invaded the Soviet Union on 22 June 1941, a day Timothy Snyder called "one of the most significant days in the history of Europe ... the beginning of a calamity that defies description".[177] Jürgen Matthäus described it as "a quantum leap toward the Holocaust".[178] German propaganda portrayed the conflict as an ideological war between German National Socialism and Jewish Bolshevism and as a racial war between the Germans and the Jewish, Romani, and Slavic Untermenschen ("sub-humans").[179] David Cesarani writes that the war was driven primarily by the need for resources: agricultural land to feed Germany, natural resources for German industry, and control over Europe's largest oil fields. But precisely because of the Soviet Union's vast resources, "[v]ictory would have to be swift".[180] Between early fall 1941 and late spring 1942, according to Matthäus, 2 million of the 3.5 million Soviet soldiers captured by the Wehrmacht (Germany's armed forces) had been executed or had died of neglect and abuse. By 1944 the Soviet death toll was at least 20 million.[181]
As German troops advanced, the mass shooting of "anti-German elements" was assigned, as in Poland, to the Einsatzgruppen, this time under the command of Reinhard Heydrich.[182] The point of the attacks was to destroy the local Communist Party leadership and therefore the state, including "Jews in the Party and State employment", and any "radical elements".[p] Cesarani writes that the killing of Jews was at this point a "subset" of these activities.[184]
Einsatzgruppe A arrived in the Baltic states (Estonia, Latvia, and Lithuania) with Army Group North; Einsatzgruppe B in Belarus with Army Group Center; Einsatzgruppe C in the Ukraine with Army Group South; and Einsatzgruppe D went further south into Ukraine with the 11th Army.[185] Each Einsatzgruppe numbered around 600–1,000 men, with a few women in administrative roles.[186] Travelling with nine German Order Police battalions and three units of the Waffen-SS,[187] the Einsatzgruppen and their local collaborators had murdered almost 500,000 people by the winter of 1941–1942. By the end of the war, they had killed around two million, including about 1.3 million Jews and up to a quarter of a million Roma.[188] According to Wolfram Wette, the German army took part in these shootings as bystanders, photographers and active shooters; to justify their troops' involvement, army commanders would describe the victims as "hostages", "bandits" and "partisans".[189] Local populations helped by identifying and rounding up Jews, and by actively participating in the killing. In Lithuania, Latvia and western Ukraine, locals were deeply involved; Latvian and Lithuanian units participated in the murder of Jews in Belarus, and in the south, Ukrainians killed about 24,000 Jews. Some Ukrainians went to Poland to serve as guards in the camps.[190]
Typically, victims would undress and give up their valuables before lining up beside a ditch to be shot, or they would be forced to climb into the ditch, lie on a lower layer of corpses, and wait to be killed.[192] The latter was known as Sardinenpackung ("packing sardines"), a method reportedly started by SS officer Friedrich Jeckeln.[193]
At first the Einsatzgruppen targeted the male Jewish intelligentsia, defined as male Jews aged 15–60 who had worked for the state and in certain professions (the commandos would describe them as "Bolshevist functionaries" and similar), but from August 1941 they began to murder women and children too.[194] Christopher Browning reports that on 1 August, the SS Cavalry Brigade passed an order to its units: "Explicit order by RF-SS [Heinrich Himmler, Reichsführer-SS]. All Jews must be shot. Drive the female Jews into the swamps."[195] In a speech on 6 October 1943 to party leaders, Heinrich Himmler said he had ordered that women and children be shot, but Peter Longerich and Christian Gerlach write that the murder of women and children began at different times in different areas, suggesting local influence.[196]
Notable massacres include the July 1941 Ponary massacre near Vilnius (Soviet Lithuania), in which Einsatzgruppe B and Lithuanian collaborators shot 72,000 Jews and 8,000 non-Jewish Lithuanians and Poles.[197] In the Kamianets-Podilskyi massacre (Soviet Ukraine), nearly 24,000 Jews were killed between 27 and 30 August 1941.[181] The largest massacre was at a ravine called Babi Yar outside Kiev (also Soviet Ukraine), where 33,771 Jews were killed on 29–30 September 1941.[198] Einsatzgruppe C and the Order Police, assisted by Ukrainian militia, carried out the killings,[199] while the German 6th Army helped round up and transport the victims to be shot.[200] The Germans continued to use the ravine for mass killings throughout the war; the total killed there could be as high as 100,000.[201]
Historians agree that there was a "gradual radicalization" between the spring and autumn of 1941 of what Longerich calls Germany's Judenpolitik, but they disagree about whether a decision—Führerentscheidung (Führer's decision)—to murder the European Jews was made at this point.[202][q] According to Christopher Browning, writing in 2004, most historians maintain that there was no order before the invasion to kill all the Soviet Jews.[204] Longerich wrote in 2010 that the gradual increase in brutality and numbers killed between July and September 1941 suggests there was "no particular order"; instead it was a question of "a process of increasingly radical interpretations of orders".[205]
According to Dan Stone, the murder of Jews in Romania was "essentially an independent undertaking".[206] Romania implemented anti-Jewish measures in May and June 1940 as part of its efforts towards an alliance with Germany. Jews were forced from government service, pogroms were carried out, and by March 1941 all Jews had lost their jobs and had their property confiscated.[207] In June 1941 Romania joined Germany in its invasion of the Soviet Union.
Thousands of Jews were killed in January and June 1941 in the Bucharest pogrom and Iaşi pogrom.[208] According to a 2004 report by Tuvia Friling and others, up to 14,850 Jews died during the Iaşi pogrom.[209] The Romanian military killed up to 25,000 Jews during the Odessa massacre between 18 October 1941 and March 1942, assisted by gendarmes and the police.[210] Mihai Antonescu, Romania's deputy prime minister, was reported to have said it was "the most favorable moment in our history" to solve the "Jewish problem".[211] In July 1941 he said it was time for "total ethnic purification, for a revision of national life, and for purging our race of all those elements which are foreign to its soul, which have grown like mistletoes and darken our future".[212] Romania set up concentration camps under its control in Transnistria, reportedly extremely brutal, where 154,000–170,000 Jews were deported from 1941 to 1943.[213]
Bulgaria introduced anti-Jewish measures between 1940 and 1943, which included a curfew, the requirement to wear a yellow star, restrictions on owning telephones or radios, the banning of mixed marriages (except for Jews who had converted to Christianity), and the registration of property.[214] It annexed Thrace and Macedonia, and in February 1943 agreed to a demand from Germany that it deport 20,000 Jews to the Treblinka extermination camp. All 11,000 Jews from the annexed territories were sent to their deaths, and plans were made to deport an additional 6,000–8,000 Bulgarian Jews from Sofia to meet the quota.[215] When the plans became public, the Orthodox Church and many Bulgarians protested, and King Boris III canceled the deportation of Jews native to Bulgaria.[216] Instead, they were expelled to provincial areas of the country.[215]
Stone writes that Slovakia, led by Roman Catholic priest Jozef Tiso (president of the Slovak State, 1939–1945), was "one of the most loyal of the collaborationist regimes". It deported 7,500 Jews in 1938 on its own initiative; introduced anti-Jewish measures in 1940; and by the autumn of 1942 had deported around 60,000 Jews to ghettos and concentration camps in Poland. Another 2,396 were deported and 2,257 killed that autumn during an uprising, and 13,500 were deported between October 1944 and March 1945.[217] According to Stone, "the Holocaust in Slovakia was far more than a German project, even if it was carried out in the context of a 'puppet' state."[218]
Although Hungary expelled Jews who were not citizens from its newly annexed lands in 1941, it did not deport most of its Jews[219] until the German invasion of Hungary in March 1944. Between 15 May and early July 1944, 437,000 Jews were deported from Hungary, mostly to Auschwitz, where most of them were gassed; there were four transports a day, each carrying 3,000 people.[220] In Budapest in October and November 1944, the Hungarian Arrow Cross forced 50,000 Jews to march to the Austrian border as part of a deal with Germany to supply forced labor. So many died that the marches were stopped.[221]
Italy introduced some antisemitic measures, but there was less antisemitism there than in Germany, and Italian-occupied countries were generally safer for Jews than those occupied by Germany. There were no deportations of Italian Jews to Germany while Italy remained an ally. In some areas, the Italian authorities even tried to protect Jews, such as in the Croatian areas of the Balkans. But while Italian forces in Russia were not as vicious towards Jews as the Germans, they did not try to stop German atrocities either.[222] Several forced labor camps for Jews were established in Italian-controlled Libya; almost 2,600 Libyan Jews were sent to camps, where 562 died.[223] In Finland, the government was pressured in 1942 to hand over its 150–200 non-Finnish Jews to Germany. After opposition from both the government and public, eight non-Finnish Jews were deported in late 1942; only one survived the war.[224] Japan had little antisemitism in its society and did not persecute Jews in most of the territories it controlled. Jews in Shanghai were confined, but despite German pressure they were not killed.[225]
Germany first used concentration camps as places of terror and unlawful incarceration of political opponents.[227] Large numbers of Jews were not sent there until after Kristallnacht in November 1938.[228] After war broke out in 1939, new camps were established, many outside Germany in occupied Europe.[229] Most wartime prisoners of the camps were not Germans but belonged to countries under German occupation.[230]
After 1942, the economic function of the camps, previously secondary to their penal and terror functions, came to the fore. Forced labor of camp prisoners became commonplace.[228] The guards became much more brutal, and the death rate increased as the guards not only beat and starved prisoners, but killed them more frequently.[230] Vernichtung durch Arbeit ("extermination through labor") was a policy; camp inmates would literally be worked to death, or to physical exhaustion, at which point they would be gassed or shot.[231] The Germans estimated the average prisoner's lifespan in a concentration camp at three months, as a result of lack of food and clothing, constant epidemics, and frequent punishments for the most minor transgressions.[232] The shifts were long and often involved exposure to dangerous materials.[233]
Transportation to and between camps was often carried out in closed freight cars with little air or water, long delays, and prisoners packed tightly.[234] In mid-1942 work camps began requiring newly arrived prisoners to be placed in quarantine for four weeks.[235] Prisoners wore colored triangles on their uniforms, the color denoting the reason for their incarceration. Red signified a political prisoner, Jehovah's Witnesses had purple triangles, "asocials" and criminals wore black and green, and gay men wore pink.[236] Jews wore two yellow triangles, one over another to form a six-pointed star.[237] Prisoners in Auschwitz were tattooed on arrival with an identification number.[238]
On 7 December 1941, Japanese aircraft attacked Pearl Harbor, an American naval base in Honolulu, Hawaii, killing 2,403 Americans. The following day, the United States declared war on Japan, and on 11 December, Germany declared war on the United States.[239] According to Deborah Dwork and Robert Jan van Pelt, Hitler had trusted American Jews, who he assumed were all-powerful, to keep the United States out of the war in the interests of German Jews. When America declared war, he blamed the Jews.[240]
Nearly three years earlier, on 30 January 1939, Hitler had told the Reichstag: "if the international Jewish financiers in and outside Europe should succeed in plunging the nations once more into a world war, then the result will be not the Bolshevising of the earth, and thus a victory of Jewry, but the annihilation of the Jewish race in Europe!"[241] In the view of Christian Gerlach, Hitler "announced his decision in principle" to annihilate the Jews on or around 12 December 1941, one day after his declaration of war. On that day, Hitler gave a speech in his private apartment at the Reich Chancellery to senior Nazi Party leaders: the Reichsleiter, the most senior, and the Gauleiter, the regional leaders.[242] The following day, Joseph Goebbels, the Reich Minister of Propaganda, noted in his diary:
Regarding the Jewish question, the Führer is determined to clear the table. He warned the Jews that if they were to cause another world war, it would lead to their destruction. Those were not empty words. Now the world war has come. The destruction of the Jews must be its necessary consequence. We cannot be sentimental about it.[t]
Christopher Browning argues that Hitler gave no order during the Reich Chancellery meeting but did make clear that he had intended his 1939 warning to the Jews to be taken literally, and he signaled to party leaders that they could give appropriate orders to others.[244] Peter Longerich interprets Hitler's speech to the party leaders as an appeal to radicalize a policy that was already being executed.[245] According to Gerlach, an unidentified former German Sicherheitsdienst officer wrote in a report in 1944, after defecting to Switzerland: "After America entered the war, the annihilation (Ausrottung) of all European Jews was initiated on the Führer's order."[246]
Four days after Hitler's meeting with party leaders, Hans Frank, Governor-General of the General Government area of occupied Poland, who was at the meeting, spoke to district governors: "We must put an end to the Jews ... I will in principle proceed only on the assumption that they will disappear. They must go."[247][u] On 18 December Hitler and Himmler held a meeting to which Himmler referred in his appointment book as "Juden frage | als Partisanen auszurotten" ("Jewish question / to be exterminated as partisans"). Browning interprets this as a meeting to discuss how to justify and speak about the killing.[249]
SS-Obergruppenführer Reinhard Heydrich, head of the Reich Security Head Office (RSHA), convened what became known as the Wannsee Conference on 20 January 1942 at Am Großen Wannsee 56–58, a villa in Berlin's Wannsee suburb.[250][251] The meeting had been scheduled for 9 December 1941, and invitations had been sent on 29 November, but it had been postponed indefinitely. A month later, invitations were sent out again, this time for 20 January.[252]
The 15 men present at Wannsee included Adolf Eichmann (head of Jewish affairs for the RSHA), Heinrich Müller (head of the Gestapo), and other SS and party leaders and department heads.[w] Browning writes that eight of the 15 had doctorates: "Thus it was not a dimwitted crowd unable to grasp what was going to be said to them."[254] Thirty copies of the minutes, known as the Wannsee Protocol, were made. Copy no. 16 was found by American prosecutors in March 1947 in a German Foreign Office folder.[255] Written by Eichmann and stamped "Top Secret", the minutes were written in "euphemistic language" on Heydrich's instructions, according to Eichmann's later testimony.[245]
Discussing plans for a "final solution to the Jewish question" ("Endlösung der Judenfrage"), and a "final solution to the Jewish question in Europe" ("Endlösung der europäischen Judenfrage"),[250] the conference was held to share information and responsibility, coordinate efforts and policies ("Parallelisierung der Linienführung"), and ensure that authority rested with Heydrich. There was also discussion about whether to include the German Mischlinge (half-Jews).[256] Heydrich told the meeting: "Another possible solution of the problem has now taken the place of emigration, i.e. the evacuation of the Jews to the East, provided that the Fuehrer gives the appropriate approval in advance."[250] He continued:
Under proper guidance, in the course of the Final Solution, the Jews are to be allocated for appropriate labor in the East. Able-bodied Jews, separated according to sex, will be taken in large work columns to these areas for work on roads, in the course of which action doubtless a large portion will be eliminated by natural causes.
The possible final remnant will, since it will undoubtedly consist of the most resistant portion, have to be treated accordingly because it is the product of natural selection and would, if released, act as the seed of a new Jewish revival. (See the experience of history.)
In the course of the practical execution of the Final Solution, Europe will be combed through from west to east. Germany proper, including the Protectorate of Bohemia and Moravia, will have to be handled first due to the housing problem and additional social and political necessities.
The evacuated Jews will first be sent, group by group, to so-called transit ghettos, from which they will be transported to the East.[250]
These evacuations were regarded as provisional or "temporary solutions" ("Ausweichmöglichkeiten").[257][x] The final solution would encompass the 11 million Jews living not only in territories controlled by Germany, but elsewhere in Europe and adjacent territories, such as Britain, Ireland, Switzerland, Turkey, Sweden, Portugal, Spain, and Hungary, "dependent on military developments".[257] There was little doubt what the final solution was, writes Longerich: "the Jews were to be annihilated by a combination of forced labour and mass murder."[259]
At the end of 1941 in occupied Poland, the Germans began building additional camps or expanding existing ones. Auschwitz, for example, was expanded in October 1941 by building Auschwitz II-Birkenau a few kilometers away.[4] By the spring or summer of 1942, gas chambers had been installed in these new facilities, except for Chełmno, which used gas vans.
Other camps sometimes described as extermination camps include Maly Trostinets near Minsk in the occupied Soviet Union, where 65,000 are thought to have died, mostly by shooting but also in gas vans;[278] Mauthausen in Austria;[279] Stutthof, near Gdańsk, Poland;[280] and Sachsenhausen and Ravensbrück in Germany. The camps in Austria, Germany and Poland all had gas chambers to kill inmates deemed unable to work.[281]
Chełmno, with gas vans only, had its roots in the Aktion T4 euthanasia program.[282] In December 1939 and January 1940, gas vans equipped with gas cylinders and a sealed compartment had been used to kill the handicapped in occupied Poland.[283] As the mass shootings continued in Russia, Himmler and his subordinates in the field feared that the murders were causing psychological problems for the SS,[284] and began searching for more efficient methods. In December 1941, similar vans, using exhaust fumes rather than bottled gas, were introduced into the camp at Chełmno.[267] Victims were asphyxiated while being driven to prepared burial pits in the nearby forests.[285] The vans were also used in the occupied Soviet Union, for example in smaller clearing actions in the Minsk ghetto,[286] and in Yugoslavia.[287] Apparently, as with the mass shootings, the vans caused emotional problems for the operators, and the small number of victims the vans could handle made them ineffective.[288]
Christian Gerlach writes that over three million Jews were murdered in 1942, the year that "marked the peak" of the mass murder.[290] At least 1.4 million of these were in the General Government area of Poland.[291] Victims usually arrived at the extermination camps by freight train.[292] Almost all arrivals at Bełżec, Sobibór and Treblinka were sent directly to the gas chambers,[293] with individuals occasionally selected to replace dead workers.[294] At Auschwitz, about 20 percent of Jews were selected to work.[295] Those selected for death at all camps were told to undress and hand their valuables to camp workers.[40] They were then herded naked into the gas chambers. To prevent panic, they were told the gas chambers were showers or delousing chambers.[296]
At Auschwitz, after the chambers were filled, the doors were shut and pellets of Zyklon-B were dropped into the chambers through vents,[297] releasing toxic prussic acid.[298] Those inside died within 20 minutes; the speed of death depended on how close the inmate was standing to a gas vent, according to the commandant Rudolf Höss, who estimated that about one-third of the victims died immediately.[299] Johann Kremer, an SS doctor who oversaw the gassings, testified that: "Shouting and screaming of the victims could be heard through the opening and it was clear that they fought for their lives."[300] The gas was then pumped out, and the Sonderkommando—work groups of mostly Jewish prisoners—carried out the bodies, extracted gold fillings, cut off women's hair, and removed jewellery, artificial limbs and glasses.[301] At Auschwitz, the bodies were at first buried in deep pits and covered with lime, but between September and November 1942, on the orders of Himmler, 100,000 bodies were dug up and burned. In early 1943, new gas chambers and crematoria were built to accommodate the numbers.[302]
Bełżec, Sobibór and Treblinka became known as the Operation Reinhard camps, named after the German plan to murder the Jews in the General Government area of occupied Poland.[303] Between March 1942 and November 1943, around 1,526,500 Jews were gassed in these three camps in gas chambers using carbon monoxide from the exhaust fumes of stationary diesel engines.[4] Gold fillings were pulled from the corpses before burial, but unlike in Auschwitz the women's hair was cut before death. At Treblinka, to calm the victims, the arrival platform was made to look like a train station, complete with a fake clock.[304] Most of the victims at these three camps were buried in pits at first. From mid-1942, as part of Sonderaktion 1005, prisoners at Auschwitz, Chełmno, Bełżec, Sobibór, and Treblinka were forced to exhume and burn bodies that had been buried, in part to hide the evidence, and in part because of the terrible smell pervading the camps and a fear that the drinking water would become polluted.[305] The corpses—700,000 in Treblinka—were burned on wood in open fire pits and the remaining bones crushed into powder.[306]
There was almost no resistance in the ghettos in Poland until the end of 1942.[308] Raul Hilberg accounted for this by evoking the history of Jewish persecution: compliance might avoid inflaming the situation until the onslaught abated.[309] Timothy Snyder noted that it was only during the three months after the deportations of July–September 1942 that agreement on the need for armed resistance was reached.[310]
Several resistance groups were formed, such as the Jewish Combat Organization (ŻOB) and Jewish Military Union (ŻZW) in the Warsaw Ghetto and the United Partisan Organization in Vilna.[311] Over 100 revolts and uprisings occurred in at least 19 ghettos and elsewhere in Eastern Europe. The best known is the Warsaw Ghetto Uprising in April 1943, when the Germans arrived to send the remaining inhabitants to extermination camps. Forced to retreat on 19 April from the ŻOB and ŻZW fighters, they returned later that day under the command of SS General Jürgen Stroop (author of the Stroop Report about the uprising).[312] Around 1,000 poorly armed fighters held the SS at bay for four weeks.[313] Polish and Jewish accounts stated that hundreds or thousands of Germans had been killed,[314] while the Germans reported 16 dead.[315] The Germans said that 14,000 Jews had been killed—7,000 during the fighting and 7,000 sent to Treblinka[316]—and between 53,000[317] and 56,000 deported.[315] According to Gwardia Ludowa, a Polish resistance newspaper, in May 1943:
From behind the screen of smoke and fire, in which the ranks of fighting Jewish partisans are dying, the legend of the exceptional fighting qualities of the Germans is being undermined. ... The fighting Jews have won for us what is most important: the truth about the weakness of the Germans.[318]
During a revolt in Treblinka on 2 August 1943, inmates killed five or six guards and set fire to camp buildings; several managed to escape.[319] In the Białystok Ghetto on 16 August, Jewish insurgents fought for five days when the Germans announced mass deportations.[320] On 14 October, Jewish prisoners in Sobibór attempted an escape, killing 11 SS officers, as well as two or three Ukrainian and Volksdeutsche guards. According to Yitzhak Arad, this was the highest number of SS officers killed in a single revolt.[321] Around 300 inmates escaped (out of 600 in the main camp), but 100 were recaptured and shot.[322] On 7 October 1944, 300 members of the Sonderkommando at Auschwitz, mostly Greek and Hungarian Jews, learned they were about to be killed and staged an uprising, blowing up crematorium IV.[323] Three SS officers were killed.[324] The Sonderkommando at crematorium II threw their Oberkapo into an oven when they heard the commotion, believing that a camp uprising had begun.[325] By the time the SS had regained control, 451 members of the Sonderkommando were dead; 212 survived.[326]
Estimates of Jewish participation in partisan units throughout Europe range from 20,000 to 100,000.[327] In the occupied Polish and Soviet territories, thousands of Jews fled into the swamps or forests and joined the partisans,[328] although the partisan movements did not always welcome them.[329] An estimated 20,000 to 30,000 joined the Soviet partisan movement.[330] One of the best-known Jewish groups was the Bielski partisans in Belarus, led by the Bielski brothers.[328] Jews also joined Polish forces, including the Home Army. According to Timothy Snyder, "more Jews fought in the Warsaw Uprising of August 1944 than in the Warsaw Ghetto Uprising of April 1943."[331][ab]
The Polish government-in-exile in London learned about Auschwitz from the Polish leadership in Warsaw, who from late 1940 "received a continual flow of information" about the camp, according to historian Michael Fleming.[337] This was in large measure thanks to Captain Witold Pilecki of the Polish Home Army, who allowed himself to be arrested in September 1940 and sent there. He remained an inmate until his escape in April 1943; his mission was to set up a resistance movement (ZOW), prepare to take over the camp, and smuggle out information about it.[338]
On 6 January 1942, the Soviet Minister of Foreign Affairs, Vyacheslav Molotov, sent out diplomatic notes about German atrocities. The notes were based on reports about mass graves and bodies surfacing from pits and quarries in areas the Red Army had liberated, as well as witness reports from German-occupied areas.[339] The following month, Szlama Ber Winer escaped from the Chełmno concentration camp in Poland and passed information about it to the Oneg Shabbat group in the Warsaw Ghetto. His report, known by his pseudonym as the Grojanowski Report, had reached London by June 1942.[268][340] Also in 1942, Jan Karski sent information to the Allies after being smuggled into the Warsaw Ghetto twice.[341] By late July or early August 1942, Polish leaders in Warsaw had learned about the mass killing of Jews in Auschwitz, according to Fleming.[337][ac] The Polish Interior Ministry prepared a report, Sprawozdanie 6/42,[343] which said at the end:
There are different methods of execution. People are shot by firing squads, killed by an "air hammer" /Hammerluft/, and poisoned by gas in special gas chambers. Prisoners condemned to death by the Gestapo are murdered by the first two methods. The third method, the gas chamber, is employed for those who are ill or incapable of work and those who have been brought in transports especially for the purpose /Soviet prisoners of war, and, recently Jews/.[337]
Sprawozdanie 6/42 was sent by courier to Polish officials in London, reaching them by 12 November 1942, whereupon it was translated into English and added to another report, "Report on Conditions in Poland", dated 27 November. Fleming writes that the latter was sent to the Polish Embassy in the United States.[344] On 10 December 1942, the Polish Foreign Affairs Minister, Edward Raczyński, addressed the fledgling United Nations on the killings; the address was distributed with the title The Mass Extermination of Jews in German Occupied Poland. He told them about the use of poison gas; about Treblinka, Bełżec and Sobibór; that the Polish underground had referred to them as extermination camps; and that tens of thousands of Jews had been killed in Bełżec in March and April 1942.[345] One in three Jews in Poland was already dead, he estimated, from a population of 3,130,000.[346] Raczyński's address was covered by the New York Times and The Times of London. Winston Churchill received it, and Anthony Eden presented it to the British cabinet. On 17 December 1942, 11 Allies issued the Joint Declaration by Members of the United Nations condemning the "bestial policy of cold-blooded extermination".[347][348]
The British and American governments were reluctant to publicize the intelligence they had received. A BBC Hungarian Service memo, written by Carlile Macartney, a BBC broadcaster and senior Foreign Office adviser on Hungary, stated in 1942: "We shouldn't mention the Jews at all." The British government's view was that the Hungarian people's antisemitism would make them distrust the Allies if Allied broadcasts focused on the Jews.[349] The US government similarly feared turning the war into one about the Jews; antisemitism and isolationism were common in the US before its entry into the war.[350] Although governments and the German public appear to have understood what was happening, it seems the Jews themselves did not. According to Saul Friedländer, "[t]estimonies left by Jews from all over occupied Europe indicate that, in contradistinction to vast segments of surrounding society, the victims did not understand what was ultimately in store for them." In Western Europe, he writes, Jewish communities seem to have failed to piece the information together, while in Eastern Europe, they could not accept that the stories they had heard from elsewhere would end up applying to them too.[351]
The SS liquidated most of the Jewish ghettos of the General Government area of Poland in 1942–1943 and shipped their populations to the camps for extermination.[353] The only exception was the Łódź Ghetto, which was not liquidated until mid-1944.[354] About 42,000 Jews in the General Government were shot during Operation Harvest Festival (Aktion Erntefest) on 3–4 November 1943.[355] At the same time, rail shipments were arriving regularly from western and southern Europe at the extermination camps.[356] Shipments of Jews to the camps had priority on the German railways over anything but the army's needs, and continued even in the face of the increasingly dire military situation at the end of 1942.[357] Army leaders and economic managers complained about this diversion of resources and the killing of skilled Jewish workers,[358] but Nazi leaders rated ideological imperatives above economic considerations.[359]
By 1943 it was evident to the armed forces leadership that Germany was losing the war.[360] The mass murder continued nevertheless, reaching a "frenetic" pace in 1944[361] when Auschwitz gassed nearly 500,000 people.[362] On 19 March 1944, Hitler ordered the military occupation of Hungary and dispatched Adolf Eichmann to Budapest to supervise the deportation of the country's Jews.[363] From 22 March Jews were required to wear the yellow star; were forbidden from owning cars, bicycles, radios or telephones; and were later forced into ghettos.[364] Between 15 May and 9 July, 437,000 Jews were deported from Hungary to Auschwitz II-Birkenau, almost all sent directly to the gas chambers.[ad] A month before the deportations began, Eichmann offered through an intermediary, Joel Brand, to exchange one million Jews for 10,000 trucks from the Allies, which the Germans would undertake not to use on the Western front.[367] The British leaked the proposal to the press; The Times called it "a new level of fantasy and self-deception".[368] By mid-1944 Jewish communities within easy reach of the Nazi regime had largely been exterminated.[369]
As the Soviet armed forces advanced, the SS closed down the camps in eastern Poland and made efforts to conceal what had happened. The gas chambers were dismantled, the crematoria dynamited, and the mass graves dug up and corpses cremated.[370] From January to April 1945, the SS sent inmates westward on "death marches" to camps in Germany and Austria.[371][372] In January 1945, the Germans held records of 714,000 inmates in concentration camps; by May, 250,000 (35 percent) had died during death marches.[373] Already sick after months or years of violence and starvation, they were marched to train stations and transported for days at a time without food or shelter in open freight cars, then forced to march again at the other end to the new camp. Some went by truck or wagons; others were marched the entire distance to the new camp. Those who lagged behind or fell were shot.[374]
The first major camp to be encountered by Allied troops, Majdanek, was discovered by the advancing Soviets, along with its gas chambers, on 25 July 1944.[375] Treblinka, Sobibór, and Bełżec were never liberated, but were destroyed by the Germans in 1943.[376] On 17 January 1945, 58,000 Auschwitz inmates were sent on a death march westwards;[377] when the camp was liberated by the Soviets on 27 January, they found just 7,000 inmates in the three main camps and 500 in subcamps.[378] Buchenwald was liberated by the Americans on 11 April;[379] Bergen-Belsen by the British on 15 April;[380] Dachau by the Americans on 29 April;[381] Ravensbrück by the Soviets on 30 April;[382] and Mauthausen by the Americans on 5 May.[383] The Red Cross took control of Theresienstadt on 3 May, days before the Soviets arrived.[384]
The British 11th Armoured Division found around 60,000 prisoners (90 percent Jews) when they liberated Bergen-Belsen,[380][385] as well as 13,000 unburied corpses; another 10,000 people died from typhus or malnutrition over the following weeks.[386] The BBC's war correspondent Richard Dimbleby described the scenes that greeted him and the British Army at Belsen, in a report so graphic the BBC declined to broadcast it for four days, and did so, on 19 April, only after Dimbleby threatened to resign.[387] He said he had "never seen British soldiers so moved to cold fury":[388]
Here over an acre of ground lay dead and dying people. You could not see which was which. ... The living lay with their heads against the corpses and around them moved the awful, ghostly procession of emaciated, aimless people, with nothing to do and with no hope of life, unable to move out of your way, unable to look at the terrible sights around them ... Babies had been born here, tiny wizened things that could not live. A mother, driven mad, screamed at a British sentry to give her milk for her child, and thrust the tiny mite into his arms. ... He opened the bundle and found the baby had been dead for days. This day at Belsen was the most horrible of my life.
The Jews killed represented around one third of world Jewry[394] and about two-thirds of European Jewry, based on a pre-war estimate of 9.7 million Jews in Europe.[395] According to the Yad Vashem Holocaust Martyrs' and Heroes' Remembrance Authority in Jerusalem, "[a]ll the serious research" confirms that between five and six million Jews died.[396] Early postwar calculations were 4.2–4.5 million from Gerald Reitlinger;[397] 5.1 million from Raul Hilberg; and 5.95 million from Jacob Lestschinsky.[398] In 1990 Yehuda Bauer and Robert Rozett estimated 5.59–5.86 million,[399] and in 1991 Wolfgang Benz suggested 5.29 to just over 6 million.[400][af] The figures include over one million children.[401] Much of the uncertainty stems from the lack of a reliable figure for the number of Jews in Europe in 1939, border changes that make double-counting of victims difficult to avoid, lack of accurate records from the perpetrators, and uncertainty about whether to include post-liberation deaths caused by the persecution.[397]
The death camps in occupied Poland accounted for half the Jews killed. At Auschwitz the Jewish death toll was 960,000;[402] Treblinka 870,000;[276] Bełżec 600,000;[266] Chełmno 320,000;[268] Sobibór 250,000;[274] and Majdanek 79,000.[273]
Death rates were heavily dependent on the survival of European states willing to protect their Jewish citizens.[403] In countries allied to Germany, the control over Jewish citizens was sometimes seen as a matter of sovereignty; the continuous presence of state institutions prevented the Jewish communities' complete destruction.[403] In occupied countries, the survival of the state was likewise correlated with lower Jewish death rates: 75 percent of Jews died in the Netherlands, as did 99 percent of Jews who were in Estonia when the Germans arrived—the Nazis declared Estonia Judenfrei ("free of Jews") in January 1942 at the Wannsee Conference—while 75 percent survived in France and 99 percent in Denmark.[404]
The survival of Jews in countries where states survived demonstrates, writes Christian Gerlach, "that there were limits to German power" and that the influence of non-Germans—governments and others—was "crucial".[47] Jews who lived where pre-war statehood was destroyed (Poland and the Baltic states) or displaced (western USSR) were at the mercy of both German power and sometimes hostile local populations. Almost all Jews living in German-occupied Poland, Baltic states and the USSR were killed, with a 5 percent chance of survival on average.[403] Of Poland's 3.3 million Jews, about 90 percent were killed.[405]
The Nazis regarded the Slavs as Untermenschen (subhuman).[24] German troops destroyed villages throughout the Soviet Union,[409] rounded up civilians for forced labor in Germany, and caused famine by taking foodstuffs.[410] In Belarus, Germany imposed a regime that deported 380,000 people for slave labor and killed hundreds of thousands. Over 600 villages had their populations killed and at least 5,295 Belarusian settlements were destroyed. According to Timothy Snyder, of nine million people in Soviet Belarus in 1941, around 1.6 million were killed by Germans away from the battlefield.[411] The United States Holocaust Memorial Museum estimates that 3.3 million of 5.7 million Soviet POWs died in German custody.[412]
Hitler made clear that Polish workers were to be kept in what Robert Gellately called a "permanent condition of inferiority".[413] In a memorandum to Hitler dated 25 May 1940, "A Few Thoughts on the Treatment of the Ethnically Alien Population in the East", Himmler stated that it was in German interests to foster divisions between the ethnic groups in the East. He wanted to restrict non-Germans in the conquered territories to an elementary-school education that would teach them how to write their names, count up to 500, work hard, and obey Germans.[414] The Polish political class became the target of a campaign of murder (Intelligenzaktion and AB-Aktion).[415] Between 1.8 and 1.9 million non-Jewish Polish citizens were killed by Germans during the war; about four-fifths were ethnic Poles and the rest Ukrainians and Belarusians.[406] At least 200,000 died in concentration camps, around 146,000 in Auschwitz. Others died in massacres or in uprisings such as the Warsaw Uprising, where 120,000–200,000 were killed.[416]
Germany and its allies killed up to 220,000 Roma, around 25 percent of the community in Europe,[417] in what the Romani people call the Pořajmos.[418] Robert Ritter, head of the Rassenhygienische und Bevölkerungsbiologische Forschungsstelle, called them "a peculiar form of the human species who are incapable of development and came about by mutation".[419] In May 1942 they were placed under similar laws to the Jews, and in December Himmler ordered that they be sent to Auschwitz, unless they had served in the Wehrmacht.[420] He adjusted the order on 15 November 1943 to allow "sedentary Gypsies and part-Gypsies" in the occupied Soviet areas to be viewed as citizens.[421] In Belgium, France and the Netherlands, the Roma were subject to restrictions on movement and confinement to collection camps,[422] while in Eastern Europe they were sent to concentration camps, where large numbers were murdered.[423] In the camps, they were usually counted among the asocials and required to wear brown or black triangles on their prison clothes.[424]
German communists, socialists and trade unionists were among the earliest opponents of the Nazis[425] and among the first to be sent to concentration camps.[426] Nacht und Nebel ("Night and Fog"), a directive issued by Hitler on 7 December 1941, resulted in the disappearance, torture and death of political activists throughout German-occupied Europe; the courts had sentenced 1,793 people to death by April 1944, according to Jack Fischel.[427] Because they refused to pledge allegiance to the Nazi party or serve in the military, Jehovah's Witnesses were sent to concentration camps, where they were identified by purple triangles and given the option of renouncing their faith and submitting to the state's authority.[428] The United States Holocaust Memorial Museum estimates that between 2,700 and 3,300 were sent to the camps, where 1,400 died.[407] According to German historian Detlef Garbe, "no other religious movement resisted the pressure to conform to National Socialism with comparable unanimity and steadfastness."[429]
Around 100,000 gay men were arrested in Germany and 50,000 jailed between 1933 and 1945; 5,000–15,000 are thought to have been sent to concentration camps, where they were identified by a pink triangle on their camp clothes. It is not known how many died.[408] Hundreds were castrated, sometimes "voluntarily" to avoid criminal sentences.[430] In 1936 Himmler created the Reich Central Office for the Combating of Homosexuality and Abortion.[431] The police closed gay bars and shut down gay publications.[408] Lesbians were left relatively unaffected; the Nazis saw them as "asocials", rather than sexual deviants.[432]
There were 5,000–25,000 Afro-Germans in Germany when the Nazis came to power.[433] Although blacks in Germany and German-occupied Europe were subjected to incarceration, sterilization and murder, there was no program to kill them as a group.[434]
The Nuremberg trials were a series of military tribunals held after the war by the Allies in Nuremberg, Germany, to prosecute the German leadership. The first was the 1945–1946 trial of 22 political and military leaders before the International Military Tribunal.[435] Adolf Hitler, Heinrich Himmler, and Joseph Goebbels had committed suicide months earlier.[436] The prosecution entered indictments against 24 men (two were dropped before the end of the trial)[ag] and seven organizations: the Reich Cabinet, Schutzstaffel (SS), Sicherheitsdienst (SD), Gestapo, Sturmabteilung (SA), and the "General Staff and High Command".[437]
The indictments were for participation in a common plan or conspiracy for the accomplishment of a crime against peace; planning, initiating and waging wars of aggression and other crimes against peace; war crimes; and crimes against humanity. The tribunal passed judgements ranging from acquittal to death by hanging.[437] Eleven defendants were executed, including Joachim von Ribbentrop, Wilhelm Keitel, Alfred Rosenberg, and Alfred Jodl. Ribbentrop, the judgement declared, "played an important part in Hitler's 'final solution of the Jewish question'".[438]
The subsequent Nuremberg trials, 1946–1949, tried another 185 defendants.[439] West Germany initially tried few ex-Nazis, but after the 1958 Ulm Einsatzkommando trial, the government set up a dedicated agency.[440] Other trials of Nazis and collaborators took place in Western and Eastern Europe. In 1960 Mossad agents captured Adolf Eichmann in Argentina and brought him to Israel to stand trial on 15 indictments, including war crimes, crimes against humanity, and crimes against the Jewish people. He was convicted in December 1961 and executed in June 1962. Eichmann's trial and death revived interest in war criminals and the Holocaust in general.[441]
The government of Israel requested $1.5 billion from the Federal Republic of Germany in March 1951 to finance the rehabilitation of 500,000 Jewish survivors, arguing that Germany had stolen $6 billion from the European Jews. Israelis were divided about the idea of taking money from Germany. The Conference on Jewish Material Claims Against Germany (known as the Claims Conference) was opened in New York, and after negotiations the claim was reduced to $845 million.[442][443]
West Germany allocated another $125 million for reparations in 1988. Companies such as BMW, Deutsche Bank, Ford, Opel, Siemens, and Volkswagen faced lawsuits for their use of forced labor during the war.[442] In response, Germany set up the "Remembrance, Responsibility and Future" Foundation in 2000, which paid €4.45 billion to former slave laborers (up to €7,670 each).[444] In 2013 Germany agreed to provide €772 million to fund nursing care, social services, and medication for 56,000 Holocaust survivors around the world.[445] The French state-owned railway company, the SNCF, agreed in 2014 to pay $60 million to Jewish-American survivors, around $100,000 each, for its role in the transport of 76,000 Jews from France to extermination camps between 1942 and 1944.[446]
In the early decades of Holocaust studies, scholars approached the Holocaust as a genocide unique in its reach and specificity; Nora Levin called the "world of Auschwitz" a "new planet."[447] This was questioned in the 1980s during the West German Historikerstreit ("historians' dispute"), an attempt to re-position the Holocaust within German historiography.[448][ah]
Ernst Nolte triggered the Historikerstreit in June 1986 with an article in the conservative newspaper Frankfurter Allgemeine Zeitung: "The past that will not pass: A speech that could be written but no longer delivered".[450][451][ai] Rather than being studied as an historical event, the Nazi era was suspended like a sword over Germany's present, he wrote. He compared "the guilt of the Germans" to the Nazi idea of "the guilt of the Jews", and argued that the focus on the Final Solution overlooked the Nazis' euthanasia program and treatment of Soviet POWs, as well as post-war issues such as the Vietnam War and Soviet–Afghan War. Comparing Auschwitz to the Gulag, he suggested that the Holocaust was a response to Hitler's fear of the Soviet Union: "Did the Gulag Archipelago not precede Auschwitz? Was the Bolshevik murder of an entire class not the logical and factual prius of the 'racial murder' of National Socialism? ... Was Auschwitz perhaps rooted in a past that would not pass?"[aj]
Nolte's arguments were viewed as an attempt to normalize the Holocaust; one of the debate's key questions, according to historian Ernst Piper, was whether history should "historicize" or "moralize".[455][ak] In September 1986 in the left-leaning Die Zeit, Eberhard Jäckel responded that "never before had a state, with the authority of its leader, decided and announced that a specific group of humans, including the elderly, women, children and infants, would be killed as quickly as possible, then carried out this resolution using every possible means of state power."[i]
Despite the criticism of Nolte, the Historikerstreit put "the question of comparison" on the agenda, according to Dan Stone in 2010.[448] He argued that the idea of the Holocaust as unique was overtaken by attempts to place it within the context of Stalinism, ethnic cleansing, and the Nazis' intentions for post-war "demographic reordering", particularly the Generalplan Ost, the plan to kill tens of millions of Slavs to create living space for Germans.[457] Jäckel's position continued nevertheless to inform the views of many specialists. Richard J. Evans argued in 2015:
Thus although the Nazi "Final Solution" was one genocide among many, it had features that made it stand out from all the rest as well. Unlike all the others it was bounded neither by space nor by time. It was launched not against a local or regional obstacle, but at a world-enemy seen as operating on a global scale. It was bound to an even larger plan of racial reordering and reconstruction involving further genocidal killing on an almost unimaginable scale, aimed, however, at clearing the way in a particular region – Eastern Europe – for a further struggle against the Jews and those the Nazis regarded as their puppets. It was set in motion by ideologues who saw world history in racial terms. It was, in part, carried out by industrial methods. These things all make it unique.
In September 2018, an online CNN–ComRes poll of 7,092 adults in seven European countries—Austria, France, Germany, Great Britain, Hungary, Poland, and Sweden—found that one in 20 had never heard of the Holocaust. The figure included one in five people in France aged 18–34. Four in 10 Austrians said they knew "just a little" about it; 12 percent of young people there said they had never heard of it.[459] A 2018 survey in the United States found that 22 percent of 1,350 adults said they had never heard of it, while 41 percent of Americans and 66 percent of millennials did not know what Auschwitz was.[460][461] In 2019, a survey of 1,100 Canadians found that 49 percent could not name any of the concentration camps.[462]
Yad Vashem (undated): "The Holocaust was the murder of approximately six million Jews by the Nazis and their collaborators. Between the German invasion of the Soviet Union in the summer of 1941 and the end of the war in Europe in May 1945, Nazi Germany and its accomplices strove to murder every Jew under their domination."[29]
The Century Dictionary (1904): "a sacrifice or offering entirely consumed by fire, in use among the Jews and some pagan nations. Figuratively, a great slaughter or sacrifice of life, as by fire or other accident, or in battle."[11]
Translation, Avalon Project: "These actions are, however, only to be considered provisional, but practical experience is already being collected which is of the greatest importance in relation to the future final solution of the Jewish question."[250]
The speech that could not be delivered referred to a lecture Nolte had planned to give to the Römerberg-Gesprächen (Römerberg Colloquium) in Frankfurt; he said his invitation had been withdrawn, which the organizers disputed.[453] At that point, his lecture had the title "The Past That Will Not Pass: To Debate or to Draw the Line?".[454]
"The Holocaust: Definition and Preliminary Discussion". Holocaust Resource Center, Yad Vashem. Archived from the original on 26 June 2015.
German: "Wannsee-Protokoll". EuroDocs. Harold B. Lee Library, Brigham Young University. Archived from the original on 22 June 2006.
"Audio slideshow: Liberation of Belsen". BBC News. 15 April 2005. Archived from the original on 13 February 2009.
For France, Gerlach 2016, p. 14; for Denmark and Estonia, Snyder 2015, p. 212; for Estonia (Estland) and the Wannsee Conference, "Besprechungsprotokoll" (PDF). Haus der Wannsee-Konferenz. p. 171. Archived (PDF) from the original on 2 February 2019.
Davies, Lizzie (17 February 2009). "France responsible for sending Jews to concentration camps, says court". The Guardian. Archived from the original on 10 October 2017.
Knowlton & Cates 1993, pp. 18–23; partly reproduced in "The Past That Will Not Pass" (translation), German History in Documents and Images.
en/2594.html.txt
ADDED
@@ -0,0 +1,57 @@
Homer (/ˈhoʊmər/; Ancient Greek: Ὅμηρος Greek pronunciation: [hómɛːros], Hómēros) is the presumed author of the Iliad and the Odyssey, two epic poems that are the central works of ancient Greek literature. The Iliad is set during the Trojan War, the ten-year siege of the city of Troy by a coalition of Greek kingdoms. It focuses on a quarrel between King Agamemnon and the warrior Achilles lasting a few weeks during the last year of the war. The Odyssey focuses on the ten-year journey home of Odysseus, king of Ithaca, after the fall of Troy. Many accounts of Homer's life circulated in classical antiquity, the most widespread being that he was a blind bard from Ionia, a region of central coastal Anatolia in present-day Turkey. Modern scholars consider these accounts legendary.[2][3][4]
The Homeric Question – concerning by whom, when, where and under what circumstances the Iliad and Odyssey were composed – continues to be debated. Broadly speaking, modern scholarly opinion falls into two groups. One holds that most of the Iliad and (according to some) the Odyssey are the works of a single poet of genius. The other considers the Homeric poems to be the result of a process of working and reworking by many contributors, and that "Homer" is best seen as a label for an entire tradition.[4] It is generally accepted that the poems were composed at some point around the late eighth or early seventh century BC.[5]
The poems are in Homeric Greek, also known as Epic Greek, a literary language which shows a mixture of features of the Ionic and Aeolic dialects from different centuries; the predominant influence is Eastern Ionic.[6][7] Most researchers believe that the poems were originally transmitted orally.[8] From antiquity until the present day, the influence of Homeric epic on Western civilization has been great, inspiring many of its most famous works of literature, music, art and film.[9] The Homeric epics were the greatest influence on ancient Greek culture and education; to Plato, Homer was simply the one who "has taught Greece" – ten Hellada pepaideuken.[10][11]
Today only the Iliad and Odyssey are associated with the name 'Homer'. In antiquity, a very large number of other works were sometimes attributed to him, including the Homeric Hymns, the Contest of Homer and Hesiod, the Little Iliad, the Nostoi, the Thebaid, the Cypria, the Epigoni, the comic mini-epic Batrachomyomachia ("The Frog-Mouse War"), the Margites, the Capture of Oechalia, and the Phocais. These claims are not considered authentic today and were by no means universally accepted in the ancient world. As with the multitude of legends surrounding Homer's life, they indicate little more than the centrality of Homer to ancient Greek culture.[12][13][14]
Many traditions circulated in the ancient world concerning Homer, most of which are lost. Modern scholarly consensus is that they have no value as history. Some claims were established early and repeated often. They include that Homer was blind (taking as self-referential a passage describing the blind bard Demodocus[15][16]), that he was born in Chios, that he was the son of the river Meles and the nymph Critheïs, that he was a wandering bard, that he composed a varying list of other works (the "Homerica"), that he died either in Ios or after failing to solve a riddle set by fishermen, and various explanations for the name "Homer". The two best known ancient biographies of Homer are the Life of Homer by the Pseudo-Herodotus and the Contest of Homer and Hesiod.[17][18]
The study of Homer is one of the oldest topics in scholarship, dating back to antiquity.[19][20][21] Nonetheless, the aims of Homeric studies have changed over the course of the millennia.[19] The earliest preserved comments on Homer concern his treatment of the gods, which hostile critics such as the poet Xenophanes of Colophon denounced as immoral.[21] The allegorist Theagenes of Rhegium is said to have defended Homer by arguing that the Homeric poems are allegories.[21] The Iliad and the Odyssey were widely used as school texts in ancient Greek and Hellenistic cultures.[19][21][22] They were the first literary works taught to all students.[22] The Iliad, particularly its first few books, was far more intently studied than the Odyssey during the Hellenistic and Roman periods.[22]
As a result of the poems' prominence in classical Greek education, extensive commentaries on them developed to explain parts of the poems that were culturally or linguistically difficult.[19][21] During the Hellenistic and Roman periods, many interpreters, especially the Stoics, who believed that Homeric poems conveyed Stoic doctrines, regarded them as allegories, containing hidden wisdom.[21] Perhaps partially because of the Homeric poems' extensive use in education, many authors believed that Homer's original purpose had been to educate.[21] Homer's wisdom became so widely praised that he began to acquire the image of almost a prototypical philosopher.[21] Byzantine scholars such as Eustathius of Thessalonica and John Tzetzes produced commentaries, extensions and scholia to Homer, especially in the twelfth century.[23][21] Eustathius's commentary on the Iliad alone is massive, sprawling over nearly 4,000 oversized pages in a twenty-first-century printed version, and his commentary on the Odyssey an additional nearly 2,000.[21]
In 1488, the Greek scholar Demetrios Chalkokondyles published the editio princeps of the Homeric poems.[21] The earliest modern Homeric scholars started with the same basic approaches towards the Homeric poems as scholars in antiquity.[21][20][19] The allegorical interpretation of the Homeric poems that had been so prevalent in antiquity returned to become the prevailing view of the Renaissance.[21] Renaissance humanists praised Homer as the archetypically wise poet, whose writings contain hidden wisdom, disguised through allegory.[21] In western Europe during the Renaissance, Virgil was more widely read than Homer and Homer was often seen through a Virgilian lens.[24]
In 1664, contradicting the widespread praise of Homer as the epitome of wisdom, François Hédelin, abbé d'Aubignac, wrote a scathing attack on the Homeric poems, declaring that they were incoherent, immoral, tasteless, and without style, that Homer never existed, and that the poems were hastily cobbled together by incompetent editors from unrelated oral songs.[20] Fifty years later, the English scholar Richard Bentley concluded that Homer did exist, but that he was an obscure, prehistoric oral poet whose compositions bear little relation to the Iliad and the Odyssey as they have been passed down.[20] According to Bentley, Homer "wrote a Sequel of Songs and Rhapsodies, to be sung by himself for small Earnings and good Cheer at Festivals and other Days of Merriment; the Ilias he wrote for men, and the Odysseis for the other Sex. These loose songs were not collected together in the Form of an epic Poem till Pisistratus' time, about 500 Years after."[20]
Friedrich August Wolf's Prolegomena ad Homerum, published in 1795, argued that much of the material later incorporated into the Iliad and the Odyssey was originally composed in the tenth century BC in the form of short, separate oral songs,[25][26][20] which passed through oral tradition for roughly four hundred years before being assembled into prototypical versions of the Iliad and the Odyssey in the sixth century BC by literate authors.[25][26][20] After being written down, Wolf maintained that the two poems were extensively edited, modernized, and eventually shaped into their present state as artistic unities.[25][26][20] Wolf and the "Analyst" school, which led the field in the nineteenth century, sought to recover the original, authentic poems which were thought to be concealed by later excrescences.[25][26][20][27]
Within the Analyst school were two camps: proponents of the "lay theory," which held that the Iliad and the Odyssey were put together from a large number of short, independent songs,[20] and proponents of the "nucleus theory", which held that Homer had originally composed shorter versions of the Iliad and the Odyssey, which later poets expanded and revised.[20] A small group of scholars opposed to the Analysts, dubbed "Unitarians", saw the later additions as superior, the work of a single inspired poet.[25][26][20] By around 1830, the central preoccupations of Homeric scholars, dealing with whether or not "Homer" actually existed, when and how the Homeric poems originated, how they were transmitted, when and how they were finally written down, and their overall unity, had been dubbed "the Homeric Question".[20]
Following World War I, the Analyst school began to fall out of favor among Homeric scholars.[20] It did not die out entirely, but it came to be increasingly seen as a discredited dead end.[20] Starting in around 1928, Milman Parry and Albert Lord, after their studies of folk bards in the Balkans, developed the "Oral-Formulaic Theory" that the Homeric poems were originally composed through improvised oral performances, which relied on traditional epithets and poetic formulas.[28][27][20] This theory found very wide scholarly acceptance[28][27][20] and explained many previously puzzling features of the Homeric poems, including their unusually archaic language, their extensive use of stock epithets, and their other "repetitive" features.[27] Many scholars concluded that the "Homeric question" had finally been answered.[20] Meanwhile, the 'Neoanalysts' sought to bridge the gap between the 'Analysts' and 'Unitarians'.[29][30] The Neoanalysts sought to trace the relationships between the Homeric poems and other epic poems, which have now been lost, but of which modern scholars do possess some patchy knowledge.[20] Knowledge of earlier versions of the epics can be derived from anomalies of structure and detail in our surviving version of the Iliad and Odyssey. These anomalies point to earlier versions of the Iliad in which Ajax played a more prominent role, in which the Achaean embassy to Achilles comprised different characters, and in which Patroclus was actually mistaken for Achilles by the Trojans. They point to earlier versions of the Odyssey in which Telemachus went in search of news of his father not to Menelaus in Sparta but to Idomeneus in Crete, in which Telemachus met up with his father in Crete and conspired with him to return to Ithaca disguised as the soothsayer Theoclymenus, and in which Penelope recognized Odysseus much earlier in the narrative and conspired with him in the destruction of the suitors.[31] Neoanalysis can be viewed as a form of Analysis informed by the principles of Oral Theory, recognizing as it does the existence and influence of previously existing tales and yet appreciating the technique of a single poet in adapting them to his Iliad and Odyssey.
Most contemporary scholars, although they disagree on other questions about the genesis of the poems, agree that the Iliad and the Odyssey were not produced by the same author, based on "the many differences of narrative manner, theology, ethics, vocabulary, and geographical perspective, and by the apparently imitative character of certain passages of the Odyssey in relation to the Iliad."[32][33][34][20] Nearly all scholars agree that the Iliad and the Odyssey are unified poems, in that each poem shows a clear overall design, and that they are not merely strung together from unrelated songs.[20] It is also generally agreed that each poem was composed mostly by a single author, who probably relied heavily on older oral traditions.[20] Nearly all scholars agree that the Doloneia in Book X of the Iliad is not part of the original poem, but rather a later insertion by a different poet.[20]
Some ancient scholars believed Homer to have been an eyewitness to the Trojan War; others thought he had lived up to 500 years afterwards.[35] Contemporary scholars continue to debate the date of the poems.[36][37][20] A long history of oral transmission lies behind the composition of the poems, complicating the search for a precise date.[38] At one extreme, Richard Janko has proposed a date for both poems to the eighth century BC based on linguistic analysis and statistics.[36][37] Barry B. Powell dates the composition of the Iliad and the Odyssey to sometime between 800 and 750 BC, based on the statement from Herodotus, who lived in the late fifth century BC, that Homer lived four hundred years before his own time "and not more" (καὶ οὐ πλέοσι), and on the fact that the poems do not mention hoplite battle tactics, inhumation, or literacy.[39] Martin Litchfield West has argued that the Iliad echoes the poetry of Hesiod, and that it must have been composed around 660–650 BC at the earliest, with the Odyssey up to a generation later.[40][41][20] He also interprets passages in the Iliad as showing knowledge of historical events that occurred in the ancient Near East during the middle of the seventh century BC, including the destruction of Babylon by Sennacherib in 689 BC and the Sack of Thebes by Ashurbanipal in 663/4 BC.[20] At the other extreme, a few American scholars such as Gregory Nagy see "Homer" as a continually evolving tradition, which grew much more stable as the tradition progressed, but which did not fully cease to continue changing and evolving until as late as the middle of the second century BC.[36][37][20]
"'Homer" is a name of unknown etymological origin, around which many theories were erected in antiquity. One such linkage was to the Greek ὅμηρος (hómēros), "hostage" (or "surety"). The explanations suggested by modern scholars tend to mirror their position on the overall Homeric question. Nagy interprets it as "he who fits (the song) together". West has advanced both possible Greek and Phoenician etymologies.[42][43]
Scholars continue to debate questions such as whether the Trojan War actually took place – and if so when and where – and to what extent the society depicted by Homer is based on his own or one which was, even at the time of the poems' composition, known only as legend. The Homeric epics are largely set in the east and center of the Mediterranean, with some scattered references to Egypt, Ethiopia and other distant lands, in a warlike society that resembles that of the Greek world slightly before the hypothesized date of the poems' composition.[44][45][46][47]
In ancient Greek chronology, the sack of Troy was dated to 1184 BC. By the nineteenth century, there was widespread scholarly skepticism that the Trojan War had ever happened and that Troy had even existed, but in 1873 Heinrich Schliemann announced to the world that he had discovered the ruins of Homer's Troy at Hissarlik in modern Turkey. Some contemporary scholars think the destruction of Troy VIIa circa 1220 BC was the origin of the myth of the Trojan War, others that the poem was inspired by multiple similar sieges that took place over the centuries.[48]
Most scholars now agree that the Homeric poems depict customs and elements of the material world that are derived from different periods of Greek history.[27][49][50] For instance, the heroes in the poems use bronze weapons, characteristic of the Bronze Age in which the poems are set, rather than the later Iron Age during which they were composed;[27][49][50] yet the same heroes are cremated (an Iron Age practice) rather than buried (as they were in the Bronze Age).[27][49][50] In some parts of the Homeric poems, heroes are accurately described as carrying large shields like those used by warriors during the Mycenaean period,[27] but, in other places, they are instead described carrying the smaller shields that were commonly used during the time when the poems were written in the early Iron Age.[27]
In the Iliad 10.260–265, Odysseus is described as wearing a helmet made of boar's tusks. Such helmets were not worn in Homer's time, but were commonly worn by aristocratic warriors between 1600 and 1150 BC.[51][52][53] The decipherment of Linear B in the 1950s by Michael Ventris and continued archaeological investigation has increased modern scholars' understanding of Aegean civilisation, which in many ways resembles the ancient Near East more than the society described by Homer.[54] Some aspects of the Homeric world are simply made up;[27] for instance, the Iliad 22.145–56 describes there being two springs that run near the city of Troy, one that runs steaming hot and the other that runs icy cold.[27] It is here that Hector takes his final stand against Achilles.[27] Archaeologists, however, have uncovered no evidence that springs of this description ever actually existed.[27]
The Homeric epics are written in an artificial literary language or 'Kunstsprache' only used in epic hexameter poetry. Homeric Greek shows features of multiple regional Greek dialects and periods, but is fundamentally based on Ionic Greek, in keeping with the tradition that Homer was from Ionia. Linguistic analysis suggests that the Iliad was composed slightly before the Odyssey, and that Homeric formulae preserve older features than other parts of the poems.[55][56]
The Homeric poems were composed in unrhymed dactylic hexameter; ancient Greek metre was quantity-based rather than stress-based.[57][58] Homer frequently uses set phrases such as epithets ('crafty Odysseus', 'rosy-fingered Dawn', 'owl-eyed Athena', etc.), Homeric formulae ('and then answered [him/her], Agamemnon, king of men', 'when the early-born rose-fingered Dawn came to light', 'thus he/she spoke'), simile, type scenes, ring composition and repetition. These habits aid the extemporizing bard, and are characteristic of oral poetry. For instance, the main words of a Homeric sentence are generally placed towards the beginning, whereas literate poets like Virgil or Milton use longer and more complicated syntactical structures. Homer then expands on these ideas in subsequent clauses; this technique is called parataxis.[59]
The so-called 'type scenes' (typischen Scenen) were named by Walter Arend in 1933. He noted that Homer often, when describing frequently recurring activities such as eating, praying, fighting and dressing, used blocks of set phrases in sequence that were then elaborated by the poet. The 'Analyst' school had considered these repetitions as un-Homeric, whereas Arend interpreted them philosophically. Parry and Lord noted that these conventions are found in many other cultures.[60][61]
'Ring composition' or chiastic structure (when a phrase or idea is repeated at both the beginning and end of a story, or a series of such ideas first appears in the order A, B, C... before being reversed as ...C, B, A) has been observed in the Homeric epics. Opinion differs as to whether these occurrences are a conscious artistic device, a mnemonic aid or a spontaneous feature of human storytelling.[62][63]
Both of the Homeric poems begin with an invocation to the Muse.[64] In the Iliad, the poet invokes her to sing of "the anger of Achilles",[64] and, in the Odyssey, he asks her to sing of "the man of many ways".[64] A similar opening was later employed by Virgil in his Aeneid.[64]
The orally transmitted Homeric poems were put into written form at some point between the eighth and sixth centuries BC. Some scholars believe that they were dictated to a scribe by the poet and that our inherited versions of the Iliad and Odyssey were in origin orally-dictated texts.[65] Albert Lord noted that the Balkan bards that he was studying revised and expanded their songs in their process of dictating.[66] Some scholars hypothesize that a similar process of revision and expansion occurred when the Homeric poems were first written down.[67][68] Other scholars hold that, after the poems were created in the eighth century, they continued to be orally transmitted with considerable revision until they were written down in the sixth century.[69] After textualisation, the poems were each divided into 24 rhapsodes, today referred to as books, and labelled by the letters of the Greek alphabet. Most scholars attribute the book divisions to the Hellenistic scholars of Alexandria, in Egypt.[70] Some trace the divisions back further to the Classical period.[71] Very few credit Homer himself with the divisions.[72]
In antiquity, it was widely held that the Homeric poems were collected and organised in Athens in the late sixth century BC by the tyrant Peisistratos (died 528/7 BC), in what subsequent scholars have dubbed the "Peisistratean recension".[73][21] The idea that the Homeric poems were originally transmitted orally and first written down during the reign of Peisistratos is referenced by the first-century BC Roman orator Cicero and is also referenced in a number of other surviving sources, including two ancient Lives of Homer.[21] From around 150 BC, the texts of the Homeric poems seem to have become relatively established. After the establishment of the Library of Alexandria, Homeric scholars such as Zenodotus of Ephesus, Aristophanes of Byzantium and in particular Aristarchus of Samothrace helped establish a canonical text.[74]
The first printed edition of Homer was produced in 1488 in Milan, Italy. Today scholars use medieval manuscripts, papyri and other sources; some argue for a "multi-text" view, rather than seeking a single definitive text. The nineteenth-century edition of Arthur Ludwich mainly follows Aristarchus's work, whereas van Thiel's (1991, 1996) follows the medieval vulgate. Others, such as Martin West (1998–2000) or T.W. Allen, fall somewhere between these two extremes.[74]
This is a partial list of translations into English of Homer's Iliad and Odyssey.
en/2595.html.txt
ADDED
@@ -0,0 +1,57 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
|
2 |
+
|
3 |
+
Homer (/ˈhoʊmər/; Ancient Greek: Ὅμηρος Greek pronunciation: [hómɛːros], Hómēros) is the presumed author of the Iliad and the Odyssey, two epic poems that are the central works of ancient Greek literature. The Iliad is set during the Trojan War, the ten-year siege of the city of Troy by a coalition of Greek kingdoms. It focuses on a quarrel between King Agamemnon and the warrior Achilles lasting a few weeks during the last year of the war. The Odyssey focuses on the ten-year journey home of Odysseus, king of Ithaca, after the fall of Troy. Many accounts of Homer's life circulated in classical antiquity, the most widespread being that he was a blind bard from Ionia, a region of central coastal Anatolia in present-day Turkey. Modern scholars consider these accounts legendary.[2][3][4]
|
4 |
+
|
5 |
+
The Homeric Question – concerning by whom, when, where and under what circumstances the Iliad and Odyssey were composed – continues to be debated. Broadly speaking, modern scholarly opinion falls into two groups. One holds that most of the Iliad and (according to some) the Odyssey are the works of a single poet of genius. The other considers the Homeric poems to be the result of a process of working and reworking by many contributors, and that "Homer" is best seen as a label for an entire tradition.[4] It is generally accepted that the poems were composed at some point around the late eighth or early seventh century BC.[5]
|
6 |
+
|
7 |
+
The poems are in Homeric Greek, also known as Epic Greek, a literary language which shows a mixture of features of the Ionic and Aeolic dialects from different centuries; the predominant influence is Eastern Ionic.[6][7] Most researchers believe that the poems were originally transmitted orally.[8] From antiquity until the present day, the influence of Homeric epic on Western civilization has been great, inspiring many of its most famous works of literature, music, art and film.[9] The Homeric epics were the greatest influence on ancient Greek culture and education; to Plato, Homer was simply the one who "has taught Greece" – ten Hellada pepaideuken.[10][11]
|
8 |
+
|
9 |
+
Today only the Iliad and Odyssey are associated with the name 'Homer'. In antiquity, a very large number of other works were sometimes attributed to him, including the Homeric Hymns, the Contest of Homer and Hesiod, the Little Iliad, the Nostoi, the Thebaid, the Cypria, the Epigoni, the comic mini-epic Batrachomyomachia ("The Frog-Mouse War"), the Margites, the Capture of Oechalia, and the Phocais. These claims are not considered authentic today and were by no means universally accepted in the ancient world. As with the multitude of legends surrounding Homer's life, they indicate little more than the centrality of Homer to ancient Greek culture.[12][13][14]
|
10 |
+
|
11 |
+
Many traditions circulated in the ancient world concerning Homer, most of which are lost. Modern scholarly consensus is that they have no value as history. Some claims were established early and repeated often. They include that Homer was blind (taking as self-referential a passage describing the blind bard Demodocus[15][16]), that he was born in Chios, that he was the son of the river Meles and the nymph Critheïs, that he was a wandering bard, that he composed a varying list of other works (the "Homerica"), that he died either in Ios or after failing to solve a riddle set by fishermen, and various explanations for the name "Homer". The two best known ancient biographies of Homer are the Life of Homer by the Pseudo-Herodotus and the Contest of Homer and Hesiod.[17][18]
|
12 |
+
|
13 |
+
The study of Homer is one of the oldest topics in scholarship, dating back to antiquity.[19][20][21] Nonetheless, the aims of Homeric studies have changed over the course of the millennia.[19] The earliest preserved comments on Homer concern his treatment of the gods, which hostile critics such as the poet Xenophanes of Colophon denounced as immoral.[21] The allegorist Theagenes of Rhegium is said to have defended Homer by arguing that the Homeric poems are allegories.[21] The Iliad and the Odyssey were widely used as school texts in ancient Greek and Hellenistic cultures.[19][21][22] They were the first literary works taught to all students.[22] The Iliad, particularly its first few books, was far more intently studied than the Odyssey during the Hellenistic and Roman periods.[22]
|
14 |
+
|
15 |
+
As a result of the poems' prominence in classical Greek education, extensive commentaries on them developed to explain parts of the poems that were culturally or linguistically difficult.[19][21] During the Hellenistic and Roman periods, many interpreters, especially the Stoics, who believed that Homeric poems conveyed Stoic doctrines, regarded them as allegories, containing hidden wisdom.[21] Perhaps partially because of the Homeric poems' extensive use in education, many authors believed that Homer's original purpose had been to educate.[21] Homer's wisdom became so widely praised that he began to acquire the image of almost a prototypical philosopher.[21] Byzantine scholars such as Eustathius of Thessalonica and John Tzetzes produced commentaries, extensions and scholia to Homer, especially in the twelfth century.[23][21] Eustathius's commentary on the Iliad alone is massive, sprawling over nearly 4,000 oversized pages in a twenty-first century printed version and his commentary on the Odyssey an additional nearly 2,000.[21]
|
16 |
+
|
17 |
+
In 1488, the Greek scholar Demetrios Chalkokondyles published the editio princeps of the Homeric poems.[21] The earliest modern Homeric scholars started with the same basic approaches towards the Homeric poems as scholars in antiquity.[21][20][19] The allegorical interpretation of the Homeric poems that had been so prevalent in antiquity returned to become the prevailing view of the Renaissance.[21] Renaissance humanists praised Homer as the archetypically wise poet, whose writings contain hidden wisdom, disguised through allegory.[21] In western Europe during the Renaissance, Virgil was more widely read than Homer and Homer was often seen through a Virgilian lens.[24]
|
18 |
+
|
19 |
+
In 1664, contradicting the widespread praise of Homer as the epitome of wisdom, François Hédelin, abbé d'Aubignac wrote a scathing attack on the Homeric poems, declaring that they were incoherent, immoral, tasteless, and without style, that Homer never existed, and that the poems were hastily cobbled together by incompetent editors from unrelated oral songs.[20] Fifty years later, the English scholar Richard Bentley concluded that Homer did exist, but that he was an obscure, prehistoric oral poet whose compositions bear little relation to the Iliad and the Odyssey as they have been passed down.[20] According to Bentley, Homer "wrote a Sequel of Songs and Rhapsodies, to be sung by himself for small Earnings and good Cheer at Festivals and other Days of Merriment; the Ilias he wrote for men, and the Odysseis for the other Sex. These loose songs were not collected together in the Form of an epic Poem till Pisistratus' time, about 500 Years after."[20]
|
20 |
+
|
21 |
+
Friedrich August Wolf's Prolegomena ad Homerum, published in 1795, argued that much of the material later incorporated into the Iliad and the Odyssey was originally composed in the tenth century BC in the form of short, separate oral songs,[25][26][20] which passed through oral tradition for roughly four hundred years before being assembled into prototypical versions of the Iliad and the Odyssey in the sixth century BC by literate authors.[25][26][20] After being written down, Wolf maintained that the two poems were extensively edited, modernized, and eventually shaped into their present state as artistic unities.[25][26][20] Wolf and the "Analyst" school, which led the field in the nineteenth century, sought to recover the original, authentic poems which were thought to be concealed by later excrescences.[25][26][20][27]
|
22 |
+
|
23 |
+
Within the Analyst school were two camps: proponents of the "lay theory," which held that the Iliad and the Odyssey were put together from a large number of short, independent songs,[20] and proponents of the "nucleus theory", which held that Homer had originally composed shorter versions of the Iliad and the Odyssey, which later poets expanded and revised.[20] A small group of scholars opposed to the Analysts, dubbed "Unitarians", saw the later additions as superior, the work of a single inspired poet.[25][26][20] By around 1830, the central preoccupations of Homeric scholars, dealing with whether or not "Homer" actually existed, when and how the Homeric poems originated, how they were transmitted, when and how they were finally written down, and their overall unity, had been dubbed "the Homeric Question".[20]
Following World War I, the Analyst school began to fall out of favor among Homeric scholars.[20] It did not die out entirely, but it came to be increasingly seen as a discredited dead end.[20] Starting in around 1928, Milman Parry and Albert Lord, after their studies of folk bards in the Balkans, developed the "Oral-Formulaic Theory" that the Homeric poems were originally composed through improvised oral performances, which relied on traditional epithets and poetic formulas.[28][27][20] This theory found very wide scholarly acceptance[28][27][20] and explained many previously puzzling features of the Homeric poems, including their unusually archaic language, their extensive use of stock epithets, and their other "repetitive" features.[27] Many scholars concluded that the "Homeric question" had finally been answered.[20] Meanwhile, the 'Neoanalysts' sought to bridge the gap between the 'Analysts' and 'Unitarians'.[29][30] The Neoanalysts sought to trace the relationships between the Homeric poems and other epic poems, which have now been lost, but of which modern scholars do possess some patchy knowledge.[20] Knowledge of earlier versions of the epics can be derived from anomalies of structure and detail in our surviving version of the Iliad and Odyssey. These anomalies point to earlier versions of the Iliad in which Ajax played a more prominent role, in which the Achaean embassy to Achilles comprised different characters, and in which Patroclus was actually mistaken for Achilles by the Trojans. They point to earlier versions of the Odyssey in which Telemachus went in search of news of his father not to Menelaus in Sparta but to Idomeneus in Crete, in which Telemachus met up with his father in Crete and conspired with him to return to Ithaca disguised as the soothsayer Theoclymenus, and in which Penelope recognized Odysseus much earlier in the narrative and conspired with him in the destruction of the suitors.[31] Neoanalysis can be viewed as a form of Analysis informed by the principles of Oral Theory, recognizing as it does the existence and influence of previously existing tales and yet appreciating the technique of a single poet in adapting them to his Iliad and Odyssey.
Most contemporary scholars, although they disagree on other questions about the genesis of the poems, agree that the Iliad and the Odyssey were not produced by the same author, based on "the many differences of narrative manner, theology, ethics, vocabulary, and geographical perspective, and by the apparently imitative character of certain passages of the Odyssey in relation to the Iliad."[32][33][34][20] Nearly all scholars agree that the Iliad and the Odyssey are unified poems, in that each poem shows a clear overall design, and that they are not merely strung together from unrelated songs.[20] It is also generally agreed that each poem was composed mostly by a single author, who probably relied heavily on older oral traditions.[20] Nearly all scholars agree that the Doloneia in Book X of the Iliad is not part of the original poem, but rather a later insertion by a different poet.[20]
Some ancient scholars believed Homer to have been an eyewitness to the Trojan War; others thought he had lived up to 500 years afterwards.[35] Contemporary scholars continue to debate the date of the poems.[36][37][20] A long history of oral transmission lies behind the composition of the poems, complicating the search for a precise date.[38] At one extreme, Richard Janko has proposed a date for both poems to the eighth century BC based on linguistic analysis and statistics.[36][37] Barry B. Powell dates the composition of the Iliad and the Odyssey to sometime between 800 and 750 BC, based on the statement from Herodotus, who lived in the late fifth century BC, that Homer lived four hundred years before his own time "and not more" (καὶ οὐ πλέοσι), and on the fact that the poems do not mention hoplite battle tactics, inhumation, or literacy.[39] Martin Litchfield West has argued that the Iliad echoes the poetry of Hesiod, and that it must have been composed around 660–650 BC at the earliest, with the Odyssey up to a generation later.[40][41][20] He also interprets passages in the Iliad as showing knowledge of historical events that occurred in the ancient Near East during the middle of the seventh century BC, including the destruction of Babylon by Sennacherib in 689 BC and the Sack of Thebes by Ashurbanipal in 663/4 BC.[20] At the other extreme, a few American scholars such as Gregory Nagy see "Homer" as a continually evolving tradition, which grew much more stable as the tradition progressed, but which did not fully cease to continue changing and evolving until as late as the middle of the second century BC.[36][37][20]
"'Homer" is a name of unknown etymological origin, around which many theories were erected in antiquity. One such linkage was to the Greek ὅμηρος (hómēros), "hostage" (or "surety"). The explanations suggested by modern scholars tend to mirror their position on the overall Homeric question. Nagy interprets it as "he who fits (the song) together". West has advanced both possible Greek and Phoenician etymologies.[42][43]
Scholars continue to debate questions such as whether the Trojan War actually took place – and if so when and where – and to what extent the society depicted by Homer is based on his own or one which was, even at the time of the poems' composition, known only as legend. The Homeric epics are largely set in the east and center of the Mediterranean, with some scattered references to Egypt, Ethiopia and other distant lands, in a warlike society that resembles that of the Greek world slightly before the hypothesized date of the poems' composition.[44][45][46][47]
In ancient Greek chronology, the sack of Troy was dated to 1184 BC. By the nineteenth century, there was widespread scholarly skepticism that the Trojan War had ever happened and that Troy had even existed, but in 1873 Heinrich Schliemann announced to the world that he had discovered the ruins of Homer's Troy at Hissarlik in modern Turkey. Some contemporary scholars think the destruction of Troy VIIa circa 1220 BC was the origin of the myth of the Trojan War, others that the poem was inspired by multiple similar sieges that took place over the centuries.[48]
Most scholars now agree that the Homeric poems depict customs and elements of the material world that are derived from different periods of Greek history.[27][49][50] For instance, the heroes in the poems use bronze weapons, characteristic of the Bronze Age in which the poems are set, rather than the later Iron Age during which they were composed;[27][49][50] yet the same heroes are cremated (an Iron Age practice) rather than buried (as they were in the Bronze Age).[27][49][50] In some parts of the Homeric poems, heroes are accurately described as carrying large shields like those used by warriors during the Mycenaean period,[27] but, in other places, they are instead described carrying the smaller shields that were commonly used during the time when the poems were written in the early Iron Age.[27]
In the Iliad 10.260–265, Odysseus is described as wearing a helmet made of boar's tusks. Such helmets were not worn in Homer's time, but were commonly worn by aristocratic warriors between 1600 and 1150 BC.[51][52][53] The decipherment of Linear B in the 1950s by Michael Ventris and continued archaeological investigation has increased modern scholars' understanding of Aegean civilisation, which in many ways resembles the ancient Near East more than the society described by Homer.[54] Some aspects of the Homeric world are simply made up;[27] for instance, the Iliad 22.145–56 describes there being two springs that run near the city of Troy, one that runs steaming hot and the other that runs icy cold.[27] It is here that Hector takes his final stand against Achilles.[27] Archaeologists, however, have uncovered no evidence that springs of this description ever actually existed.[27]
The Homeric epics are written in an artificial literary language or 'Kunstsprache' only used in epic hexameter poetry. Homeric Greek shows features of multiple regional Greek dialects and periods, but is fundamentally based on Ionic Greek, in keeping with the tradition that Homer was from Ionia. Linguistic analysis suggests that the Iliad was composed slightly before the Odyssey, and that Homeric formulae preserve older features than other parts of the poems.[55][56]
The Homeric poems were composed in unrhymed dactylic hexameter; ancient Greek metre was quantity-based rather than stress-based.[57][58] Homer frequently uses set phrases such as epithets ('crafty Odysseus', 'rosy-fingered Dawn', 'owl-eyed Athena', etc.), Homeric formulae ('and then answered [him/her], Agamemnon, king of men', 'when the early-born rose-fingered Dawn came to light', 'thus he/she spoke'), simile, type scenes, ring composition and repetition. These habits aid the extemporizing bard, and are characteristic of oral poetry. For instance, the main words of a Homeric sentence are generally placed towards the beginning, whereas literate poets like Virgil or Milton use longer and more complicated syntactical structures. Homer then expands on these ideas in subsequent clauses; this technique is called parataxis.[59]
The so-called 'type scenes' (typischen Scenen), were named by Walter Arend in 1933. He noted that Homer often, when describing frequently recurring activities such as eating, praying, fighting and dressing, used blocks of set phrases in sequence that were then elaborated by the poet. The 'Analyst' school had considered these repetitions as un-Homeric, whereas Arend interpreted them philosophically. Parry and Lord noted that these conventions are found in many other cultures.[60][61]
'Ring composition' or chiastic structure (when a phrase or idea is repeated at both the beginning and end of a story, or a series of such ideas first appears in the order A, B, C... before being reversed as ...C, B, A) has been observed in the Homeric epics. Opinion differs as to whether these occurrences are a conscious artistic device, a mnemonic aid or a spontaneous feature of human storytelling.[62][63]
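The reversal at the heart of ring composition can be made concrete with a minimal sketch: a sequence of motif labels is chiastic exactly when it reads the same forwards and backwards. The Python below is purely illustrative, and the motif labels in it are invented rather than drawn from any particular Homeric passage.

def is_ring(motifs):
    # A sequence is chiastic (A, B, C, ..., C, B, A, with an optional
    # single pivot in the middle) exactly when it equals its own reversal.
    motifs = list(motifs)
    return motifs == motifs[::-1]

print(is_ring(["arrival", "feast", "tale", "feast", "arrival"]))  # True: A B C B A
print(is_ring(["arrival", "feast", "tale", "departure"]))         # False: no reversal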
Both of the Homeric poems begin with an invocation to the Muse.[64] In the Iliad, the poet invokes her to sing of "the anger of Achilles",[64] and, in the Odyssey, he asks her to sing of "the man of many ways".[64] A similar opening was later employed by Virgil in his Aeneid.[64]
The orally transmitted Homeric poems were put into written form at some point between the eighth and sixth centuries BC. Some scholars believe that they were dictated to a scribe by the poet and that our inherited versions of the Iliad and Odyssey were in origin orally-dictated texts.[65] Albert Lord noted that the Balkan bards that he was studying revised and expanded their songs in their process of dictating.[66] Some scholars hypothesize that a similar process of revision and expansion occurred when the Homeric poems were first written down.[67][68] Other scholars hold that, after the poems were created in the eighth century, they continued to be orally transmitted with considerable revision until they were written down in the sixth century.[69] After textualisation, the poems were each divided into 24 rhapsodes, today referred to as books, and labelled by the letters of the Greek alphabet. Most scholars attribute the book divisions to the Hellenistic scholars of Alexandria, in Egypt.[70] Some trace the divisions back further to the Classical period.[71] Very few credit Homer himself with the divisions.[72]
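Since the Greek alphabet has exactly 24 letters, the book labels run from alpha to omega with nothing left over. The short Python sketch below, included only as an illustration, generates the book-to-letter mapping; note that U+03A2, the unassigned code point where a capital final sigma would sit, is skipped.

# The 24 classical Greek capital letters, alpha (U+0391) through omega (U+03A9);
# U+03A2 is an unassigned code point and is skipped.
letters = [chr(c) for c in range(0x391, 0x3AA) if c != 0x3A2]
assert len(letters) == 24
for book, letter in enumerate(letters, start=1):
    print(f"Book {book:2d} -> {letter}")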
In antiquity, it was widely held that the Homeric poems were collected and organised in Athens in the late sixth century BC by the tyrant Peisistratos (died 528/7 BC), in what subsequent scholars have dubbed the "Peisistratean recension".[73][21] The idea that the Homeric poems were originally transmitted orally and first written down during the reign of Peisistratos is referenced by the first-century BC Roman orator Cicero and is also referenced in a number of other surviving sources, including two ancient Lives of Homer.[21] From around 150 BC, the texts of the Homeric poems seem to have become relatively established. After the establishment of the Library of Alexandria, Homeric scholars such as Zenodotus of Ephesus, Aristophanes of Byzantium and in particular Aristarchus of Samothrace helped establish a canonical text.[74]
The first printed edition of Homer was produced in 1488 in Milan, Italy. Today scholars use medieval manuscripts, papyri and other sources; some argue for a "multi-text" view, rather than seeking a single definitive text. The nineteenth-century edition of Arthur Ludwich mainly follows Aristarchus's work, whereas van Thiel's (1991, 1996) follows the medieval vulgate. Others, such as Martin West (1998–2000) or T.W. Allen, fall somewhere between these two extremes.[74]
This is a partial list of translations into English of Homer's Iliad and Odyssey.
en/2596.html.txt
ADDED
@@ -0,0 +1,57 @@
Homer (/ˈhoʊmər/; Ancient Greek: Ὅμηρος Greek pronunciation: [hómɛːros], Hómēros) is the presumed author of the Iliad and the Odyssey, two epic poems that are the central works of ancient Greek literature. The Iliad is set during the Trojan War, the ten-year siege of the city of Troy by a coalition of Greek kingdoms. It focuses on a quarrel between King Agamemnon and the warrior Achilles lasting a few weeks during the last year of the war. The Odyssey focuses on the ten-year journey home of Odysseus, king of Ithaca, after the fall of Troy. Many accounts of Homer's life circulated in classical antiquity, the most widespread being that he was a blind bard from Ionia, a region of central coastal Anatolia in present-day Turkey. Modern scholars consider these accounts legendary.[2][3][4]
The Homeric Question – concerning by whom, when, where and under what circumstances the Iliad and Odyssey were composed – continues to be debated. Broadly speaking, modern scholarly opinion falls into two groups. One holds that most of the Iliad and (according to some) the Odyssey are the works of a single poet of genius. The other considers the Homeric poems to be the result of a process of working and reworking by many contributors, and that "Homer" is best seen as a label for an entire tradition.[4] It is generally accepted that the poems were composed at some point around the late eighth or early seventh century BC.[5]
The poems are in Homeric Greek, also known as Epic Greek, a literary language which shows a mixture of features of the Ionic and Aeolic dialects from different centuries; the predominant influence is Eastern Ionic.[6][7] Most researchers believe that the poems were originally transmitted orally.[8] From antiquity until the present day, the influence of Homeric epic on Western civilization has been great, inspiring many of its most famous works of literature, music, art and film.[9] The Homeric epics were the greatest influence on ancient Greek culture and education; to Plato, Homer was simply the one who "has taught Greece" – ten Hellada pepaideuken.[10][11]
Today only the Iliad and Odyssey are associated with the name 'Homer'. In antiquity, a very large number of other works were sometimes attributed to him, including the Homeric Hymns, the Contest of Homer and Hesiod, the Little Iliad, the Nostoi, the Thebaid, the Cypria, the Epigoni, the comic mini-epic Batrachomyomachia ("The Frog-Mouse War"), the Margites, the Capture of Oechalia, and the Phocais. These claims are not considered authentic today and were by no means universally accepted in the ancient world. As with the multitude of legends surrounding Homer's life, they indicate little more than the centrality of Homer to ancient Greek culture.[12][13][14]
Many traditions circulated in the ancient world concerning Homer, most of which are lost. Modern scholarly consensus is that they have no value as history. Some claims were established early and repeated often. They include that Homer was blind (taking as self-referential a passage describing the blind bard Demodocus[15][16]), that he was born in Chios, that he was the son of the river Meles and the nymph Critheïs, that he was a wandering bard, that he composed a varying list of other works (the "Homerica"), that he died either in Ios or after failing to solve a riddle set by fishermen, and various explanations for the name "Homer". The two best known ancient biographies of Homer are the Life of Homer by the Pseudo-Herodotus and the Contest of Homer and Hesiod.[17][18]
The study of Homer is one of the oldest topics in scholarship, dating back to antiquity.[19][20][21] Nonetheless, the aims of Homeric studies have changed over the course of the millennia.[19] The earliest preserved comments on Homer concern his treatment of the gods, which hostile critics such as the poet Xenophanes of Colophon denounced as immoral.[21] The allegorist Theagenes of Rhegium is said to have defended Homer by arguing that the Homeric poems are allegories.[21] The Iliad and the Odyssey were widely used as school texts in ancient Greek and Hellenistic cultures.[19][21][22] They were the first literary works taught to all students.[22] The Iliad, particularly its first few books, was far more intently studied than the Odyssey during the Hellenistic and Roman periods.[22]
en/2597.html.txt
ADDED
@@ -0,0 +1,82 @@
Homer Jay Simpson is a fictional character and the main protagonist of the American animated sitcom The Simpsons. He is voiced by Dan Castellaneta and first appeared on television, along with the rest of his family, in The Tracey Ullman Show short "Good Night" on April 19, 1987. Homer was created and designed by cartoonist Matt Groening while he was waiting in the lobby of James L. Brooks' office. Groening had been called to pitch a series of shorts based on his comic strip Life in Hell but instead decided to create a new set of characters. He named the character after his father, Homer Groening. After appearing for three seasons on The Tracey Ullman Show, the Simpson family got their own series on Fox, which debuted December 17, 1989.
As patriarch of the eponymous family, Homer and his wife Marge have three children: Bart, Lisa and Maggie. As the family's provider, he works at the Springfield Nuclear Power Plant as a safety inspector. Homer embodies many American working-class stereotypes: he is obese, immature, outspoken, aggressive, balding, lazy, ignorant, unprofessional, arrogant, and addicted to beer, junk food and watching television. However, he often tries to be a good man and is staunchly protective of his family, especially when they need him the most. Despite the suburban blue-collar routine of his life, he has had a number of remarkable experiences, including going to space, climbing the tallest mountain in Springfield by himself, fighting former President George H. W. Bush and winning a Grammy Award as a member of a barbershop quartet.
In the shorts and earlier episodes, Castellaneta voiced Homer with a loose impression of Walter Matthau; however, during the second and third seasons of the half-hour show, Homer's voice evolved to become more robust, to allow the expression of a fuller range of emotions. He has appeared in other media relating to The Simpsons—including video games, The Simpsons Movie, The Simpsons Ride, commercials, and comic books—and inspired an entire line of merchandise. His signature catchphrase, the annoyed grunt "D'oh!", has been included in The New Oxford Dictionary of English since 1998 and the Oxford English Dictionary since 2001.
Homer is one of the most influential characters in the history of television, and is widely considered to be an American cultural icon. The British newspaper The Sunday Times described him as "The greatest comic creation of [modern] time". He was named the greatest character "of the last 20 years" in 2010 by Entertainment Weekly, was ranked the second-greatest cartoon character by TV Guide, behind Bugs Bunny, and was voted the greatest television character of all time by Channel 4 viewers. For voicing Homer, Castellaneta has won four Primetime Emmy Awards for Outstanding Voice-Over Performance and a special-achievement Annie Award. In 2000, Homer and his family were awarded a star on the Hollywood Walk of Fame.
Homer Jay Simpson is the bumbling husband of Marge, and father to Bart, Lisa and Maggie Simpson.[1] He is the son of Mona and Abraham "Grampa" Simpson. Homer held over 188 different jobs in the first 400 episodes of The Simpsons.[2] In most episodes, he works as the nuclear safety inspector at the Springfield Nuclear Power Plant, a position for which he is wholly unsuited but which he has held since "Homer's Odyssey", the third episode of the series.[3] At the nuclear plant, Homer is often ignored and completely forgotten by his boss Mr. Burns, and he constantly falls asleep and neglects his duties. Matt Groening has stated that he decided to have Homer work at the power plant because of the potential for Homer to wreak severe havoc.[4] Each of his other jobs has lasted only one episode. In the first half of the series, the writers developed an explanation about how he got fired from the plant and was then rehired in every episode. In later episodes, he often began a new job on impulse, without any mention of his regular employment.[5]
The Simpsons uses a floating timeline in which the characters never physically age, and, as such, the show is generally assumed to be always set in the current year. Nevertheless, in several episodes, events in Homer's life have been linked to specific time periods.[1] "Mother Simpson" (season seven, 1995) depicts Homer's mother, Mona, as a radical who went into hiding in 1969 following a run-in with the law;[6] "The Way We Was" (season two, 1991) shows Homer falling in love with Marge Bouvier as a senior at Springfield High School in 1974;[7] and "I Married Marge" (season three, 1991) implies that Marge became pregnant with Bart in 1980.[8] However, the episode "That '90s Show" (season 19, 2008) contradicted much of this backstory, portraying Homer and Marge as a twentysomething childless couple in the early 1990s.[9]
Due to the floating timeline, Homer's age has changed occasionally as the series developed; he was 34 in the early episodes,[7] 36 in season four,[10] 38 and 39 in season eight,[11] and 40 in the eighteenth season,[12] although even in those seasons his age is inconsistent.[1] During Bill Oakley and Josh Weinstein's period as showrunners, they found that as they aged, Homer seemed to become older too, so they increased his age to 38. His height is 6' (1.83 m).[13]
During the opening of the 71st Primetime Emmy Awards broadcast on Fox on September 22, 2019, Homer Simpson was introduced as the host of the show. After some brief opening comments, however, a piano fell on him ("D'oh!").[14] The remainder of the award show was then broadcast without a host.[15]
Groening named the characters after members of his own family, with Homer taking the name of Groening's father, who had himself been named after the ancient Greek poet.[16][17][18] Very little else of Homer's character was based on him, and to prove that the meaning behind Homer's name was not significant, Groening later named his own son Homer.[19][20] According to Groening, "Homer originated with my goal to both amuse my real father, and just annoy him a little bit. My father was an athletic, creative, intelligent filmmaker and writer, and the only thing he had in common with Homer was a love of donuts."[21] Although Groening has stated in several interviews that Homer was named after his father, he also claimed in several 1990 interviews that a character in the 1939 Nathanael West novel The Day of the Locust was the inspiration for naming Homer.[1][22][23] Homer's middle initial "J", which stands for "Jay",[24] is a "tribute" to animated characters such as Bullwinkle J. Moose and Rocket J. Squirrel from The Rocky and Bullwinkle Show, who got their middle initial from Jay Ward.[25]
Homer made his debut with the rest of the Simpson family on April 19, 1987, in The Tracey Ullman Show short "Good Night".[26] In 1989, the shorts were adapted into The Simpsons, a half-hour series airing on the Fox Broadcasting Company. Homer and the Simpson family remained the main characters on this new show.[27]
As currently depicted in the series, Homer's everyday clothing consists of a white shirt with short sleeves and open collar, blue pants, and gray shoes. He is overweight and bald, except for a fringe of hair around the back and sides of his head and two curling hairs on top, and his face always sports a growth of beard stubble that instantly regrows whenever he shaves.
The entire Simpson family was designed so that they would be recognizable in silhouette.[28] The family was crudely drawn because Groening had submitted basic sketches to the animators, assuming they would clean them up; instead, they just traced over his drawings.[16] By coincidence or not, Homer's look bears a resemblance to the cartoon character Adamsson, created by Swedish cartoonist Oscar Jacobsson in 1920.[29] Homer's physical features are generally not used in other characters; for example, in the later seasons, no characters other than Homer, Grampa Simpson, Lenny Leonard, and Krusty the Clown have a similar beard line.[30] When Groening originally designed Homer, he put his initials into the character's hairline and ear: the hairline resembled an 'M', and the right ear resembled a 'G'. Groening decided that this would be too distracting and redesigned the ear to look normal. However, he still draws the ear as a 'G' when he draws pictures of Homer for fans.[31] The basic shape of Homer's head is described by director Mark Kirkland as a tube-shaped coffee can with a salad bowl on top.[32] During the shorts, the animators experimented with the way Homer would move his mouth when talking. At one point, his mouth would stretch out back "beyond his beardline"; but this was dropped when it got "out of control."[33] In some early episodes, Homer's hair was rounded rather than sharply pointed because animation director Wes Archer felt it should look disheveled. Homer's hair evolved to be consistently pointed.[34] During the first three seasons, Homer's design for some close-up shots included small lines which were meant to be eyebrows. Groening strongly disliked them and they were eventually dropped.[34]
In the season seven (1995) episode "Treehouse of Horror VI", Homer was computer animated into a three-dimensional character for the first time for the "Homer3" segment of the episode. The computer animation directors at Pacific Data Images worked hard not to "reinvent the character".[35] In the final minute of the segment, the 3D Homer ends up in a real world, live-action Los Angeles. The scene was directed by David Mirkin and was the first time a Simpsons character had been in the real world in the series.[35] Because "Lisa's Wedding" (season six, 1995) is set fifteen years in the future, Homer's design was altered to make him older in the episode. He is heavier; one of the hairs on top of his head was removed; and an extra line was placed under the eye. A similar design has been used in subsequent flashforwards.[36]
Homer's voice is performed by Dan Castellaneta, who voices numerous other characters, including Grampa Simpson, Krusty the Clown, Barney Gumble, Groundskeeper Willie, Mayor Quimby and Hans Moleman. Castellaneta had been part of the regular cast of The Tracey Ullman Show and had previously done some voice-over work in Chicago alongside his wife Deb Lacusta. Voices were needed for the Simpsons shorts, so the producers decided to ask Castellaneta and fellow cast member Julie Kavner to voice Homer and Marge rather than hire more actors.[37][38] In the shorts and first few seasons of the half-hour show, Homer's voice is different from the majority of the series. The voice began as a loose impression of Walter Matthau, but Castellaneta could not "get enough power behind that voice",[38] or sustain his Matthau impression for the nine- to ten-hour-long recording sessions, and had to find something easier.[2] During the second and third seasons of the half-hour show, Castellaneta "dropped the voice down"[37] and developed it as more versatile and humorous, allowing Homer a fuller range of emotions.[39]
Castellaneta's normal speaking voice does not bear any resemblance to Homer's.[40] To perform Homer's voice, Castellaneta lowers his chin to his chest[38] and is said to "let his I.Q. go".[41] While in this state, he has ad-libbed several of Homer's least intelligent comments,[41] such as the line "S-M-R-T; I mean, S-M-A-R-T!" from "Homer Goes to College" (season five, 1993) which was a genuine mistake made by Castellaneta during recording.[42] Castellaneta likes to stay in character during recording sessions,[43] and he tries to visualize a scene so that he can give the proper voice to it.[44] Despite Homer's fame, Castellaneta claims he is rarely recognized in public, "except, maybe, by a die-hard fan".[43]
"Homer's Barbershop Quartet" (season five, 1993) is the only episode where Homer's voice was provided by someone other than Castellaneta. The episode features Homer forming a barbershop quartet called The Be Sharps; and, at some points, his singing voice is provided by a member of The Dapper Dans.[45] The Dapper Dans had recorded the singing parts for all four members of The Be Sharps. Their singing was intermixed with the normal voice actors' voices, often with a regular voice actor singing the melody and the Dapper Dans providing backup.[46]
Until 1998, Castellaneta was paid $30,000 per episode. During a pay dispute in 1998, Fox threatened to replace the six main voice actors with new actors, going as far as preparing for casting of new voices.[47] However, the dispute was soon resolved and he received $125,000 per episode until 2004 when the voice actors demanded that they be paid $360,000 an episode.[47] The issue was resolved a month later,[48] and Castellaneta earned $250,000 per episode.[49] After salary re-negotiations in 2008, the voice actors receive approximately $400,000 per episode.[50] Three years later, with Fox threatening to cancel the series unless production costs were cut, Castellaneta and the other cast members accepted a 30 percent pay cut, down to just over $300,000 per episode.[51]
Executive producer Al Jean notes that in The Simpsons' writing room, "everyone loves writing for Homer", and many of his adventures are based on experiences of the writers.[52] In the early seasons of the show, Bart was the main focus. But, around the fourth season, Homer became more of the focus. According to Matt Groening, this was because "With Homer, there's just a wider range of jokes you can do. And there are far more drastic consequences to Homer's stupidity. There's only so far you can go with a juvenile delinquent. We wanted Bart to do anything up to the point of him being tried in court as a dad. But Homer is a dad, and his boneheaded-ness is funnier. [...] Homer is launching himself headfirst into every single impulsive thought that occurs to him."[21]
Homer's behavior has changed a number of times through the run of the series. He was originally "very angry" and oppressive toward Bart, but these characteristics were toned down somewhat as his persona was further explored.[53] In early seasons, Homer appeared concerned that his family was going to make him look bad; however, in later episodes he was less anxious about how he was perceived by others.[54] In the first several years, Homer was often portrayed as sweet and sincere, but during Mike Scully's tenure as executive producer (seasons nine, 1997 to twelve, 2001), he became more of "a boorish, self-aggrandizing oaf".[55] Chris Suellentrop of Slate wrote, "under Scully's tenure, The Simpsons became, well, a cartoon. ... Episodes that once would have ended with Homer and Marge bicycling into the sunset ... now end with Homer blowing a tranquilizer dart into Marge's neck."[56] Fans have dubbed this incarnation of the character "Jerkass Homer".[57][58][59] At voice recording sessions, Castellaneta has rejected material written in the script that portrayed Homer as being too mean. He believes that Homer is "boorish and unthinking, but he'd never be mean on purpose."[60] When editing The Simpsons Movie, several scenes were changed to make Homer more sympathetic.[61]
The writers have made Homer's intelligence appear to decline over the years; they explain this was not done intentionally, but it was necessary to top previous jokes.[62] For example, in "When You Dish Upon a Star", (season 10, 1998) the writers included a scene where Homer admits that he cannot read. The writers debated including this plot twist because it would contradict previous scenes in which Homer does read, but eventually they decided to keep the joke because they found it humorous. The writers often debate how far to go in portraying Homer's stupidity; one suggested rule is that "he can never forget his own name".[63]
The comic efficacy of Homer's personality lies in his frequent bouts of bumbling stupidity, laziness and his explosive anger. He has a low intelligence level and is described by director David Silverman as "creatively brilliant in his stupidity".[64] Homer also shows immense apathy towards work, is overweight, and "is devoted to his stomach".[64] His short attention span is evidenced by his impulsive decisions to engage in various hobbies and enterprises, only to "change ... his mind when things go badly".[64] Homer often spends his evenings drinking Duff Beer at Moe's Tavern, and was shown in the episode "Duffless" (season four, 1993) as a full-blown alcoholic.[65] He is very envious of his neighbors, Ned Flanders and his family, and is easily enraged by Bart. Homer will often strangle Bart on impulse upon Bart angering him (and can also be seen saying one of his catchphrases, "Why you little!") in a cartoonish manner. The first instance of Homer strangling Bart was in the short "Family Portrait". According to Groening, the rule was that Homer could only strangle Bart impulsively, never with premeditation,[66] because doing so "seems sadistic. If we keep it that he's ruled by his impulses, then he can easily switch impulses. So, even though he impulsively wants to strangle Bart, he also gives up fairly easily."[21] Another of the original ideas entertained by Groening was that Homer would "always get his comeuppance or Bart had to strangle him back", but this was dropped.[67] Homer shows no compunction about expressing his rage, and does not attempt to hide his actions from people outside the family.[64]
Homer has complex relationships with his family. As previously noted, he and Bart are the most at odds, but the two commonly share adventures and are sometimes allies, with some episodes (particularly in later seasons) showing that the pair have a strange respect for each other's cunning. Homer and Lisa have opposite personalities and he usually overlooks Lisa's talents, but when made aware of his neglect, does everything he can to help her. The show also occasionally implies that Homer forgets he has a third child, Maggie, while the episode "And Maggie Makes Three" (season six, 1995) suggests she is the chief reason Homer took and remains at his regular job. While Homer's thoughtless antics often upset his family, he has on many occasions also revealed himself to be a caring and loving father and husband: in "Lisa the Beauty Queen" (season four, 1992) he sold his cherished ride on the Duff blimp and used the money to enter Lisa in a beauty pageant so she could feel better about herself;[10] in "Rosebud" (season five, 1993) he gave up his chance at wealth to allow Maggie to keep a cherished teddy bear;[68] in "Radio Bart" (season three, 1992) he spearheads an attempt to dig Bart out after he has fallen down a well;[69] in "A Milhouse Divided" (season eight, 1996) he arranges a surprise second wedding with Marge to make up for their unsatisfactory first ceremony;[70] and, despite a poor relationship with his father Abraham "Grampa" Simpson, whom he placed in a nursing home as soon as he could[71] and whom the family otherwise does its best to avoid, Homer has shown feelings of love for him from time to time.[72]
Homer is "a (happy) slave to his various appetites".[73] He has an apparently vacuous mind, but occasionally exhibits a surprising depth of knowledge about various subjects, such as the composition of the Supreme Court of the United States,[74] Inca mythology,[75] bankruptcy law,[76] and cell biology.[77] Homer's brief periods of intelligence are overshadowed, however, by much longer and consistent periods of ignorance, forgetfulness, and stupidity. Homer has a low IQ of 55, which has variously been attributed to the hereditary "Simpson Gene" (which eventually causes every male member of the family to become incredibly stupid),[78] his alcohol problem, exposure to radioactive waste, repetitive cranial trauma,[79] and a crayon lodged in the frontal lobe of his brain.[80] In the 2001 episode "HOMR", Homer has the crayon removed, boosting his IQ to 105; although he bonds with Lisa, his newfound capacity for understanding and reason makes him unhappy, and he has the crayon reinserted.[80] Homer often debates with his own mind, expressed in voiceover. His mind has a tendency to offer dubious advice, which occasionally helps him make the right decision, but often fails spectacularly. His mind has even become completely frustrated and, through sound effects, walked out on Homer.[81] These exchanges were often introduced because they filled time and were easy for the animators to work on.[82] They were phased out after the producers "used every possible permutation".[82]
Producer Mike Reiss said Homer was his favorite Simpsons character to write: "Homer's just a comedy writer's dream. He has everything wrong with him, every comedy trope. He's fat and bald and stupid and lazy and angry and an alcoholic. I'm pretty sure he embodies all seven deadly sins." John Swartzwelder, who wrote 60 episodes, said he wrote for Homer as if he were "a big dog". Reiss felt this was insightful, saying: "Homer is just pure emotion, no long-term memory, everything is instant gratification. And, you know, has good dog qualities, too. I think, loyalty, friendliness, and just kind of continuous optimism."[83]
Homer's influence on comedy and culture has been significant. In 2010, Entertainment Weekly named Homer "the greatest character of the last 20 years".[84] He was placed second on TV Guide's 2002 Top 50 Greatest Cartoon Characters, behind Bugs Bunny;[85] fifth on Bravo's 100 Greatest TV Characters, one of only four cartoon characters on that list;[86] and first in a Channel 4 poll of the greatest television characters of all time.[87] In 2007, Entertainment Weekly placed Homer ninth on their list of the "50 Greatest TV icons" and first on their 2010 list of the "Top 100 Characters of the Past Twenty Years".[88][21][89] Homer was also the runaway winner in British polls that determined who viewers thought was the "greatest American"[90] and which fictional character people would like to see become the President of the United States.[91] His relationship with Marge was included in TV Guide's list of "The Best TV Couples of All Time".[92]
Dan Castellaneta has won several awards for voicing Homer, including four Primetime Emmy Awards for "Outstanding Voice-Over Performance": in 1992 for "Lisa's Pony", in 1993 for "Mr. Plow",[93] in 2004 for "Today I Am a Clown",[94] and in 2009 for "Father Knows Worst".[95] In the case of "Today I Am a Clown", however, the award was for voicing "various characters" and not solely for Homer.[94] In 2010, Castellaneta received a fifth Emmy nomination for voicing Homer and Grampa in the episode "Thursdays with Abie".[96] In 1993, Castellaneta was given a special Annie Award, "Outstanding Individual Achievement in the Field of Animation", for his work as Homer on The Simpsons.[97][98] In 2004, Castellaneta and Julie Kavner (the voice of Marge) won a Young Artist Award for "Most Popular Mom & Dad in a TV Series".[99] In 2005, Homer and Marge were nominated for a Teen Choice Award for "Choice TV Parental Units".[100] Various episodes in which Homer is strongly featured have won Emmy Awards for Outstanding Animated Program, including "Homer vs. Lisa and the 8th Commandment" in 1991, "Lisa's Wedding" in 1995, "Homer's Phobia" in 1997, "Trash of the Titans" in 1998, "HOMR" in 2001, "Three Gays of the Condo" in 2003 and "Eternal Moonshine of the Simpson Mind" in 2008.[93] In 2000, Homer and the rest of the Simpson family were awarded a star on the Hollywood Walk of Fame located at 7021 Hollywood Boulevard.[101] In 2017, Homer Simpson was celebrated by the National Baseball Hall of Fame to honor the 25th anniversary of the episode "Homer at the Bat".[102]
Homer is an "everyman" and embodies several American stereotypes of working class blue-collar men: he is crude, overweight, incompetent, dim-witted, childish, clumsy and a borderline alcoholic.[1] Matt Groening describes him as "completely ruled by his impulses".[103] Dan Castellaneta calls him "a dog trapped in a man's body", adding, "He's incredibly loyal – not entirely clean – but you gotta love him."[38] In his book Planet Simpson, author Chris Turner describes Homer as "the most American of the Simpsons" and believes that while the other Simpson family members could be changed to other nationalities, Homer is "pure American".[104] In the book God in the Details: American Religion in Popular Culture, the authors comment that "Homer's progress (or lack thereof) reveals a character who can do the right thing, if accidentally or begrudgingly."[105] The book The Simpsons and Philosophy: The D'oh! of Homer includes a chapter analyzing Homer's character from the perspective of Aristotelian virtue ethics. Raja Halwani writes that Homer's "love of life" is an admirable character trait, "for many people are tempted to see in Homer nothing but buffoonery and immorality. ... He is not politically correct, he is more than happy to judge others, and he certainly does not seem to be obsessed with his health. These qualities might not make Homer an admirable person, but they do make him admirable in some ways, and, more importantly, makes us crave him and the Homer Simpsons of this world."[106] In 2008, Entertainment Weekly justified designating The Simpsons as a television classic by stating, "we all hail Simpson patriarch Homer because his joy is as palpable as his stupidity is stunning".[107]
In the season eight episode "Homer's Enemy" the writers decided to examine "what it would be like to actually work alongside Homer Simpson".[108] The episode explores the possibilities of a realistic character with a strong work ethic named Frank Grimes placed alongside Homer in a work environment. In the episode, Homer is portrayed as an everyman and the embodiment of the American spirit; however, in some scenes his negative characteristics and silliness are prominently highlighted.[109][110] By the end of the episode, Grimes, a hard-working and persevering "real American hero", has become the villain; the viewer is intended to be pleased that Homer has emerged victorious.[109]
In Gilligan Unbound, author Paul Arthur Cantor states that he believes Homer's devotion to his family has added to the popularity of the character. He writes, "Homer is the distillation of pure fatherhood. ... This is why, for all his stupidity, bigotry and self-centered quality, we cannot hate Homer. He continually fails at being a good father, but he never gives up trying, and in some basic and important sense that makes him a good father."[111] The Sunday Times remarked "Homer is good because, above all, he is capable of great love. When the chips are down, he always does the right thing by his children—he is never unfaithful in spite of several opportunities."[60]
Homer Simpson is one of the most popular and influential television characters by a variety of standards. USA Today cited the character as being one of the "top 25 most influential people of the past 25 years" in 2007, adding that Homer "epitomized the irony and irreverence at the core of American humor".[112] Robert Thompson, director of Syracuse University's Center for the Study of Popular Television, believes that "three centuries from now, English professors are going to be regarding Homer Simpson as one of the greatest creations in human storytelling."[113] Animation historian Jerry Beck described Homer as one of the best animated characters, saying, "you know someone like it, or you identify with (it). That's really the key to a classic character."[85] Homer has been described by The Sunday Times as "the greatest comic creation of [modern] time". The article remarked, "every age needs its great, consoling failure, its lovable, pretension-free mediocrity. And we have ours in Homer Simpson."[60]
Homer has also been cited as a bad influence on health; for example, a five-year study of more than 2,000 middle-aged people in France found a possible link between weight and brain function, the findings of which were dubbed the "Homer Simpson syndrome".[114] Results from a word memory test showed that people with a body mass index (BMI) of 20 (considered to be a healthy level) remembered an average of nine out of 16 words, while people with a BMI of 30 (inside the obese range) remembered an average of just seven out of 16 words.[114]
Despite Homer's partial embodiment of American culture, his influence has spread to other parts of the world. In 2003, Matt Groening revealed that his father, after whom Homer was named, was Canadian, and said that this made Homer himself a Canadian.[115] The character was later made an honorary citizen of Winnipeg, Manitoba, Canada, because Homer Groening was believed to be from there, although sources say the senior Groening was actually born in the province of Saskatchewan.[116] In 2007, an image of Homer was painted next to the Cerne Abbas Giant in Dorset, England as part of a promotion for The Simpsons Movie. This caused outrage among local neopagans who performed "rain magic" to try to get it washed away.[117] In 2008, a defaced Spanish euro coin was found in Avilés, Spain with the face of Homer replacing the effigy of King Juan Carlos I.[118]
On April 9, 2009, the United States Postal Service unveiled a series of five 44-cent stamps featuring Homer and the four other members of the Simpson family. They are the first characters from a television series to receive this recognition while the show is still in production.[119] The stamps, designed by Matt Groening, were made available for purchase on May 7, 2009.[120][121]
Homer has appeared, voiced by Castellaneta, in several other television shows, including the sixth season of American Idol where he opened the show;[122] The Tonight Show with Jay Leno where he performed a special animated opening monologue for the July 24, 2007, edition;[123] and the 2008 fundraising television special Stand Up to Cancer where he was shown having a colonoscopy.[124]
On February 28, 1999, Homer Simpson was made an honorary member of the Junior Common Room of Worcester College, Oxford. Homer was granted the membership by the college's undergraduate body in the belief that "he would benefit greatly from an Oxford education".[125]
Homer's main and most famous catchphrase, the annoyed grunt "D'oh!", is typically uttered when he injures himself, realizes that he has done something stupid, or when something bad has happened or is about to happen to him. During the voice recording session for a Tracey Ullman Show short, Homer was required to utter what was written in the script as an "annoyed grunt".[126] Dan Castellaneta rendered it as a drawn out "d'ooooooh". This was inspired by Jimmy Finlayson, the mustachioed Scottish actor who appeared in 33 Laurel and Hardy films.[126] Finlayson had used the term as a minced oath to stand in for the word "Damn!" Matt Groening felt that it would better suit the timing of animation if it were spoken faster. Castellaneta then shortened it to a quickly uttered "D'oh!"[127] The first intentional use of D'oh! occurred in the Ullman short "The Krusty the Clown Show"[127] (1989), and its first usage in the series was in the series premiere, "Simpsons Roasting on an Open Fire".[128]
"D'oh!" was first added to The New Oxford Dictionary of English in 1998.[126] It is defined as an interjection "used to comment on an action perceived as foolish or stupid".[129] In 2001, "D'oh!" was added to the Oxford English Dictionary, without the apostrophe ("Doh!").[130] The definition of the word is "expressing frustration at the realization that things have turned out badly or not as planned, or that one has just said or done something foolish".[131] In 2006, "D'oh!" was placed in sixth position on TV Land's list of the 100 greatest television catchphrases.[132][133] "D'oh!" is also included in The Oxford Dictionary of Quotations.[134] The book includes several other quotations from Homer, including "Kids, you tried your best and you failed miserably. The lesson is never try", from "Burns' Heir" (season five, 1994) as well as "Kids are the best, Apu. You can teach them to hate the things you hate. And they practically raise themselves, what with the Internet and all", from "Eight Misbehavin'" (season 11, 1999). Both quotes entered the dictionary in August 2007.[135]
Homer's inclusion in many Simpsons publications, toys, and other merchandise is evidence of his enduring popularity. The Homer Book, about Homer's personality and attributes, was released in 2004 and is commercially available.[136][137] It has been described as "an entertaining little book for occasional reading"[138] and was listed as one of "the most interesting books of 2004" by The Chattanoogan.[139] Other merchandise includes dolls, posters, figurines, bobblehead dolls, mugs, alarm clocks, jigsaw puzzles, Chia Pets, and clothing such as slippers, T-shirts, baseball caps, and boxer shorts.[140] Homer has appeared in commercials for Coke, 1-800-COLLECT, Burger King, Butterfinger, C.C. Lemon, Church's Chicken, Domino's Pizza, Intel, Kentucky Fried Chicken, Ramada Inn, Subway and T.G.I. Friday's. In 2004, Homer starred in a MasterCard Priceless commercial that aired during Super Bowl XXXVIII.[141] In 2001, Kellogg's launched a brand of cereal called "Homer's Cinnamon Donut Cereal", which was available for a limited time.[137][142] In June 2009, Dutch automotive navigation systems manufacturer TomTom announced that Homer would be added to its downloadable GPS voice lineup. Homer's voice, recorded by Dan Castellaneta, features several in-character comments such as "Take the third right. We might find an ice cream truck! Mmm... ice cream."[143]
Homer has appeared in other media relating to The Simpsons. He has appeared in every one of The Simpsons video games, including the most recent, The Simpsons Game.[144] Homer appears as a playable character in the toys-to-life video game Lego Dimensions, released via a "Level Pack" packaged with Homer's Car and "Taunt-o-Vision" accessories in September 2015; the pack also adds an additional level based on the episode "The Mysterious Voyage of Homer".[145] Alongside the television series, Homer regularly appears in issues of Simpsons Comics, which were first published on November 29, 1993, and are still issued monthly.[146][147] Homer also plays a role in The Simpsons Ride, launched in 2008 at Universal Studios Florida and Hollywood.[148]
en/2598.html.txt
ADDED
@@ -0,0 +1,93 @@
Literature can refer to a body of written or oral work, but it commonly refers specifically to writings considered to be an art form, especially prose fiction, drama, and poetry, in contrast to academic writing and newspapers.[1] In recent centuries, the definition has expanded to include oral literature, much of which has been transcribed.[2]
Literature, as an art form, can also include works in various non-fiction genres, such as autobiography, diaries, memoir, letters, the essay, as well as history and philosophy.[3]
Its Latin root literatura/litteratura (from littera: letter of the alphabet or handwriting) was used to refer to all written accounts. Developments in print technology have allowed an ever-growing distribution and proliferation of written works, which now includes electronic literature.
Literature is classified according to whether it is poetry, prose or drama, and such works are categorized according to historical periods, or their adherence to certain aesthetic features, or genre.
Definitions of literature have varied over time: it is a "culturally relative definition".[4] In Western Europe prior to the 18th century, literature denoted all books and writing.[4] A more restricted sense of the term emerged during the Romantic period, in which it began to demarcate "imaginative" writing.[5] Contemporary debates over what constitutes literature can be seen as returning to older, more inclusive notions; cultural studies, for instance, takes as its subject of analysis both popular and minority genres, in addition to canonical works.
The value judgment definition of literature considers it to cover exclusively those writings that possess high quality or distinction, forming part of the so-called belles-lettres ('fine writing') tradition.[6] This sort of definition is that used in the Encyclopædia Britannica Eleventh Edition (1910–11) when it classifies literature as "the best expression of the best thought reduced to writing."[7] Problematic in this view is that there is no objective definition of what constitutes "literature": anything can be literature, and anything which is universally regarded as literature has the potential to be excluded, since value judgments can change over time.[8]
The formalist definition is that "literature" foregrounds poetic effects; it is the "literariness" or "poetic" quality of literature that distinguishes it from ordinary speech or other kinds of writing (e.g., journalism).[9][10] Jim Meyer considers this a useful characteristic in explaining the use of the term to mean published material in a particular field (e.g., "scientific literature"), as such writing must use language according to particular standards.[11] The problem with the formalist definition is that in order to say that literature deviates from ordinary uses of language, those uses must first be identified; this is difficult because "ordinary language" is an unstable category, differing according to social categories and across history.[12]
Etymologically, the term derives from Latin literatura/litteratura "learning, a writing, grammar," originally "writing formed with letters," from litera/littera "letter".[13] In spite of this, the term has also been applied to spoken or sung texts.[11][14]
Literary genre is a mode of categorizing literature; the term is French for "a literary type or class".[15]
Ancient Egyptian literature,[16] along with Sumerian literature, is considered among the world's oldest literatures.[17] The primary genres of the literature of ancient Egypt—didactic texts, hymns and prayers, and tales—were written almost entirely in verse.[18] By the Old Kingdom (26th century BC to 22nd century BC), literary works included funerary texts, epistles and letters, hymns and poems, and commemorative autobiographical texts recounting the careers of prominent administrative officials. It was not until the early Middle Kingdom (21st century BC to 17th century BC) that a narrative Egyptian literature was created.
Many works of earlier periods, even in narrative form, had a covert moral or didactic purpose, such as the Sanskrit Panchatantra or the Metamorphoses of Ovid. Drama and satire also developed as urban culture provided a larger public audience, and later readership, for literary production. Lyric poetry (as opposed to epic poetry) was often the speciality of courts and aristocratic circles, particularly in East Asia where songs were collected by the Chinese aristocracy as poems, the most notable being the Shijing or Book of Songs. Over a long period, the poetry of popular pre-literate balladry and song interpenetrated and eventually influenced poetry in the literary medium.
In ancient China, early literature was primarily focused on philosophy, historiography, military science, agriculture, and poetry. China, the origin of modern paper making and woodblock printing, produced the world's first print cultures.[19] Much of Chinese literature originates with the Hundred Schools of Thought period that occurred during the Eastern Zhou Dynasty (769‒269 BCE). The most important of these include the Classics of Confucianism, of Daoism, of Mohism, of Legalism, as well as works of military science (e.g. Sun Tzu's The Art of War) and Chinese history (e.g. Sima Qian's Records of the Grand Historian). Ancient Chinese literature had a heavy emphasis on historiography, with often very detailed court records. An exemplary piece of narrative history of ancient China was the Zuo Zhuan, which was compiled no later than 389 BCE, and attributed to the blind 5th-century BCE historian Zuo Qiuming.
In ancient India, literature originated from stories that were originally orally transmitted. Early genres included drama, fables, sutras and epic poetry. Sanskrit literature begins with the Vedas, dating back to 1500–1000 BCE, and continues with the Sanskrit Epics of Iron Age India. The Vedas are among the oldest sacred texts. The Samhitas (vedic collections) date to roughly 1500–1000 BCE, and the "circum-Vedic" texts, as well as the redaction of the Samhitas, date to c. 1000‒500 BCE, resulting in a Vedic period spanning the mid-2nd to mid-1st millennium BCE, or the Late Bronze Age and the Iron Age.[20] The period between approximately the 6th to 1st centuries BCE saw the composition and redaction of the two most influential Indian epics, the Mahabharata and the Ramayana, with subsequent redaction progressing down to the 4th century AD. Other major literary works are the Ramcharitmanas and the Krishnacharitmanas.
Homer's epic poems, the Iliad and the Odyssey, are central works of ancient Greek literature. It is generally accepted that the poems were composed at some point around the late eighth or early seventh century BC.[21] Modern scholars consider these accounts legendary.[22][23][24] Most researchers believe that the poems were originally transmitted orally.[25] From antiquity until the present day, the influence of Homeric epic on Western civilization has been great, inspiring many of its most famous works of literature, music, art and film.[26] The Homeric epics were the greatest influence on ancient Greek culture and education; to Plato, Homer was simply the one who "has taught Greece" – ten Hellada pepaideuken.[27][28] Hesiod's Works and Days and Theogony are some of the earliest, and most influential, works of ancient Greek literature. Classical Greek genres included philosophy, poetry, historiography, comedies and dramas. Plato and Aristotle authored philosophical texts that are the foundation of Western philosophy, Sappho and Pindar were influential lyric poets, and Herodotus and Thucydides were early Greek historians. Although drama was popular in ancient Greece, of the hundreds of tragedies written and performed during the classical age, only a limited number of plays by three authors still exist: Aeschylus, Sophocles, and Euripides. The plays of Aristophanes provide the only real examples of a genre of comic drama known as Old Comedy, the earliest form of Greek Comedy, and are in fact used to define the genre.[29]
Roman histories and biographies anticipated the extensive mediaeval literature of lives of saints and miraculous chronicles, but the most characteristic form of the Middle Ages was the romance, an adventurous and sometimes magical narrative with strong popular appeal. Controversial, religious, political and instructional literature proliferated during the Renaissance as a result of the invention of printing, while the mediaeval romance developed into a more character-based and psychological form of narrative, the novel, of which an early and important example is the 16th-century Chinese Journey to the West (Monkey).
Theorists suggest that literature allows readers to access intimate emotional aspects of a person's character that would not be obvious otherwise,[30] and that it aids the psychological development and understanding of the reader, allowing someone to access emotional states from which they had distanced themselves. Some researchers focus on the significance of literature in an individual's psychological development. For example, language learning uses literature because it articulates or contains culture, an element considered crucial in learning a language.[31] This is demonstrated by a study that revealed how the presence of cultural values and culturally familiar passages in literary texts had an important impact on the performance of minority students in English reading.[32] Psychologists have also been using literature as a tool or therapeutic vehicle, helping people to understand challenges and issues, for example through the integration of subliminal messages in literary texts or the rewriting of traditional narratives to help readers address their problems or mold them into contemporary social messages.[33][34]
Hogan also explains that the time and emotion which a person devotes to understanding a character's situation makes literature "ecological[ly] valid in the study of emotion".[35] Thus literature can unite a large community by provoking universal emotions, as well as allowing readers to access cultural aspects that they have not been exposed to, and that produce new emotional experiences.[36] Some theorists argue that authors choose literary devices according to what psychological emotion they are attempting to describe.[37]
Some psychologists regard literature as a valid research tool, because it allows them to discover new psychological ideas.[38] Psychological theories about literature, such as Maslow's Hierarchy of Needs,[39] have become widely recognized.
Psychologist Maslow's "Third Force Psychology Theory" helps literary analysts to critically understand how characters reflect the culture and the history to which they belong. It also allows them to understand an author's intention and psychology.[40] The theory suggests that human beings possess within them their true "self" and that the fulfillment of this is the reason for living. It also suggests that neurological development hinders actualizing this and that a person becomes estranged from his or her true self.[41] Maslow argues that literature explores this struggle for self-fulfillment.[37] Paris, in his "Third Force Psychology and the Study of Literature", argues that "D.H. Lawrence's 'pristine unconscious' is a metaphor for the real self".[42] On this view, literature is therefore a tool that allows readers to develop and apply critical reasoning to the nature of emotions.
Symbols[43] and imagery[44] can contribute to shaping psychological and esthetic responses to texts.
Poetry is a form of literary art which uses the aesthetic qualities of language (including music and rhythm) to evoke meanings beyond a prose paraphrase.[45] Poetry has traditionally been distinguished from prose by its being set in verse; prose is cast in sentences, poetry in lines; the syntax of prose is dictated by meaning, whereas that of poetry is held across meter or the visual aspects of the poem.[46][47] This distinction is complicated by various hybrid forms such as the prose poem[48] and prosimetrum,[49] and more generally by the fact that prose possesses rhythm.[50] Abram Lipsky refers to it as an "open secret" that "prose is not distinguished from poetry by lack of rhythm".[51]
Prior to the 19th century, poetry was commonly understood to be something set in metrical lines; accordingly, a 1658 definition of poetry is "any kind of subject consisting of Rhythm or Verses".[45] Possibly as a result of Aristotle's influence (his Poetics), "poetry" before the 19th century was usually less a technical designation for verse than a normative category of fictive or rhetorical art.[52] As a form it may pre-date literacy, with the earliest works being composed within and sustained by an oral tradition;[53][54] hence it constitutes the earliest example of literature.
Prose is a form of language that possesses ordinary syntax and natural speech, rather than a regular metre; in which regard, along with its presentation in sentences rather than lines, it differs from most poetry.[46][47][55] However, developments in modern literature, including free verse and prose poetry, have tended to blur any differences, and American poet T.S. Eliot suggested that while "the distinction between verse and prose is clear, the distinction between poetry and prose is obscure".[56]
On the historical development of prose, Richard Graff notes that "[In the case of ancient Greece] recent scholarship has emphasized the fact that formal prose was a comparatively late development, an "invention" properly associated with the classical period".[57]
The major forms of literature in prose are novels, novellas and short stories, which earned the name "fiction" to distinguish them from non-fiction writings also expressed in prose.
A novel is a long fictional prose narrative. In English, the term emerged from the Romance languages in the late 15th century, with the meaning of "news"; it came to indicate something new, without a distinction between fact and fiction.[58] The romance is a closely related long prose narrative. Walter Scott defined it as "a fictitious narrative in prose or verse; the interest of which turns upon marvellous and uncommon incidents", whereas in the novel "the events are accommodated to the ordinary train of human events and the modern state of society".[59] Other European languages do not distinguish between romance and novel: "a novel is le roman, der Roman, il romanzo",[60] which indicates the proximity of the forms.[61]
Although there are many historical prototypes, so-called "novels before the novel",[62] the modern novel form emerges late in cultural history—roughly during the eighteenth century.[63] Initially subject to much criticism, the novel has acquired a dominant position amongst literary forms, both popularly and critically.[61][64][65]
In purely quantitative terms, the novella exists between the novel and short story; the publisher Melville House classifies it as "too short to be a novel, too long to be a short story".[66] There is no precise definition in terms of word or page count.[67] Literary prizes and publishing houses often have their own arbitrary limits,[68] which vary according to their particular intentions. Summarizing the variable definitions of the novella, William Giraldi concludes "[it is a form] whose identity seems destined to be disputed into perpetuity".[69] It has been suggested that the size restriction of the form produces various stylistic results, both some that are shared with the novel or short story,[67][70][71] and others unique to the form.[72]
A dilemma in defining the "short story" as a literary form is how to, or whether one should, distinguish it from any short narrative; hence it also has a contested origin,[73] variably suggested as the earliest short narratives (e.g. the Bible), early short story writers (e.g. Edgar Allan Poe), or the clearly modern short story writers (e.g. Anton Chekhov).[74] Apart from its distinct size, various theorists have suggested that the short story has a characteristic subject matter or structure;[75][76] these discussions often position the form in some relation to the novel.[77]
Drama is literature intended for performance.[78] The form is combined with music and dance in opera and musical theatre. A play is a subset of this form, referring to the written dramatic work of a playwright that is intended for performance in a theater; it comprises chiefly dialogue between characters, and usually aims at dramatic or theatrical performance rather than at reading. A closet drama, by contrast, refers to a play written to be read rather than to be performed; hence, it is intended that the meaning of such a work can be realized fully on the page.[79] Nearly all drama took verse form until comparatively recently.
Greek drama is the earliest form of drama of which we have substantial knowledge. Tragedy, as a dramatic genre, developed as a performance associated with religious and civic festivals, typically enacting or developing upon well-known historical or mythological themes. Tragedies generally presented very serious themes. With the advent of newer technologies, scripts written for non-stage media have been added to this form. The 1938 radio broadcast of The War of the Worlds saw the advent of literature written for radio, and many works of drama have been adapted for film or television. Conversely, television, film, and radio literature have been adapted to printed or electronic media.
Literary technique and literary device are used by authors to produce specific effects.
Literary techniques encompass a wide range of approaches: examples for fiction are whether a work is narrated in the first person or from another perspective, whether a traditional linear narrative or a nonlinear narrative is used, and which literary genre is chosen.
Literary devices involve specific elements within the work that make it effective. Examples include metaphor, simile, ellipsis, narrative motifs, and allegory. Even simple word play functions as a literary device. In fiction, stream-of-consciousness narrative is a literary device.
Literary works have been protected by copyright law from unauthorized reproduction since at least 1710.[80] Literary works are defined by copyright law to mean any work, other than a dramatic or musical work, which is written, spoken or sung, and accordingly includes (a) a table or compilation (other than a database), (b) a computer program, (c) preparatory design material for a computer program, and (d) a database.
Literary works are not limited to works of literature, but include all works expressed in print or writing (other than dramatic or musical works).[81]
There are numerous awards recognizing achievement and contribution in literature. Given the diversity of the field, awards are typically limited in scope, usually by form, genre, language, nationality, or output (e.g. for first-time writers or debut novels).[82]
The Nobel Prize in Literature was one of the six Nobel Prizes established by the will of Alfred Nobel in 1895,[83] and is awarded to an author on the basis of their body of work, rather than to, or for, a particular work itself.[a] Other literary prizes for which all nationalities are eligible include: the Neustadt International Prize for Literature, the Man Booker International Prize, Pulitzer Prize, Hugo Award, Guardian First Book Award and the Franz Kafka Prize.
en/2599.html.txt
ADDED
The diff for this file is too large to render.
See raw diff
en/26.html.txt
ADDED
@@ -0,0 +1,145 @@
In Greek mythology, Achilles (/əˈkɪliːz/ ə-KIL-eez) or Achilleus (Ancient Greek: Ἀχιλλεύς, [a.kʰilˈleu̯s]) was a hero of the Trojan War, the greatest of all the Greek warriors, and is the central character of Homer's Iliad. He was the son of the Nereid Thetis and Peleus, king of Phthia.
Achilles' most notable feat during the Trojan War was the slaying of the Trojan prince Hector outside the gates of Troy. Although the death of Achilles is not presented in the Iliad, other sources concur that he was killed near the end of the Trojan War by Paris, who shot him in the heel with an arrow. Later legends (beginning with Statius' unfinished epic Achilleid, written in the 1st century AD) state that Achilles was invulnerable in all of his body except for his heel because, when his mother Thetis dipped him in the river Styx as an infant, she held him by one of his heels. Alluding to these legends, the term "Achilles' heel" has come to mean a point of weakness, especially in someone or something with an otherwise strong constitution. The Achilles tendon is also named after him due to these legends.
Linear B tablets attest to the personal name Achilleus in the forms a-ki-re-u and a-ki-re-we,[1] the latter being the dative of the former.[2] The name grew more popular, even becoming common soon after the seventh century BC[3] and was also turned into the female form Ἀχιλλεία (Achilleía), attested in Attica in the fourth century BC (IG II² 1617) and, in the form Achillia, on a stele in Halicarnassus as the name of a female gladiator fighting an "Amazon".
Achilles' name can be analyzed as a combination of ἄχος (áchos) "distress, pain, sorrow, grief"[4] and λαός (laós) "people, soldiers, nation", resulting in a proto-form *Akhí-lāu̯os "he who has the people distressed" or "he whose people have distress".[5][6] The grief or distress of the people is a theme raised numerous times in the Iliad (and frequently by Achilles himself). Achilles' role as the hero of grief or distress forms an ironic juxtaposition with the conventional view of him as the hero of κλέος kléos ("glory", usually in war). Furthermore, laós has been construed by Gregory Nagy, following Leonard Palmer, to mean "a corps of soldiers", a muster.[6] With this derivation, the name obtains a double meaning in the poem: when the hero is functioning rightly, his men bring distress to the enemy, but when wrongly, his men get the grief of war. The poem is in part about the misdirection of anger on the part of leadership.
Another etymology relates the name to a Proto-Indo-European compound *h₂eḱ-pṓds "sharp foot" which first gave an Illyrian *āk̂pediós, evolving through time into *ākhpdeós and then *akhiddeús. The shift from -dd- to -ll- is then ascribed to the passing of the name into Greek via a Pre-Greek source. The first root part *h₂eḱ- "sharp, pointed" also gave Greek ἀκή (akḗ "point, silence, healing"), ἀκμή (akmḗ "point, edge, zenith") and ὀξύς (oxús "sharp, pointed, keen, quick, clever"), whereas ἄχος stems from the root *h₂egʰ- "to be upset, afraid". The whole expression would be comparable to the Latin acupedius "swift of foot". Compare also the Latin word family of aciēs "sharp edge or point, battle line, battle, engagement", acus "needle, pin, bodkin", and acuō "to make pointed, sharpen, whet; to exercise; to arouse" (whence acute).[7] Some topical epithets of Achilles in the Iliad point to this "swift-footedness", namely ποδάρκης δῖος Ἀχιλλεὺς (podárkēs dĩos Achilleús "swift-footed divine Achilles")[8] or, even more frequently, πόδας ὠκὺς Ἀχιλλεύς (pódas ōkús Achilleús "quick-footed Achilles").[9]
Some researchers deem the name a loan word, possibly from a Pre-Greek language.[1] Achilles' descent from the Nereid Thetis and a similarity of his name with those of river deities such as Acheron and Achelous have led to speculations about him being an old water divinity (see below Worship).[10] Robert S. P. Beekes has suggested a Pre-Greek origin of the name, based among other things on the coexistence of -λλ- and -λ- in epic language, which may account for a palatalized phoneme /ly/ in the original language.[2]
Achilles was the son of the Nereid Thetis and of Peleus, the king of the Myrmidons. Zeus and Poseidon had been rivals for the hand of Thetis until Prometheus, the fore-thinker, warned Zeus of a prophecy (originally uttered by Themis, goddess of divine law) that Thetis would bear a son greater than his father. For this reason, the two gods withdrew their pursuit, and had her wed Peleus.[11]
There is a tale which offers an alternative version of these events: In the Argonautica (4.760) Zeus' sister and wife Hera alludes to Thetis' chaste resistance to the advances of Zeus, pointing out that Thetis was so loyal to Hera's marriage bond that she coolly rejected the father of gods. Thetis, although a daughter of the sea-god Nereus, was also brought up by Hera, further explaining her resistance to the advances of Zeus. Zeus was furious and decreed that she would never marry an immortal.[12]
According to the Achilleid, written by Statius in the 1st century AD, and to non-surviving previous sources, when Achilles was born Thetis tried to make him immortal by dipping him in the river Styx; however, he was left vulnerable at the part of the body by which she held him: his left heel[13][14] (see Achilles' heel, Achilles' tendon). It is not clear if this version of events was known earlier. In another version of this story, Thetis anointed the boy in ambrosia and put him on top of a fire in order to burn away the mortal parts of his body. She was interrupted by Peleus and abandoned both father and son in a rage.[15]
None of the sources before Statius make any reference to this general invulnerability. To the contrary, in the Iliad Homer mentions Achilles being wounded: in Book 21 the Paeonian hero Asteropaeus, son of Pelagon, challenged Achilles by the river Scamander. He cast two spears at once; one grazed Achilles' elbow, "drawing a spurt of blood".
Also, in the fragmentary poems of the Epic Cycle in which one can find description of the hero's death (i.e. the Cypria, the Little Iliad by Lesches of Pyrrha, the Aithiopis and Iliou persis by Arctinus of Miletus), there is no trace of any reference to his general invulnerability or his famous weakness at the heel; in the later vase paintings presenting the death of Achilles, the arrow (or in many cases, arrows) hit his torso.
Peleus entrusted Achilles to Chiron the Centaur, on Mount Pelion, to be reared.[16] Thetis foretold that her son's fate was either to gain glory and die young, or to live a long but uneventful life in obscurity. Achilles chose the former, and decided to take part in the Trojan war.[17] According to Homer, Achilles grew up in Phthia together with his companion Patroclus.[1]
According to Photius, the sixth book of the New History by Ptolemy Hephaestion reported that Thetis burned in a secret place the children she had by Peleus; but when she had Achilles, Peleus noticed, tore him from the flames with only a burnt foot, and confided him to the centaur Chiron. Later Chiron exhumed the body of Damysus, who was the fastest of all the giants, removed the ankle, and incorporated it into Achilles' burnt foot.[18]
Among the appellations under which Achilles is generally known are the following:[19]
Some post-Homeric sources[21] claim that in order to keep Achilles safe from the war, Thetis (or, in some versions, Peleus) hid the young man at the court of Lycomedes, king of Skyros. There, Achilles is disguised as a girl and lives among Lycomedes' daughters, perhaps under the name "Pyrrha" (the red-haired girl). With Lycomedes' daughter Deidamia, whom in the account of Statius he rapes, Achilles there fathers a son, Neoptolemus (also called Pyrrhus, after his father's possible alias). According to this story, Odysseus learns from the prophet Calchas that the Achaeans would be unable to capture Troy without Achilles' aid. Odysseus goes to Skyros in the guise of a peddler selling women's clothes and jewellery and places a shield and spear among his goods. When Achilles instantly takes up the spear, Odysseus sees through his disguise and convinces him to join the Greek campaign. In another version of the story, Odysseus arranges for a trumpet alarm to be sounded while he was with Lycomedes' women; while the women flee in panic, Achilles prepares to defend the court, thus giving his identity away.
According to the Iliad, Achilles arrived at Troy with 50 ships, each carrying 50 Myrmidons. He appointed five leaders (each leader commanding 500 Myrmidons): Menesthius, Eudorus, Peisander, Phoenix and Alcimedon.[22]
When the Greeks left for the Trojan War, they accidentally stopped in Mysia, ruled by King Telephus. In the resulting battle, Achilles gave Telephus a wound that would not heal; Telephus consulted an oracle, who stated that "he that wounded shall heal". Guided by the oracle, he arrived at Argos, where Achilles healed him in order that he might become their guide for the voyage to Troy.[23]
According to other reports in Euripides' lost play about Telephus, he went to Aulis pretending to be a beggar and asked Achilles to heal his wound. Achilles refused, claiming to have no medical knowledge. Alternatively, Telephus held Orestes for ransom, the ransom being Achilles' aid in healing the wound. Odysseus reasoned that the spear had inflicted the wound; therefore, the spear must be able to heal it. Pieces of the spear were scraped off onto the wound and Telephus was healed.[23]
According to the Cypria (the part of the Epic Cycle that tells the events of the Trojan War before Achilles' wrath), when the Achaeans desired to return home, they were restrained by Achilles, who afterwards attacked the cattle of Aeneas, sacked neighbouring cities (like Pedasus and Lyrnessus, where the Greeks capture the queen Briseis) and killed Tenes, a son of Apollo, as well as Priam's son Troilus in the sanctuary of Apollo Thymbraios; however, the romance between Troilus and Chryseis described in Geoffrey Chaucer's Troilus and Criseyde and in William Shakespeare's Troilus and Cressida is a medieval invention.[24][1]
In Dares Phrygius' Account of the Destruction of Troy,[25] the Latin summary through which the story of Achilles was transmitted to medieval Europe, as well as in older accounts, Troilus was a young Trojan prince, the youngest of King Priam's and Hecuba's five legitimate sons (or, according to other sources, a son of Apollo).[26] Despite his youth, he was one of the main Trojan war leaders, a "horse fighter" or "chariot fighter" according to Homer.[27] Prophecies linked Troilus' fate to that of Troy and so he was ambushed in an attempt to capture him. Yet Achilles, struck by the beauty of both Troilus and his sister Polyxena, and overcome with lust, directed his sexual attentions on the youth – who, refusing to yield, instead found himself decapitated upon an altar-omphalos of Apollo Thymbraios.[28][29] Later versions of the story suggested Troilus was accidentally killed by Achilles in an over-ardent lovers' embrace.[30] In this version of the myth, Achilles' death therefore came in retribution for this sacrilege.[28][31] Ancient writers treated Troilus as the epitome of a dead child mourned by his parents. Had Troilus lived to adulthood, the First Vatican Mythographer claimed, Troy would have been invincible; however, the motif is older and found already in Plautus' Bacchides.[32]
Homer's Iliad is the most famous narrative of Achilles' deeds in the Trojan War. Achilles' wrath (μῆνις Ἀχιλλέως, mênis Achilléōs) is the central theme of the poem. The first two lines of the Iliad read:
μῆνιν ἄειδε θεὰ Πηληϊάδεω Ἀχιλῆος
οὐλομένην, ἣ μυρί' Ἀχαιοῖς ἄλγε' ἔθηκε, [...]
Sing, Goddess, of the rage of Achilles, son of Peleus,
the accursed rage that brought great suffering to the Achaeans, [...]
The Homeric epic only covers a few weeks of the decade-long war, and does not narrate Achilles' death. It begins with Achilles' withdrawal from battle after being dishonoured by Agamemnon, the commander of the Achaean forces. Agamemnon has taken a woman named Chryseis as his slave. Her father Chryses, a priest of Apollo, begs Agamemnon to return her to him. Agamemnon refuses, and Apollo sends a plague amongst the Greeks. The prophet Calchas correctly determines the source of the troubles but will not speak unless Achilles vows to protect him. Achilles does so, and Calchas declares that Chryseis must be returned to her father. Agamemnon consents, but then commands that Achilles' battle prize Briseis, the daughter of Briseus, be brought to him to replace Chryseis. Angry at the dishonour of having his plunder and glory taken away (and, as he says later, because he loves Briseis),[33] with the urging of his mother Thetis, Achilles refuses to fight or lead his troops alongside the other Greek forces. At the same time, burning with rage over Agamemnon's theft, Achilles prays to Thetis to convince Zeus to help the Trojans gain ground in the war, so that he may regain his honour.
As the battle turns against the Greeks, thanks to the influence of Zeus, Nestor declares that the Trojans are winning because Agamemnon has angered Achilles, and urges the king to appease the warrior. Agamemnon agrees and sends Odysseus and two other chieftains, Ajax and Phoenix, to Achilles with the offer of the return of Briseis and other gifts. Achilles rejects all Agamemnon offers him and simply urges the Greeks to sail home as he was planning to do.
The Trojans, led by Hector, subsequently push the Greek army back toward the beaches and assault the Greek ships. With the Greek forces on the verge of absolute destruction, Patroclus leads the Myrmidons into battle, wearing Achilles' armour, though Achilles remains at his camp. Patroclus succeeds in pushing the Trojans back from the beaches, but is killed by Hector before he can lead a proper assault on the city of Troy.
After receiving the news of the death of Patroclus from Antilochus, the son of Nestor, Achilles grieves over his beloved companion's death. His mother Thetis comes to comfort the distraught Achilles. She persuades Hephaestus to make new armour for him, in place of the armour that Patroclus had been wearing, which was taken by Hector. The new armour includes the Shield of Achilles, described in great detail in the poem.
Enraged over the death of Patroclus, Achilles ends his refusal to fight and takes the field, killing many men in his rage but always seeking out Hector. Achilles even engages in battle with the river god Scamander, who has become angry that Achilles is choking his waters with all the men he has killed. The god tries to drown Achilles but is stopped by Hera and Hephaestus. Zeus himself takes note of Achilles' rage and sends the gods to restrain him so that he will not go on to sack Troy itself before the time allotted for its destruction, seeming to show that the unhindered rage of Achilles can defy fate itself. Finally, Achilles finds his prey. Achilles chases Hector around the wall of Troy three times before Athena, in the form of Hector's favorite and dearest brother, Deiphobus, persuades Hector to stop running and fight Achilles face to face. After Hector realizes the trick, he knows the battle is inevitable. Wanting to go down fighting, he charges at Achilles with his only weapon, his sword, but misses. Accepting his fate, Hector begs Achilles, not to spare his life, but to treat his body with respect after killing him. Achilles tells Hector it is hopeless to expect that of him, declaring that "my rage, my fury would drive me now to hack your flesh away and eat you raw – such agonies you have caused me".[34] Achilles then kills Hector and drags his corpse by its heels behind his chariot. After having a dream where Patroclus begs Achilles to hold his funeral, Achilles hosts a series of funeral games in his honour.[35]
At the onset of his duel with Hector, Achilles is referred to as the brightest star in the sky, which comes on in the autumn, Orion's dog (Sirius); a sign of evil. During the cremation of Patroclus, he is compared to Hesperus, the evening/western star (Venus), while the burning of the funeral pyre lasts until Phosphorus, the morning/eastern star (also Venus) has set (descended).
With the assistance of the god Hermes (Argeiphontes), Hector's father Priam goes to Achilles' tent to plead with Achilles for the return of Hector's body so that he can be buried. Achilles relents and promises a truce for the duration of the funeral, lasting 9 days with a burial on the 10th (in the tradition of Niobe's offspring). The poem ends with a description of Hector's funeral, with the doom of Troy and Achilles himself still to come.
The Aethiopis (7th century BC) and a work named Posthomerica, composed by Quintus of Smyrna in the fourth century AD, relate further events from the Trojan War. When Penthesilea, queen of the Amazons and daughter of Ares, arrives in Troy, Priam hopes that she will defeat Achilles. After his temporary truce with Priam, Achilles fights and kills the warrior queen, only to grieve over her death later.[36] At first, he was so distracted by her beauty, he did not fight as intensely as usual. Once he realized that his distraction was endangering his life, he refocused and killed her.
Following the death of Patroclus, Nestor's son Antilochus becomes Achilles' closest companion. When Memnon, son of the Dawn Goddess Eos and king of Ethiopia, slays Antilochus, Achilles once more obtains revenge on the battlefield, killing Memnon. Consequently, Eos will not let the sun rise, until Zeus persuades her. The fight between Achilles and Memnon over Antilochus echoes that of Achilles and Hector over Patroclus, except that Memnon (unlike Hector) was also the son of a goddess.
Many Homeric scholars have argued that this episode inspired many details in the Iliad's description of the death of Patroclus and Achilles' reaction to it. The episode then formed the basis of the cyclic epic Aethiopis, which was composed after the Iliad, possibly in the 7th century BC. The Aethiopis is now lost, except for scattered fragments quoted by later authors.
The exact nature of Achilles' relationship with Patroclus has been a subject of dispute in both the classical period and modern times. In the Iliad, it appears to be the model of a deep and loyal friendship. Homer does not suggest that Achilles and his close friend Patroclus were lovers.[37][38] Although there is no direct evidence in the text of the Iliad that Achilles and Patroclus were lovers, this theory was expressed by some later authors. Commentators from classical antiquity to the present have often interpreted the relationship through the lens of their own cultures. In 5th-century BC Athens, the intense bond was often viewed in light of the Greek custom of paiderasteia. In Plato's Symposium, the participants in a dialogue about love assume that Achilles and Patroclus were a couple; Phaedrus argues that Achilles was the younger and more beautiful one so he was the beloved and Patroclus was the lover.[39] But ancient Greek had no words to distinguish heterosexual and homosexual,[40] and it was assumed that a man could both desire handsome young men and have sex with women. Many pairs of men throughout history have been compared to Achilles and Patroclus to imply a homosexual relationship.
The death of Achilles, even if considered solely as it occurred in the oldest sources, is a complex one, with many different versions.[41] In the oldest one, the Iliad, and as predicted by Hector with his dying breath, the hero's death was brought about by Paris with an arrow (to the heel according to Statius). In some versions, the god Apollo guided Paris' arrow. Some retellings also state that Achilles was scaling the gates of Troy and was hit with a poisoned arrow. All of these versions deny Paris any sort of valour, owing to the common conception that Paris was a coward and not the man his brother Hector was, and Achilles remained undefeated on the battlefield. His bones were mingled with those of Patroclus, and funeral games were held. He was represented in the Aethiopis as living after his death in the island of Leuke at the mouth of the river Danube.
Another version of Achilles' death is that he fell deeply in love with one of the Trojan princesses, Polyxena. Achilles asks Priam for Polyxena's hand in marriage. Priam is willing because it would mean the end of the war and an alliance with the world's greatest warrior. But while Priam is overseeing the private marriage of Polyxena and Achilles, Paris, who would have to give up Helen if Achilles married his sister, hides in the bushes and shoots Achilles with a divine arrow, killing him.
In the Odyssey, Agamemnon informs Achilles of his pompous burial and the erection of his mound at the Hellespont while they are receiving the dead suitors in Hades.[42] He claims they built a massive burial mound on the beach of Ilion that could be seen by anyone approaching from the ocean.[43] Achilles was cremated and his ashes buried in the same urn as those of Patroclus.[44] Paris was later killed by Philoctetes using the enormous bow of Heracles.
In Book 11 of Homer's Odyssey, Odysseus sails to the underworld and converses with the shades. One of these is Achilles, who when greeted as "blessed in life, blessed in death", responds that he would rather be a slave to the worst of masters than be king of all the dead. But Achilles then asks Odysseus of his son's exploits in the Trojan war, and when Odysseus tells of Neoptolemus' heroic actions, Achilles is filled with satisfaction.[45] This leaves the reader with an ambiguous understanding of how Achilles felt about the heroic life.
According to some accounts, he had married Medea in life, so that after both their deaths they were united in the Elysian Fields of Hades – as Hera promised Thetis in Apollonius' Argonautica (3rd century BC).
Achilles' armour was the object of a feud between Odysseus and Telamonian Ajax (Ajax the greater). They competed for it by giving speeches on why they were the bravest after Achilles to their Trojan prisoners, who after considering both men's presentations decided Odysseus was more deserving of the armour. Furious, Ajax cursed Odysseus, which earned him the ire of Athena, who temporarily made Ajax so mad with grief and anguish that he began killing sheep, thinking them his comrades. After a while, when Athena lifted his madness and Ajax realized that he had actually been killing sheep, he was so ashamed that he committed suicide. Odysseus eventually gave the armour to Neoptolemus, the son of Achilles. When Odysseus encounters the shade of Ajax much later in the House of Hades (Odyssey 11.543–566), Ajax is still so angry about the outcome of the competition that he refuses to speak to Odysseus.
A relic claimed to be Achilles' bronze-headed spear was for centuries preserved in the temple of Athena on the acropolis of Phaselis, Lycia, a port on the Pamphylian Gulf. The city was visited in 333 BC by Alexander the Great, who envisioned himself as the new Achilles and carried the Iliad with him, but his court biographers do not mention the spear; however, it was shown in the time of Pausanias in the 2nd century AD.[46][47]
Numerous paintings on pottery have suggested a tale not mentioned in the literary traditions. At some point in the war, Achilles and Ajax were playing a board game (petteia).[48][49] They were absorbed in the game and oblivious to the surrounding battle.[50] The Trojans attacked and reached the heroes, who were saved only by an intervention of Athena.[51]
The tomb of Achilles,[53] extant throughout antiquity in Troad,[54] was venerated by Thessalians, but also by Persian expeditionary forces, as well as by Alexander the Great and the Roman emperor Caracalla.[55] Achilles' cult was also to be found at other places, e.g. on the island of Astypalaea in the Sporades,[56] in Sparta, which had a sanctuary,[57] in Elis and in Achilles' homeland Thessaly, as well as in the Magna Graecia cities of Tarentum, Locri and Croton,[58] accounting for an almost Panhellenic cult to the hero.
The cult of Achilles is illustrated in the 500 BC Polyxena sarcophagus, which depicts the sacrifice of Polyxena near the tumulus of Achilles.[59] Strabo (13.1.32) also suggested that such a cult of Achilles existed in Troad:[52][60]
Near the Sigeium is a temple and monument of Achilles, and monuments also of Patroclus and Antilochus. The Ilienses perform sacred ceremonies in honour of them all, and even of Ajax. But they do not worship Hercules, alleging as a reason that he ravaged their country.
The spread and intensity of the hero's veneration among the Greeks who had settled on the northern coast of the Pontus Euxinus, today's Black Sea, appear to have been remarkable. An archaic cult is attested for the Milesian colony of Olbia as well as for an island in the middle of the Black Sea, today identified with Snake Island (Ukrainian Зміїний, Zmiinyi, near Kiliya, Ukraine). Early dedicatory inscriptions from the Greek colonies on the Black Sea (graffiti and inscribed clay disks, these possibly being votive offerings, from Olbia, the area of Berezan Island and the Tauric Chersonese[62]) attest the existence of a heroic cult of Achilles[63] from the sixth century BC onwards. The cult was still thriving in the third century AD, when dedicatory stelae from Olbia refer to an Achilles Pontárchēs (Ποντάρχης, roughly "lord of the Sea" or "of the Pontus Euxinus"), who was invoked as a protector of the city of Olbia, venerated on a par with Olympian gods such as the local Apollo Prostates, Hermes Agoraeus,[55] or Poseidon.[64]
Pliny the Elder (23–79 AD) in his Natural History mentions a "port of the Achæi" and an "island of Achilles", famous for the tomb of that "man" (portus Achaeorum, insula Achillis, tumulo eius viri clara), situated not far from Olbia and the Dnieper-Bug Estuary; furthermore, at 125 Roman miles from this island, he places a peninsula "which stretches forth in the shape of a sword" obliquely, called Dromos Achilleos (Ἀχιλλέως δρόμος, Achilléōs drómos "the Race-course of Achilles")[65] and considered the place of the hero's exercise or of games instituted by him.[55] This last feature of Pliny's account is considered to be the iconic spit, called today Tendra (or Kosa Tendra and Kosa Djarilgatch), situated between the mouth of the Dnieper and Karkinit Bay, but which is hardly 125 Roman miles (c. 185 km) away from the Dnieper-Bug estuary, as Pliny states. (To the "Race-course" he gives a length of 80 miles, c. 120 km, whereas the spit measures c. 70 km today.)
In the following chapter of his book, Pliny refers to the same island as Achillea and introduces two further names for it: Leuce or Macaron (from Greek [νῆσος] μακαρῶν "island of the blest"). The "present day" measures, he gives at this point, seem to account for an identification of Achillea or Leuce with today's Snake Island.[66] Pliny's contemporary Pomponius Mela (c. 43 AD) tells that Achilles was buried on an island named Achillea, situated between the Borysthenes and the Ister, adding to the geographical confusion.[67] Ruins of a square temple, measuring 30 meters to a side, possibly that dedicated to Achilles, were discovered by Captain Kritzikly in 1823 on Snake Island. A second exploration in 1840 showed that the construction of a lighthouse had destroyed all traces of this temple. A fifth century BC black-glazed lekythos inscription, found on the island in 1840, reads: "Glaukos, son of Poseidon, dedicated me to Achilles, lord of Leuke." In another inscription from the fifth or fourth century BC, a statue is dedicated to Achilles, lord of Leuke, by a citizen of Olbia, while in a further dedication, the city of Olbia confirms its continuous maintenance of the island's cult, again suggesting its quality as a place of a supra-regional hero veneration.[55]
The heroic cult dedicated to Achilles on Leuce seems to go back to an account from the lost epic Aethiopis according to which, after his untimely death, Thetis had snatched her son from the funeral pyre and removed him to a mythical Λεύκη Νῆσος (Leúkē Nêsos "White Island").[68] Already in the fifth century BC, Pindar had mentioned a cult of Achilles on a "bright island" (φαεννά νᾶσος, phaenná nâsos) of the Black Sea,[69] while in another of his works, Pindar would retell the story of the immortalized Achilles living on a geographically indefinite Island of the Blest together with other heroes such as his father Peleus and Cadmus.[70] Well known is the connection of these mythological Fortunate Isles (μακαρῶν νῆσοι, makárôn nêsoi) or the Homeric Elysium with the stream Oceanus which according to Greek mythology surrounds the inhabited world, which should have accounted for the identification of the northern strands of the Euxine with it.[55] Guy Hedreen has found further evidence for this connection of Achilles with the northern margin of the inhabited world in a poem by Alcaeus, speaking of "Achilles lord of Scythia"[71] and the opposition of North and South, as evoked by Achilles' fight against the Aethiopian prince Memnon, who in his turn would be removed to his homeland by his mother Eos after his death.
The Periplus of the Euxine Sea (c. 130 AD) gives the following details:
It is said that the goddess Thetis raised this island from the sea, for her son Achilles, who dwells there. Here is his temple and his statue, an archaic work. This island is not inhabited, and goats graze on it, not many, which the people who happen to arrive here with their ships, sacrifice to Achilles. In this temple are also deposited a great many holy gifts, craters, rings and precious stones, offered to Achilles in gratitude. One can still read inscriptions in Greek and Latin, in which Achilles is praised and celebrated. Some of these are worded in Patroclus' honour, because those who wish to be favored by Achilles, honour Patroclus at the same time. There are also in this island countless numbers of sea birds, which look after Achilles' temple. Every morning they fly out to sea, wet their wings with water, and return quickly to the temple and sprinkle it. And after they finish the sprinkling, they clean the hearth of the temple with their wings. Other people say still more, that some of the men who reach this island, come here intentionally. They bring animals in their ships, destined to be sacrificed. Some of these animals they slaughter, others they set free on the island, in Achilles' honour. But there are others, who are forced to come to this island by sea storms. As they have no sacrificial animals, but wish to get them from the god of the island himself, they consult Achilles' oracle. They ask permission to slaughter the victims chosen from among the animals that graze freely on the island, and to deposit in exchange the price which they consider fair. But in case the oracle denies them permission, because there is an oracle here, they add something to the price offered, and if the oracle refuses again, they add something more, until at last, the oracle agrees that the price is sufficient. And then the victim doesn't run away any more, but waits willingly to be caught. So, there is a great quantity of silver there, consecrated to the hero, as price for the sacrificial victims. To some of the people who come to this island, Achilles appears in dreams, to others he would appear even during their navigation, if they were not too far away, and would instruct them as to which part of the island they would better anchor their ships.[72]
The Greek geographer Dionysius Periegetes, who lived probably during the first century AD, wrote that the island was called Leuce "because the wild animals which live there are white. It is said that there, in Leuce island, reside the souls of Achilles and other heroes, and that they wander through the uninhabited valleys of this island; this is how Jove rewarded the men who had distinguished themselves through their virtues, because through virtue they had acquired everlasting honour".[73] Similarly, others relate the island's name to its white cliffs, snakes or birds dwelling there.[55][74] Pausanias was told that the island is "covered with forests and full of animals, some wild, some tame. In this island there is also Achilles' temple and his statue".[75] Leuce also had a reputation as a place of healing. Pausanias reports that the Delphic Pythia sent a lord of Croton to be cured of a chest wound.[76] Ammianus Marcellinus attributes the healing to waters (aquae) on the island.[77]
A number of important commercial port cities of the Greek waters were dedicated to Achilles. Herodotus, Pliny the Elder and Strabo reported on the existence of a town Achílleion (Ἀχίλλειον), built by settlers from Mytilene in the sixth century BC, close to the hero's presumed burial mound in the Troad.[54] Later attestations point to an Achílleion in Messenia (according to Stephanus Byzantinus) and an Achílleios (Ἀχίλλειος) in Laconia.[78] Nicolae Densuşianu recognized a connection to Achilles in the names of Aquileia and of the northern arm of the Danube delta, called Chilia (presumably from an older Achileii), though his conclusion, that Leuce had sovereign rights over the Black Sea, evokes modern rather than archaic sea-law.[72]
The kings of Epirus claimed to be descended from Achilles through his son, Neoptolemus. Alexander the Great, son of the Epirote princess Olympias, could therefore also claim this descent, and in many ways strove to be like his great ancestor. He is said to have visited the tomb of Achilles at Achilleion while passing Troy.[79] In AD 216 the Roman Emperor Caracalla, while on his way to war against Parthia, emulated Alexander by holding games around Achilles' tumulus.[80]
The Greek tragedian Aeschylus wrote a trilogy of plays about Achilles, given the title Achilleis by modern scholars. The tragedies relate the deeds of Achilles during the Trojan War, including his defeat of Hector and eventual death when an arrow shot by Paris and guided by Apollo punctures his heel. Extant fragments of the Achilleis and other Aeschylean fragments have been assembled to produce a workable modern play. The first part of the Achilleis trilogy, The Myrmidons, focused on the relationship between Achilles and the chorus, who represent the Achaean army and try to convince Achilles to give up his quarrel with Agamemnon; only a few lines survive today.[81] In Plato's Symposium, Phaedrus points out that Aeschylus portrayed Achilles as the lover and Patroclus as the beloved; Phaedrus argues that this is incorrect because Achilles, being the younger and more beautiful of the two, was the beloved, who loved his lover so much that he chose to die to avenge him.[82]
The tragedian Sophocles also wrote The Lovers of Achilles, a play with Achilles as the main character. Only a few fragments survive.[83]
Towards the end of the 5th century BC, a more negative view of Achilles emerges in Greek drama; Euripides refers to Achilles in a bitter or ironic tone in Hecuba, Electra, and Iphigenia in Aulis.[84]
The philosopher Zeno of Elea centred one of his paradoxes on an imaginary footrace between "swift-footed" Achilles and a tortoise, by which he attempted to show that Achilles could not catch up to a tortoise with a head start, and therefore that motion and change were impossible. As a student of the monist Parmenides and a member of the Eleatic school, Zeno believed time and motion to be illusions.
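A short worked example, not part of the classical sources, shows how modern analysis resolves the footrace: the infinitely many catch-up intervals form a geometric series with a finite sum. The speeds and head start below are illustrative assumptions (Achilles at v_A = 10 m/s, the tortoise at v_T = 1 m/s, head start d = 100 m):

    % catch-up time as a convergent geometric series
    t = (d / v_A) \sum_{k=0}^{\infty} (v_T / v_A)^k
      = d / (v_A - v_T)
      = 100 / (10 - 1) \approx 11.1 \text{ s}

Achilles thus overtakes the tortoise after about 11.1 seconds, even though the argument subdivides that interval into infinitely many steps.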
The Romans, who traditionally traced their lineage to Troy, took a highly negative view of Achilles.[84] Virgil refers to Achilles as a savage and a merciless butcher of men,[85] while Horace portrays Achilles ruthlessly slaying women and children.[86] Other writers, such as Catullus, Propertius, and Ovid, represent a second strand of disparagement, with an emphasis on Achilles' erotic career. This strand continues in Latin accounts of the Trojan War by writers such as Dictys Cretensis and Dares Phrygius and in Benoît de Sainte-Maure's Roman de Troie and Guido delle Colonne's Historia destructionis Troiae, which remained the most widely read and retold versions of the Matter of Troy until the 17th century.
Achilles was described by the Byzantine chronicler Leo the Deacon, not as Hellene, but as Scythian, while according to the Byzantine author John Malalas, his army was made up of a tribe previously known as Myrmidons and later as Bulgars.[87][88]
Achilles has been frequently the subject of operas, ballets and related genres.
Achilles has been portrayed in the following films and television series:
In 1890, Elisabeth of Bavaria, Empress of Austria, had a summer palace built in Corfu. The building is named the Achilleion, after Achilles. Its paintings and statuary depict scenes from the Trojan War, with particular focus on Achilles.
Achilles and the Nereid Cymothoe, Attic red-figure kantharos from Volci (Cabinet des Médailles, Bibliothèque nationale, Paris)
The embassy to Achilles, Attic red-figure hydria, c. 480 BC (Staatliche Antikensammlungen, Berlin)
Achilles sacrificing to Zeus for Patroclus' safe return,[91] from the Ambrosian Iliad, a 5th-century illuminated manuscript
Achilles and Penthesilea fighting, Lucanian red-figure bell-krater, late 5th century BC
Achilles killing Penthesilea, tondo of an Attic red-figure kylix, c. 465 BC, from Vulci.
Thetis and the Nereids mourning Achilles, Corinthian black-figure hydria, c. 555 BC (Louvre, Paris)
Achilles and Ajax playing the board game petteia, black-figure oinochoe, c. 530 BC (Capitoline Museums, Rome)
Head of Achilles depicted on a 4th-century BC coin from Kremaste, Phthia. Reverse: Thetis, wearing and holding the shield of Achilles with his AX monogram.
en/260.html.txt
ADDED
A year is the orbital period of a planetary body, for example, the Earth, moving in its orbit around the Sun. Due to the Earth's axial tilt, the course of a year sees the passing of the seasons, marked by change in weather, the hours of daylight, and, consequently, vegetation and soil fertility.
In temperate and subpolar regions around the planet, four seasons are generally recognized: spring, summer, autumn and winter. In tropical and subtropical regions, several geographical sectors do not present defined seasons; but in the seasonal tropics, the annual wet and dry seasons are recognized and tracked.
A calendar year is an approximation of the number of days of the Earth's orbital period, as counted in a given calendar. The Gregorian calendar, or modern calendar, presents its calendar year to be either a common year of 365 days or a leap year of 366 days, as do the Julian calendars; see below. For the Gregorian calendar, the average length of the calendar year (the mean year) across the complete leap cycle of 400 years is 365.2425 days. The ISO standard ISO 80000-3, Annex C, supports the symbol a (for Latin annus) to represent a year of either 365 or 366 days. In English, the abbreviations y and yr are commonly used.
In astronomy, the Julian year is a unit of time; it is defined as 365.25 days of exactly 86,400 seconds (SI base unit), totalling exactly 31,557,600 seconds in the Julian astronomical year.[1]
The word year is also used for periods loosely associated with, but not identical to, the calendar or astronomical year, such as the seasonal year, the fiscal year, the academic year, etc. Similarly, year can mean the orbital period of any planet; for example, a Martian year and a Venusian year are the times those planets take to complete one orbit. The term can also be used in reference to any long period or cycle, such as the Great Year.[2]
English year (via West Saxon ġēar (/jɛar/), Anglian ġēr) continues Proto-Germanic *jǣran (*jē₁ran).
Cognates are German Jahr, Old High German jār, Old Norse ár and Gothic jer, from the Proto-Indo-European noun *yeh₁r-om "year, season". Cognates also descended from the same Proto-Indo-European noun (with variation in suffix ablaut) are Avestan yārǝ "year", Greek ὥρα (hṓra) "year, season, period of time" (whence "hour"), Old Church Slavonic jarŭ, and Latin hornus "of this year".
Latin annus (a 2nd declension masculine noun; annum is the accusative singular; annī is genitive singular and nominative plural; annō the dative and ablative singular) is from a PIE noun *h₂et-no-, which also yielded Gothic aþn "year" (only the dative plural aþnam is attested).
Although most languages treat the word as thematic *yeh₁r-o-, there is evidence for an original derivation with an *-r/n suffix, *yeh₁-ro-. Both Indo-European words for year, *yeh₁-ro- and *h₂et-no-, would then be derived from verbal roots meaning "to go, move", *h₁ey- and *h₂et-, respectively (compare Vedic Sanskrit éti "goes", atasi "thou goest, wanderest").
A number of English words are derived from Latin annus, such as annual, annuity, anniversary, etc.; per annum means "each year", anno Domini means "in the year of the Lord".
The Greek word for "year", ἔτος, is cognate with Latin vetus "old", from the PIE word *wetos- "year", also preserved in this meaning in Sanskrit vat-sa-ras "year" and vat-sa- "yearling (calf)", the latter also reflected in Latin vitulus "bull calf", English wether "ram" (Old English weðer, Gothic wiþrus "lamb").
In some languages, it is common to count years by referencing to one season, as in "summers", or "winters", or "harvests". Examples include Chinese 年 "year", originally 秂, an ideographic compound of a person carrying a bundle of wheat denoting "harvest". Slavic besides godŭ "time period; year" uses lěto "summer; year".
In the International System of Quantities (ISO 80000-3), the year (symbol: a) is defined as either 365 days or 366 days.
Astronomical years do not have an integer number of days or lunar months. Any calendar that follows an astronomical year must have a system of intercalation such as leap years.
In the Julian calendar, the average (mean) length of a year is 365.25 days. In a non-leap year there are 365 days; in a leap year there are 366 days. A leap year occurs every fourth year, during which a leap day is intercalated into the month of February. The name "Leap Day" is applied to the added day.
The Revised Julian calendar, proposed in 1923 and used in some Eastern Orthodox Churches,
has 218 leap years every 900 years, for the average (mean) year length of 365.2422222 days, close to the length of the mean tropical year, 365.24219 days (relative error of 9·10⁻⁸).
In the year 2800 CE, the Gregorian and Revised Julian calendars will begin to differ by one calendar day.[3]
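A minimal Python sketch of the Revised Julian leap rule; note that the 200/600-mod-900 criterion is the standard Milanković rule, which the text above implies but does not spell out. The loop verifies the 218-in-900 count behind the quoted mean year length:

    def is_revised_julian_leap(year: int) -> bool:
        """Years divisible by 4 are leap, except century years, which
        are leap only when year mod 900 leaves 200 or 600."""
        if year % 4 != 0:
            return False
        if year % 100 != 0:
            return True
        return year % 900 in (200, 600)

    # 218 leap years per 900-year cycle -> mean year of 365.2422222 days
    leap_count = sum(is_revised_julian_leap(y) for y in range(1, 901))
    assert leap_count == 218
    print(365 + leap_count / 900)  # 365.24222...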
The Gregorian calendar attempts to cause the northward equinox to fall on or shortly before March 21 and hence it follows the northward equinox year, or tropical year.[4] Because 97 out of 400 years are leap years, the mean length of the Gregorian calendar year is 365.2425 days; with a relative error below one ppm (8·10⁻⁷) relative to the current length of the mean tropical year (365.24219 days) and even closer to the current March equinox year of 365.242374 days that it aims to match. It is estimated that by the year 4000 CE, the northward equinox will fall back by one day in the Gregorian calendar, not because of this difference, but due to the slowing of the Earth's rotation and the associated lengthening of the day.
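For comparison, a similar sketch (again an illustration, not part of the source) of the Gregorian leap rule, confirming the 97-in-400 count that yields the 365.2425-day mean:

    def is_gregorian_leap(year: int) -> bool:
        """Every 4th year is leap, except century years not divisible by 400."""
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    leap_count = sum(is_gregorian_leap(y) for y in range(1, 401))
    assert leap_count == 97
    print(365 + leap_count / 400)  # 365.2425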
Historically, lunisolar calendars intercalated entire leap months on an observational basis.
Lunisolar calendars have mostly fallen out of use except for liturgical reasons (Hebrew calendar, various Hindu calendars).
A modern adaptation of the historical Jalali calendar, known as the Solar Hijri calendar (1925), is a purely solar calendar with an irregular pattern of leap days based on observation (or astronomical computation), aiming to place new year (Nowruz) on the day of vernal equinox (for the time zone of Tehran), as opposed to using an algorithmic system of leap years.
A calendar era assigns a cardinal number to each sequential year, using a reference point in the past as the beginning of the era.
The worldwide standard era is Anno Domini, although some prefer the term Common Era because it has no explicit reference to Christianity. It was introduced in the 6th century and was intended to count years from the nativity of Jesus.[5]
The Anno Domini era is given the Latin abbreviation AD (for Anno Domini "in the year of the Lord"), or alternatively CE for "Common Era". Years before AD 1 are abbreviated BC for Before Christ or alternatively BCE for Before the Common Era.
Year numbers are based on inclusive counting, so that there is no "year zero".
In the modern alternative reckoning of Astronomical year numbering, positive numbers indicate years AD, the number 0 designates 1 BC, −1 designates 2 BC, and so on.
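A tiny illustrative helper (hypothetical, not from the source) for this mapping between BC years and astronomical year numbers:

    def bc_to_astronomical(year_bc: int) -> int:
        """1 BC -> 0, 2 BC -> -1, and so on (there is no year zero)."""
        return 1 - year_bc

    assert bc_to_astronomical(1) == 0
    assert bc_to_astronomical(2) == -1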
Financial and scientific calculations often use a 365-day calendar to simplify daily rates.
A fiscal year or financial year is a 12-month period used for calculating annual financial statements in businesses and other organizations. In many jurisdictions, regulations regarding accounting require such reports once per twelve months, but do not require that the twelve months constitute a calendar year.
For example, in Canada and India the fiscal year runs from April 1; in the United Kingdom it runs from April 1 for purposes of corporation tax and government financial statements, but from April 6 for purposes of personal taxation and payment of state benefits; in Australia it runs from July 1; while in the United States the fiscal year of the federal government runs from October 1.
An academic year is the annual period during which a student attends an educational institution. The academic year may be divided into academic terms, such as semesters or quarters. The school year in many countries starts in August or September and ends in May, June or July. In Israel the academic year begins around October or November, aligned with the second month of the Hebrew Calendar.
Some schools in the UK and USA divide the academic year into three roughly equal-length terms (called trimesters or quarters in the USA), roughly coinciding with autumn, winter, and spring. At some, a shortened summer session, sometimes considered part of the regular academic year, is attended by students on a voluntary or elective basis. Other schools break the year into two main semesters, a first (typically August through December) and a second semester (January through May). Each of these main semesters may be split in half by mid-term exams, and each of the halves is referred to as a quarter (or term in some countries). There may also be a voluntary summer session and/or a short January session.
Some other schools, including some in the United States, have four marking periods. Some schools in the United States, notably Boston Latin School, may divide the year into five or more marking periods. Defenders of this practice argue that there is perhaps a positive correlation between report frequency and academic achievement.
There are typically 180 days of teaching each year in schools in the US, excluding weekends and breaks, while there are 190 days for pupils in state schools in Canada, New Zealand and the United Kingdom, and 200 for pupils in Australia.
In India the academic year normally starts from June 1 and ends on May 31. Though schools start closing from mid-March, the actual academic closure is on May 31 and in Nepal it starts from July 15.[citation needed]
Schools and universities in Australia typically have academic years that roughly align with the calendar year (i.e., starting in February or March and ending in October to December), as the southern hemisphere experiences summer from December to February.
The Julian year, as used in astronomy and other sciences, is a time unit defined as exactly 365.25 days. This is the normal meaning of the unit "year" (symbol "a" from the Latin annus) used in various scientific contexts. The Julian century of 36525 days and the Julian millennium of 365250 days are used in astronomical calculations. Fundamentally, expressing a time interval in Julian years is a way to precisely specify how many days (not how many "real" years), for long time intervals where stating the number of days would be unwieldy and unintuitive. By convention, the Julian year is used in the computation of the distance covered by a light-year.
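As a quick arithmetic check (not part of the source), the Julian-year convention makes the light-year an exact quantity, since the speed of light is also defined exactly:

    SECONDS_PER_DAY = 86_400           # SI seconds
    JULIAN_YEAR_DAYS = 365.25
    SPEED_OF_LIGHT = 299_792_458       # m/s, exact by definition

    julian_year_s = int(JULIAN_YEAR_DAYS * SECONDS_PER_DAY)  # 31,557,600 s
    light_year_m = SPEED_OF_LIGHT * julian_year_s            # exact integer
    print(f"{julian_year_s:,} s, {light_year_m:,} m")
    # 31,557,600 s, 9,460,730,472,580,800 m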
In the Unified Code for Units of Measure, the symbol a (without subscript) always refers to the Julian year, a_j, of exactly 31,557,600 seconds.
The SI multiplier prefixes may be applied to it to form ka (kiloannus), Ma (megaannus), etc.
Each of these three years can be loosely called an astronomical year.
The sidereal year is the time taken for the Earth to complete one revolution of its orbit, as measured against a fixed frame of reference (such as the fixed stars, Latin sidera, singular sidus). Its average duration is 365.256363004 days (365 d 6 h 9 min 9.76 s) (at the epoch J2000.0 = January 1, 2000, 12:00:00 TT).[6]
Today the mean tropical year is defined as the period of time for the mean ecliptic longitude of the Sun to increase by 360 degrees.[7] Since the Sun's ecliptic longitude is measured with respect to the equinox,[8] the tropical year comprises a complete cycle of the seasons; because of the biological and socio-economic importance of the seasons, the tropical year is the basis of most calendars. The modern definition of mean tropical year differs from the actual time between passages of, e.g., the northward equinox for several reasons explained below. Because of the Earth's axial precession, this year is about 20 minutes shorter than the sidereal year. The mean tropical year is approximately 365 days, 5 hours, 48 minutes, 45 seconds, using the modern definition.[9] (= 365.24219 days of 86400 SI seconds)
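A quick conversion (illustrative, not from the source) from the decimal day count to the days-hours-minutes-seconds form quoted above:

    def days_to_dhms(days: float):
        """Split a decimal day count into whole days, hours, minutes, seconds."""
        d = int(days)
        rem = (days - d) * 24
        h = int(rem)
        rem = (rem - h) * 60
        m = int(rem)
        s = (rem - m) * 60
        return d, h, m, round(s)

    print(days_to_dhms(365.24219))  # (365, 5, 48, 45)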
The anomalistic year is the time taken for the Earth to complete one revolution with respect to its apsides. The orbit of the Earth is elliptical; the extreme points, called apsides, are the perihelion, where the Earth is closest to the Sun (January 5, 07:48 UT in 2020), and the aphelion, where the Earth is farthest from the Sun (July 4 11:35 UT in 2020). The anomalistic year is usually defined as the time between perihelion passages. Its average duration is 365.259636 days (365 d 6 h 13 min 52.6 s) (at the epoch J2011.0).[10]
The draconic year, draconitic year, eclipse year, or ecliptic year is the time taken for the Sun (as seen from the Earth) to complete one revolution with respect to the same lunar node (a point where the Moon's orbit intersects the ecliptic). The year is associated with eclipses: these occur only when both the Sun and the Moon are near these nodes; so eclipses occur within about a month of every half eclipse year. Hence there are two eclipse seasons every eclipse year. The average duration of the eclipse year is 346.620076 days (346 d 14 h 52 min 54 s) (at the epoch J2000.0).
This term is sometimes erroneously used for the draconic or nodal period of lunar precession, that is the period of a complete revolution of the Moon's ascending node around the ecliptic: 18.612815932 Julian years (6798.331019 days; at the epoch J2000.0).
The full moon cycle is the time for the Sun (as seen from the Earth) to complete one revolution with respect to the perigee of the Moon's orbit. This period is associated with the apparent size of the full moon, and also with the varying duration of the synodic month. The duration of one full moon cycle is 411.78443 days (411 days 18 hours 49 minutes 35 seconds) (at the epoch J2000.0).
The lunar year comprises twelve full cycles of the phases of the Moon, as seen from Earth. It has a duration of approximately 354.37 days. Muslims use this for celebrating their Eids and for marking the start of the fasting month of Ramadan.
A Muslim calendar year is based on the lunar cycle.
The vague year, from annus vagus or wandering year, is an integral approximation to the year equaling 365 days, which wanders in relation to more exact years. Typically the vague year is divided into 12 schematic months of 30 days each plus 5 epagomenal days. The vague year was used in the calendars of Ethiopia, Ancient Egypt, Iran, Armenia and in Mesoamerica among the Aztecs and Maya.[11] It is still used by many Zoroastrian communities.
A heliacal year is the interval between the heliacal risings of a star. It differs from the sidereal year for stars away from the ecliptic due mainly to the precession of the equinoxes.
The Sothic year is the interval between heliacal risings of the star Sirius. It is currently less than the sidereal year and its duration is very close to the Julian year of 365.25 days.
The Gaussian year is the sidereal year for a planet of negligible mass (relative to the Sun) and unperturbed by other planets that is governed by the Gaussian gravitational constant. Such a planet would be slightly closer to the Sun than Earth's mean distance. Its length is 365.2568983 days (365 d 6 h 9 min 56 s).
The Besselian year is a tropical year that starts when the (fictitious) mean Sun reaches an ecliptic longitude of 280°. This is currently on or close to January 1. It is named after the 19th-century German astronomer and mathematician Friedrich Bessel. The following equation can be used to compute the current Besselian epoch (in years):[12] B = 1900.0 + (Julian date − 2415020.31352) / 365.242198781
The TT subscript indicates that for this formula, the Julian date should use the Terrestrial Time scale, or its predecessor, ephemeris time.
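A direct transcription of that equation in Python (a sketch; the test value is an assumption chosen because B1950.0 corresponds to JD 2433282.4235 on the TT scale):

    def besselian_epoch(jd_tt: float) -> float:
        """Besselian epoch from a Julian date on the Terrestrial Time scale."""
        return 1900.0 + (jd_tt - 2415020.31352) / 365.242198781

    print(besselian_epoch(2433282.4235))  # ~1950.0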
The exact length of an astronomical year changes over time.
Mean year lengths in this section are calculated for 2000, and differences in year lengths, compared to 2000, are given for past and future years. In the tables a day is 86,400 SI seconds long.[13][14][15][16]
An average Gregorian year is 365.2425 days (52.1775 weeks, 8765.82 hours, 525949.2 minutes or 31556952 seconds). For this calendar, a common year is 365 days (8760 hours, 525600 minutes or 31536000 seconds), and a leap year is 366 days (8784 hours, 527040 minutes or 31622400 seconds). The 400-year cycle of the Gregorian calendar has 146097 days and hence exactly 20871 weeks.
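The exact whole-week property of the 400-year cycle can be verified directly (a trivial check, not part of the source):

    days_per_cycle = 400 * 365 + 97   # 97 leap days per 400 Gregorian years
    assert days_per_cycle == 146_097
    assert days_per_cycle % 7 == 0
    print(days_per_cycle // 7)        # 20871 weeks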
The Great Year, or equinoctial cycle, corresponds to a complete revolution of the equinoxes around the ecliptic. Its length is about 25,700 years.
The Galactic year is the time it takes Earth's Solar System to revolve once around the galactic center. It comprises roughly 230 million Earth years.[17]
A seasonal year is the time between successive recurrences of a seasonal event such as the flooding of a river, the migration of a species of bird, the flowering of a species of plant, the first frost, or the first scheduled game of a certain sport. All of these events can have wide variations of more than a month from year to year.
In the International System of Quantities the symbol for the year as a unit of time is a, taken from the Latin word annus.[18]
In English, the abbreviations "y" or "yr" are more commonly used in non-scientific literature, but also specifically in geology and paleontology, where "kyr, myr, byr" (thousands, millions, and billions of years, respectively) and similar abbreviations are used to denote intervals of time remote from the present.[18][19][20]
NIST SP811[21] and ISO 80000-3:2006[22] support the symbol a as the unit of time for a year. In English, the abbreviations y and yr are also used.[18][19][20]
The Unified Code for Units of Measure[23] disambiguates the varying symbologies of ISO 1000, ISO 2955 and ANSI X3.50[24] by using:

a_t = 365.24219 days
a_j = 365.25 days
a_g = 365.2425 days

where:

a_t is the mean tropical year, a_j the mean Julian year, and a_g the mean Gregorian year.
The International Union of Pure and Applied Chemistry (IUPAC) and the International Union of Geological Sciences have jointly recommended defining the annus, with symbol a, as the length of the tropical year in the year 2000: 1 a = 31,556,925.445 seconds (approximately 365.24219 ephemeris days).
This differs from the above definition of 365.25 days by about 20 parts per million. The joint document says that definitions such as the Julian year "bear an inherent, pre-programmed obsolescence because of the variability of Earth’s orbital movement", but then proposes using the length of the tropical year as of 2000 AD (specified down to the millisecond), which suffers from the same problem.[25][26] (The tropical year oscillates with time by more than a minute.)
The notation has proved controversial as it conflicts with an earlier convention among geoscientists to use a specifically for years ago, and y or yr for a one-year time period.[26]
For the following, there are alternative forms which elide the consecutive vowels, such as kilannus, megannus, etc. The exponents and exponential notations are typically used for calculating and in displaying calculations, and for conserving space, as in tables of data.
In astronomy, geology, and paleontology, the abbreviation yr for years and ya for years ago are sometimes used, combined with prefixes for thousand, million, or billion.[19][29] They are not SI units, using y to abbreviate the English "year", but following ambiguous international recommendations, use either the standard English first letters as prefixes (t, m, and b) or metric prefixes (k, M, and G) or variations on metric prefixes (k, m, g). In archaeology, dealing with more recent periods, normally expressed dates, e.g. "22,000 years ago" may be used as a more accessible equivalent of a Before Present ("BP") date.
These abbreviations include kya ("thousand years ago"), mya ("million years ago"), and bya ("billion years ago").
Use of mya and bya is deprecated in modern geophysics, the recommended usage being Ma and Ga for dates Before Present, but "m.y." for the duration of epochs.[19][20] This ad hoc distinction between "absolute" time and time intervals is somewhat controversial amongst members of the Geological Society of America.[31]
Note that on graphs using ya units on the horizontal axis, time flows from right to left, which may seem counter-intuitive. If the ya units are on the vertical axis, time flows from top to bottom, which is probably easier to understand than conventional notation.[clarification needed]
en/2600.html.txt
ADDED
The diff for this file is too large to render.
en/2601.html.txt
ADDED
The diff for this file is too large to render.
en/2602.html.txt
ADDED
The diff for this file is too large to render.
en/2603.html.txt
ADDED
The Paleolithic or Palaeolithic or Palæolithic (/ˌpeɪl-, ˌpælioʊˈlɪθɪk/), also called the Old Stone Age, is a period in human prehistory distinguished by the original development of stone tools that covers c. 99% of the time period of human technological prehistory.[1] It extends from the earliest known use of stone tools by hominins c. 3.3 million years ago, to the end of the Pleistocene c. 11,650 cal BP.[2]
The Paleolithic Age in Europe preceded the Mesolithic Age, although the date of the transition varies geographically by several thousand years. During the Paleolithic Age, hominins grouped together in small societies such as bands and subsisted by gathering plants, fishing, and hunting or scavenging wild animals.[3] The Paleolithic Age is characterized by the use of knapped stone tools, although at the time humans also used wood and bone tools. Other organic commodities were adapted for use as tools, including leather and vegetable fibers; however, due to rapid decomposition, these have not survived to any great degree.
About 50,000 years ago a marked increase in the diversity of artifacts occurred. In Africa, bone artifacts and the first art appear in the archaeological record. The first evidence of human fishing is also noted, from artifacts in places such as Blombos cave in South Africa. Archaeologists classify artifacts of the last 50,000 years into many different categories, such as projectile points, engraving tools, knife blades, and drilling and piercing tools.
Humankind gradually evolved from early members of the genus Homo—such as Homo habilis, who used simple stone tools—into anatomically modern humans as well as behaviourally modern humans by the Upper Paleolithic.[4] During the end of the Paleolithic Age, specifically the Middle or Upper Paleolithic Age, humans began to produce the earliest works of art and to engage in religious or spiritual behavior such as burial and ritual.[5][page needed][6][need quotation to verify] Conditions during the Paleolithic Age went through a set of glacial and interglacial periods in which the climate periodically fluctuated between warm and cool temperatures. Archaeological and genetic data suggest that the source populations of Paleolithic humans survived in sparsely-wooded areas and dispersed through areas of high primary productivity while avoiding dense forest-cover.[7]
By c. 50,000 – c. 40,000 BP, the first humans set foot in Australia. By c. 45,000 BP, humans lived at 61°N latitude in Europe.[8] By c. 30,000 BP, Japan was reached, and by c. 27,000 BP humans were present in Siberia, above the Arctic Circle.[8] At the end of the Upper Paleolithic Age a group of humans crossed Beringia and quickly expanded throughout the Americas.[9]
The term "Palaeolithic" was coined by archaeologist John Lubbock in 1865.[10] It derives from Greek: παλαιός, palaios, "old"; and λίθος, lithos, "stone", meaning "old age of the stone" or "Old Stone Age".
The Paleolithic coincides almost exactly with the Pleistocene epoch of geologic time, which lasted from 2.6 million years ago to about 12,000 years ago.[11] This epoch experienced important geographic and climatic changes that affected human societies.
During the preceding Pliocene, continents had continued to drift, moving from positions possibly as far as 250 km (160 mi) from their present locations to positions only 70 km (43 mi) from them. South America became linked to North America through the Isthmus of Panama, bringing a nearly complete end to South America's distinctive marsupial fauna. The formation of the isthmus had major consequences on global temperatures, because warm equatorial ocean currents were cut off, and the cold Arctic and Antarctic waters lowered temperatures in the now-isolated Atlantic Ocean.
Most of Central America formed during the Pliocene to connect the continents of North and South America, allowing fauna from these continents to leave their native habitats and colonize new areas.[12] Africa's collision with Asia created the Mediterranean, cutting off the remnants of the Tethys Ocean. During the Pleistocene, the modern continents were essentially at their present positions; the tectonic plates on which they sit have probably moved at most 100 km (62 mi) from each other since the beginning of the period.[13]
Climates during the Pliocene became cooler and drier, and seasonal, similar to modern climates. Ice sheets grew on Antarctica. The formation of an Arctic ice cap around 3 million years ago is signaled by an abrupt shift in oxygen isotope ratios and ice-rafted cobbles in the North Atlantic and North Pacific Ocean beds.[14] Mid-latitude glaciation probably began before the end of the epoch. The global cooling that occurred during the Pliocene may have spurred on the disappearance of forests and the spread of grasslands and savannas.[12]
The Pleistocene climate was characterized by repeated glacial cycles during which continental glaciers pushed to the 40th parallel in some places. Four major glacial events have been identified, as well as many minor intervening events. A major event is a general glacial excursion, termed a "glacial". Glacials are separated by "interglacials". During a glacial, the glacier experiences minor advances and retreats. The minor excursion is a "stadial"; times between stadials are "interstadials". Each glacial advance tied up huge volumes of water in continental ice sheets 1,500–3,000 m (4,900–9,800 ft) deep, resulting in temporary sea level drops of 100 m (330 ft) or more over the entire surface of the Earth. During interglacial times, such as at present, drowned coastlines were common, mitigated by isostatic or other emergent motion of some regions.
The effects of glaciation were global. Antarctica was ice-bound throughout the Pleistocene and the preceding Pliocene. The Andes were covered in the south by the Patagonian ice cap. There were glaciers in New Zealand and Tasmania. The now decaying glaciers of Mount Kenya, Mount Kilimanjaro, and the Ruwenzori Range in east and central Africa were larger. Glaciers existed in the mountains of Ethiopia and to the west in the Atlas mountains. In the northern hemisphere, many glaciers fused into one. The Cordilleran Ice Sheet covered the North American northwest; the Laurentide covered the east. The Fenno-Scandian ice sheet covered northern Europe, including Great Britain; the Alpine ice sheet covered the Alps. Scattered domes stretched across Siberia and the Arctic shelf. The northern seas were frozen. During the late Upper Paleolithic (Latest Pleistocene) c. 18,000 BP, the Beringia land bridge between Asia and North America was blocked by ice,[13] which may have prevented early Paleo-Indians such as the Clovis culture from directly crossing Beringia to reach the Americas.
According to Mark Lynas (through collected data), the Pleistocene's overall climate could be characterized as a continuous El Niño with trade winds in the south Pacific weakening or heading east, warm air rising near Peru, warm water spreading from the west Pacific and the Indian Ocean to the east Pacific, and other El Niño markers.[15]
The Paleolithic is often held to finish at the end of the ice age (the end of the Pleistocene epoch), and Earth's climate became warmer. This may have caused or contributed to the extinction of the Pleistocene megafauna, although it is also possible that the late Pleistocene extinctions were (at least in part) caused by other factors such as disease and overhunting by humans.[16][17] New research suggests that the extinction of the woolly mammoth may have been caused by the combined effect of climatic change and human hunting.[17] Scientists suggest that climate change during the end of the Pleistocene caused the mammoths' habitat to shrink in size, resulting in a drop in population. The small populations were then hunted out by Paleolithic humans.[17] The global warming that occurred during the end of the Pleistocene and the beginning of the Holocene may have made it easier for humans to reach mammoth habitats that were previously frozen and inaccessible.[17] Small populations of woolly mammoths survived on isolated Arctic islands, Saint Paul Island and Wrangel Island, until c. 3700 BP and c. 1700 BP respectively. The Wrangel Island population became extinct around the same time the island was settled by prehistoric humans.[18] There is no evidence of prehistoric human presence on Saint Paul island (though early human settlements dating as far back as 6500 BP were found on the nearby Aleutian Islands).[19]
Nearly all of our knowledge of Paleolithic human culture and way of life comes from archaeology and ethnographic comparisons to modern hunter-gatherer cultures such as the !Kung San who live similarly to their Paleolithic predecessors.[21] The economy of a typical Paleolithic society was a hunter-gatherer economy.[22] Humans hunted wild animals for meat and gathered food, firewood, and materials for their tools, clothes, or shelters.[22]
Human population density was very low, around only one person per square mile.[3] This was most likely due to low body fat, infanticide, women regularly engaging in intense endurance exercise,[23] late weaning of infants, and a nomadic lifestyle.[3] Like contemporary hunter-gatherers, Paleolithic humans enjoyed an abundance of leisure time unparalleled in both Neolithic farming societies and modern industrial societies.[22][24] At the end of the Paleolithic, specifically the Middle or Upper Paleolithic, humans began to produce works of art such as cave paintings, rock art and jewellery and began to engage in religious behavior such as burial and ritual.[25]
At the beginning of the Paleolithic, hominins were found primarily in eastern Africa, east of the Great Rift Valley. Most known hominin fossils dating earlier than one million years before present are found in this area, particularly in Kenya, Tanzania, and Ethiopia.
By c. 2,000,000 – c. 1,500,000 BP, groups of hominins began leaving Africa and settling southern Europe and Asia. Southern Caucasus was occupied by c. 1,700,000 BP, and northern China was reached by c. 1,660,000 BP. By the end of the Lower Paleolithic, members of the hominin family were living in what is now China, western Indonesia, and, in Europe, around the Mediterranean and as far north as England, France, southern Germany, and Bulgaria. Their further northward expansion may have been limited by the lack of control of fire: studies of cave settlements in Europe indicate no regular use of fire prior to c. 400,000 – c. 300,000 BP.[26]
East Asian fossils from this period are typically placed in the genus Homo erectus. Very little fossil evidence is available at known Lower Paleolithic sites in Europe, but it is believed that hominins who inhabited these sites were likewise Homo erectus. There is no evidence of hominins in America, Australia, or almost anywhere in Oceania during this time period.
The fates of these early colonists, and their relationships to modern humans, are still subject to debate. According to current archaeological and genetic models, there were at least two notable expansion events subsequent to the peopling of Eurasia c. 2,000,000 – c. 1,500,000 BP. Around 500,000 BP a group of early humans, frequently called Homo heidelbergensis, came to Europe from Africa and eventually evolved into Homo neanderthalensis (Neanderthals). In the Middle Paleolithic, Neanderthals were present in the region now occupied by Poland.
Both Homo erectus and Homo neanderthalensis became extinct by the end of the Paleolithic. Descended from Homo sapiens, the anatomically modern Homo sapiens sapiens emerged in eastern Africa c. 200,000 BP, left Africa around 50,000 BP, and expanded throughout the planet. Multiple hominid groups coexisted for some time in certain locations. Homo neanderthalensis were still found in parts of Eurasia c. 30,000 BP, and engaged in an unknown degree of interbreeding with Homo sapiens sapiens. DNA studies also suggest an unknown degree of interbreeding between Homo sapiens sapiens and Homo sapiens denisova.[27]
Hominin fossils not belonging either to Homo neanderthalensis or to Homo sapiens species, found in the Altai Mountains and Indonesia, were radiocarbon dated to c. 30,000 – c. 40,000 BP and c. 17,000 BP respectively.
For the duration of the Paleolithic, human populations remained low, especially outside the equatorial region. The entire population of Europe between 16,000 and 11,000 BP likely averaged some 30,000 individuals, and between 40,000 and 16,000 BP, it was even lower at 4,000–6,000 individuals.[28]
Paleolithic humans made tools of stone, bone, and wood.[22] The early Paleolithic hominins, Australopithecus, were the first users of stone tools. Excavations in Gona, Ethiopia have produced thousands of artifacts, and through radioisotopic dating and magnetostratigraphy, the sites can be firmly dated to 2.6 million years ago. Evidence shows these early hominins intentionally selected raw materials with good flaking qualities and chose appropriately sized stones for their needs to produce sharp-edged tools for cutting.[29]
The earliest Paleolithic stone tool industry, the Oldowan, began around 2.6 million years ago.[30] It contained tools such as choppers, burins, and stitching awls. It was completely replaced around 250,000 years ago by the more complex Acheulean industry, which was first conceived by Homo ergaster around 1.8–1.65 million years ago.[31] The Acheulean implements completely vanish from the archaeological record around 100,000 years ago and were replaced by more complex Middle Paleolithic tool kits such as the Mousterian and the Aterian industries.[32]
Lower Paleolithic humans used a variety of stone tools, including hand axes and choppers. Although they appear to have used hand axes often, there is disagreement about their use. Interpretations range from cutting and chopping tools, to digging implements, to flaking cores, to use in traps, to a purely ritual significance, perhaps in courting behavior. William H. Calvin has suggested that some hand axes could have served as "killer Frisbees" meant to be thrown at a herd of animals at a waterhole so as to stun one of them. There are no indications of hafting, and some artifacts are far too large for that. Thus, a thrown hand axe would not usually have penetrated deeply enough to cause very serious injuries. Nevertheless, it could have been an effective weapon for defense against predators. Choppers and scrapers were likely used for skinning and butchering scavenged animals, and sharp-ended sticks were often obtained for digging up edible roots. Presumably, early humans used wooden spears as early as 5 million years ago to hunt small animals, much as their relatives, chimpanzees, have been observed to do in Senegal, Africa.[33] Lower Paleolithic humans constructed shelters, such as the possible wood hut at Terra Amata.
Fire was used by the Lower Paleolithic hominins Homo erectus and Homo ergaster as early as 300,000 to 1.5 million years ago and possibly even earlier by the early Lower Paleolithic (Oldowan) hominin Homo habilis or by robust Australopithecines such as Paranthropus.[3] However, the use of fire only became common in the societies of the following Middle Stone Age and Middle Paleolithic.[2] Use of fire reduced mortality rates and provided protection against predators.[34] Early hominins may have begun to cook their food as early as the Lower Paleolithic (c. 1.9 million years ago) or at the latest in the early Middle Paleolithic (c. 250,000 years ago).[35] Some scientists have hypothesized that hominins began cooking food to defrost frozen meat, which would help ensure their survival in cold regions.[35]
The Lower Paleolithic Homo erectus possibly invented rafts (c. 840,000 – c. 800,000 BP) to travel over large bodies of water, which may have allowed a group of Homo erectus to reach the island of Flores and evolve into the small hominin Homo floresiensis. However, this hypothesis is disputed within the anthropological community.[36][37] The possible use of rafts during the Lower Paleolithic may indicate that Lower Paleolithic hominins such as Homo erectus were more advanced than previously believed, and may have even spoken an early form of modern language.[36] Supplementary evidence from Neanderthal and modern human sites located around the Mediterranean Sea, such as Coa de sa Multa (c. 300,000 BP), has also indicated that both Middle and Upper Paleolithic humans used rafts to travel over large bodies of water (i.e. the Mediterranean Sea) for the purpose of colonizing other bodies of land.[36][38]
By around 200,000 BP, Middle Paleolithic stone tool manufacturing spawned a tool making technique known as the prepared-core technique, that was more elaborate than previous Acheulean techniques.[4] This technique increased efficiency by allowing the creation of more controlled and consistent flakes.[4] It allowed Middle Paleolithic humans to create stone tipped spears, which were the earliest composite tools, by hafting sharp, pointy stone flakes onto wooden shafts. In addition to improving tool making methods, the Middle Paleolithic also saw an improvement of the tools themselves that allowed access to a wider variety and amount of food sources. For example, microliths or small stone tools or points were invented around 70,000–65,000 BP and were essential to the invention of bows and spear throwers in the following Upper Paleolithic.[34]
Harpoons were invented and used for the first time during the late Middle Paleolithic (c. 90,000 BP); the invention of these devices brought fish into the human diet, which provided a hedge against starvation and a more abundant food supply.[38][39] Thanks to their technology and their advanced social structures, Paleolithic groups such as the Neanderthals—who had a Middle Paleolithic level of technology—appear to have hunted large game just as well as Upper Paleolithic modern humans,[40] and the Neanderthals in particular may have likewise hunted with projectile weapons.[41] Nonetheless, Neanderthal use of projectile weapons in hunting occurred very rarely (or perhaps never), and the Neanderthals hunted large game animals mostly by ambushing them and attacking them with mêlée weapons such as thrusting spears rather than attacking them from a distance with projectile weapons.[25][42]
During the Upper Paleolithic, further inventions were made, such as the net (c. 22,000 or c. 29,000 BP),[34] the bolas,[43] the spear thrower (c. 30,000 BP), the bow and arrow (c. 25,000 or c. 30,000 BP)[3] and the oldest example of ceramic art, the Venus of Dolní Věstonice (c. 29,000 – c. 25,000 BP).[3] Early dogs were domesticated sometime between 30,000 and 14,000 BP, presumably to aid in hunting.[44] However, the earliest instances of successful domestication of dogs may be much more ancient than this. Evidence from canine DNA collected by Robert K. Wayne suggests that dogs may have been first domesticated in the late Middle Paleolithic, around 100,000 BP, or perhaps even earlier.[45]
Archaeological evidence from the Dordogne region of France demonstrates that members of the European early Upper Paleolithic culture known as the Aurignacian used calendars (c. 30,000 BP). This was a lunar calendar that was used to document the phases of the moon. Genuine solar calendars did not appear until the Neolithic.[46] Upper Paleolithic cultures were probably able to time the migration of game animals such as wild horses and deer.[47] This ability allowed humans to become efficient hunters and to exploit a wide variety of game animals.[47] Recent research indicates that the Neanderthals timed their hunts and the migrations of game animals long before the beginning of the Upper Paleolithic.[40]
The social organization of the earliest Paleolithic (Lower Paleolithic) societies remains largely unknown to scientists, though Lower Paleolithic hominins such as Homo habilis and Homo erectus are likely to have had more complex social structures than chimpanzee societies.[48] Late Oldowan/Early Acheulean humans such as Homo ergaster/Homo erectus may have been the first people to invent central campsites or home bases and incorporate them into their foraging and hunting strategies like contemporary hunter-gatherers, possibly as early as 1.7 million years ago;[4] however, the earliest solid evidence for the existence of home bases or central campsites (hearths and shelters) among humans only dates back to 500,000 years ago.[4]
Similarly, scientists disagree whether Lower Paleolithic humans were largely monogamous or polygynous.[48] In particular, the provisioning model suggests that bipedalism arose in pre-Paleolithic australopithecine societies as an adaptation to monogamous lifestyles; however, other researchers note that sexual dimorphism is more pronounced in Lower Paleolithic humans such as Homo erectus than in modern humans, who are less polygynous than other primates. Because species with the most pronounced sexual dimorphism tend to be polygynous, this suggests that Lower Paleolithic humans had a largely polygynous lifestyle.[49]
From the Paleolithic through the early Neolithic farming tribes, human societies lived without states or organized governments. For most of the Lower Paleolithic, human societies were possibly more hierarchical than their Middle and Upper Paleolithic descendants, and probably were not grouped into bands,[50] though during the end of the Lower Paleolithic, the latest populations of the hominin Homo erectus may have begun living in small-scale (possibly egalitarian) bands similar to both Middle and Upper Paleolithic societies and modern hunter-gatherers.[50]
Middle Paleolithic societies, unlike Lower Paleolithic and early Neolithic ones, consisted of bands that ranged from 20–30 or 25–100 members and were usually nomadic.[3][50] These bands were formed by several families. Bands sometimes joined together into larger "macrobands" for activities such as acquiring mates and celebrations, or where resources were abundant.[3] By the end of the Paleolithic era (c. 10,000 BP), people began to settle down into permanent locations and to rely on agriculture for sustenance in many places. Much evidence exists that humans took part in long-distance trade between bands for rare commodities (such as ochre, which was often used for religious purposes such as ritual[51][46]) and raw materials as early as 120,000 years ago, in the Middle Paleolithic.[25] Inter-band trade may have appeared during the Middle Paleolithic because it would have helped ensure survival by allowing bands to exchange resources and commodities such as raw materials during times of relative scarcity (i.e. famine or drought).[25] As in modern hunter-gatherer societies, individuals in Paleolithic societies may have been subordinate to the band as a whole.[21][22] Both Neanderthals and modern humans took care of the elderly members of their societies during the Middle and Upper Paleolithic.[25]
Some sources claim that most Middle and Upper Paleolithic societies were possibly fundamentally egalitarian[3][22][38][52] and may have rarely or never engaged in organized violence between groups (i.e. war).[38][53][54][55]
Some Upper Paleolithic societies in resource-rich environments (such as societies in Sungir, in what is now Russia) may have had more complex and hierarchical organization (such as tribes with a pronounced hierarchy and a somewhat formal division of labor) and may have engaged in endemic warfare.[38][56] Some argue that there was no formal leadership during the Middle and Upper Paleolithic. Like contemporary egalitarian hunter-gatherers such as the Mbuti pygmies, societies may have made decisions by communal consensus rather than by appointing permanent rulers such as chiefs and monarchs.[6] Nor was there a formal division of labor during the Paleolithic: each member of the group was skilled at all tasks essential to survival, regardless of individual abilities. Theories to explain the apparent egalitarianism have arisen, notably the Marxist concept of primitive communism.[57][58] Christopher Boehm (1999) has hypothesized that egalitarianism may have evolved in Paleolithic societies because of a need to distribute resources such as food and meat equally to avoid famine and ensure a stable food supply.[59] Raymond C. Kelly speculates that the relative peacefulness of Middle and Upper Paleolithic societies resulted from a low population density, from cooperative relationships between groups (such as reciprocal exchange of commodities and collaboration on hunting expeditions), and from the fact that projectile weapons such as throwing spears provided less incentive for war, since they increased the damage done to the attacker and decreased the relative amount of territory attackers could gain.[55] However, other sources claim that most Paleolithic groups may have been larger, more complex, sedentary, and warlike than most contemporary hunter-gatherer societies, because they occupied more resource-abundant areas than most modern hunter-gatherers, who have been pushed into more marginal habitats by agricultural societies.[60]
Anthropologists have typically assumed that in Paleolithic societies, women were responsible for gathering wild plants and firewood, and men were responsible for hunting and scavenging dead animals.[3][38] However, analogies to existing hunter-gatherer societies such as the Hadza people and the Aboriginal Australians suggest that the sexual division of labor in the Paleolithic was relatively flexible. Men may have participated in gathering plants, firewood and insects, and women may have procured small game animals for consumption and assisted men in driving herds of large game animals (such as woolly mammoths and deer) off cliffs.[38][54] Additionally, recent research by the anthropologist and archaeologist Steven Kuhn of the University of Arizona has been argued to support the idea that this division of labor did not exist prior to the Upper Paleolithic and was invented relatively recently in human prehistory.[61][62] The sexual division of labor may have developed to allow humans to acquire food and other resources more efficiently.[62] Possibly there was approximate parity between men and women during the Middle and Upper Paleolithic, and that period may have been the most gender-equal time in human history.[53][63][64] Archaeological evidence from art and funerary rituals indicates that a number of individual women enjoyed seemingly high status in their communities, and it is likely that both sexes participated in decision making.[64] The earliest known Paleolithic shaman (c. 30,000 BP) was female.[65] Jared Diamond suggests that the status of women declined with the adoption of agriculture because women in farming societies typically have more pregnancies and are expected to do more demanding work than women in hunter-gatherer societies.[66] Like most contemporary hunter-gatherer societies, Paleolithic and Mesolithic groups probably followed mostly matrilineal and ambilineal descent patterns; patrilineal descent patterns were probably rarer than in the Neolithic.[34][46]
Early examples of artistic expression, such as the Venus of Tan-Tan and the patterns found on elephant bones from Bilzingsleben in Thuringia, may have been produced by Acheulean tool users such as Homo erectus prior to the start of the Middle Paleolithic period. However, the earliest undisputed evidence of art during the Paleolithic comes from Middle Paleolithic/Middle Stone Age sites such as Blombos Cave in South Africa, in the form of bracelets,[67] beads,[68] rock art,[51] and ochre used as body paint and perhaps in ritual.[38][51] Undisputed evidence of art only becomes common in the Upper Paleolithic.[69]
Lower Paleolithic Acheulean tool users, according to Robert G. Bednarik, began to engage in symbolic behavior such as art around 850,000 BP. They decorated themselves with beads and collected exotic stones for their aesthetic rather than utilitarian qualities.[70] According to him, traces of the pigment ochre from late Lower Paleolithic Acheulean archaeological sites suggest that Acheulean societies, like later Upper Paleolithic societies, collected and used ochre to create rock art.[70] Nevertheless, it is also possible that the ochre traces found at Lower Paleolithic sites are naturally occurring.[71]
Upper Paleolithic humans produced works of art such as cave paintings, Venus figurines, animal carvings, and rock paintings.[72] Upper Paleolithic art can be divided into two broad categories: figurative art, such as cave paintings that clearly depict animals (or, more rarely, humans); and nonfigurative art, which consists of shapes and symbols.[72] Cave paintings have been interpreted in a number of ways by modern archaeologists. The earliest explanation, by the prehistorian Abbé Breuil, interpreted the paintings as a form of magic designed to ensure a successful hunt.[73] However, this hypothesis fails to explain the existence of animals such as saber-toothed cats and lions, which were not hunted for food, and the existence of half-human, half-animal beings in cave paintings. The anthropologist David Lewis-Williams has suggested that Paleolithic cave paintings were indications of shamanistic practices, because the depictions of half-human, half-animal beings and the remoteness of the caves are reminiscent of modern hunter-gatherer shamanistic practices.[73] Symbol-like images are more common in Paleolithic cave paintings than are depictions of animals or humans, and unique symbolic patterns might have been trademarks that represented different Upper Paleolithic ethnic groups.[72] Venus figurines have evoked similar controversy. Archaeologists and anthropologists have described the figurines as representations of goddesses, pornographic imagery, apotropaic amulets used for sympathetic magic, and even as self-portraits of the women themselves.[38][74]
R. Dale Guthrie[75] has studied not only the most artistic and publicized paintings, but also a variety of lower-quality art and figurines, and he identifies a wide range of skill and ages among the artists. He also points out that the main themes in the paintings and other artifacts (powerful beasts, risky hunting scenes and the oversexualized representation of women) are to be expected in the fantasies of adolescent males during the Upper Paleolithic.
The "Venus" figurines have been theorized, not universally, as representing a mother goddess; the abundance of such female imagery has inspired the theory that religion and society in Paleolithic (and later Neolithic) cultures were primarily interested in, and may have been directed by, women. Adherents of the theory include archaeologist Marija Gimbutas and feminist scholar Merlin Stone, the author of the 1976 book When God Was a Woman.[76][77] Other explanations for the purpose of the figurines have been proposed, such as Catherine McCoid and LeRoy McDermott's hypothesis that they were self-portraits of woman artists[74] and R.Dale Gutrie's hypothesis that served as "stone age pornography".
The origins of music during the Paleolithic are unknown. The earliest forms of music probably did not use musical instruments other than the human voice or natural objects such as rocks. This early music would not have left an archaeological footprint. Music may have developed from rhythmic sounds produced by daily chores, for example, cracking open nuts with stones. Maintaining a rhythm while working may have helped people to become more efficient at daily activities.[78] An alternative theory originally proposed by Charles Darwin explains that music may have begun as a hominin mating strategy. Bird and other animal species produce music such as calls to attract mates.[79] This hypothesis is generally less accepted than the previous hypothesis, but nonetheless provides a possible alternative.
Upper Paleolithic (and possibly Middle Paleolithic)[80] humans used flute-like bone pipes as musical instruments,[38][81] and music may have played a large role in the religious lives of Upper Paleolithic hunter-gatherers. As with modern hunter-gatherer societies, music may have been used in ritual or to help induce trances. In particular, it appears that animal skin drums may have been used in religious events by Upper Paleolithic shamans, as shown by the remains of drum-like instruments from some Upper Paleolithic graves of shamans and the ethnographic record of contemporary hunter-gatherer shamanic and ritual practices.[65][72]
According to James B. Harrod, humankind first developed religious and spiritual beliefs during the Middle Paleolithic or Upper Paleolithic.[82] Controversial scholars of prehistoric religion and anthropology James Harrod and Vincent W. Fallio have recently proposed that religion and spirituality (and art) may have first arisen in pre-Paleolithic chimpanzees[83] or Early Lower Paleolithic (Oldowan) societies.[84][85] According to Fallio, the common ancestor of chimpanzees and humans experienced altered states of consciousness and partook in ritual, and ritual was used in their societies to strengthen social bonding and group cohesion.[84]
Middle Paleolithic humans' use of burials at sites such as Krapina, Croatia (c. 130,000 BP) and Qafzeh, Israel (c. 100,000 BP) has led some anthropologists and archaeologists, such as Philip Lieberman, to believe that Middle Paleolithic humans may have possessed a belief in an afterlife and a "concern for the dead that transcends daily life".[5] Cut marks on Neanderthal bones from various sites, such as Combe-Grenal and Abri Moula in France, suggest that the Neanderthals—like some contemporary human cultures—may have practiced ritual defleshing for (presumably) religious reasons. According to recent archaeological findings from Homo heidelbergensis sites in Atapuerca, humans may have begun burying their dead much earlier, during the late Lower Paleolithic, but this theory is widely questioned in the scientific community.
Likewise, some scientists have proposed that Middle Paleolithic societies such as Neanderthal societies may also have practiced the earliest form of totemism or animal worship, in addition to their (presumably religious) burial of the dead. In particular, Emil Bächler suggested (based on archaeological evidence from Middle Paleolithic caves) that a bear cult was widespread among Middle Paleolithic Neanderthals.[86] A claim that evidence of Middle Paleolithic animal worship c. 70,000 BCE was found in the Tsodilo Hills of the African Kalahari Desert has been denied by the original investigators of the site.[87] Animal cults in the Upper Paleolithic, such as the bear cult, may have had their origins in these hypothetical Middle Paleolithic animal cults.[88] Animal worship during the Upper Paleolithic was intertwined with hunting rites.[88] For instance, archaeological evidence from art and bear remains reveals that the bear cult apparently involved a type of sacrificial bear ceremonialism, in which a bear was shot with arrows, finished off by a shot or thrust in the lungs, and ritually worshipped near a clay bear statue covered by a bear fur, with the skull and the body of the bear buried separately.[88] Barbara Ehrenreich controversially theorizes that the sacrificial hunting rites of the Upper Paleolithic (and, by extension, Paleolithic cooperative big-game hunting) gave rise to war or warlike raiding during the following Epipaleolithic and Mesolithic, or late Upper Paleolithic.[54]
The existence of anthropomorphic images and half-human, half-animal images in the Upper Paleolithic may further indicate that Upper Paleolithic humans were the first people to believe in a pantheon of gods or supernatural beings,[89] though such images may instead indicate shamanistic practices similar to those of contemporary tribal societies.[73] The earliest known undisputed burial of a shaman (and by extension the earliest undisputed evidence of shamans and shamanic practices) dates back to the early Upper Paleolithic era (c. 30,000 BP) in what is now the Czech Republic.[65] However, during the early Upper Paleolithic it was probably more common for all members of the band to participate equally and fully in religious ceremonies, in contrast to the religious traditions of later periods when religious authorities and part-time ritual specialists such as shamans, priests and medicine men were relatively common and integral to religious life.[22] Additionally, it is also possible that Upper Paleolithic religions, like contemporary and historical animistic and polytheistic religions, believed in the existence of a single creator deity in addition to other supernatural beings such as animistic spirits.[90]
Religion was possibly apotropaic; specifically, it may have involved sympathetic magic.[38] The Venus figurines, which are abundant in the Upper Paleolithic archaeological record, provide an example of possible Paleolithic sympathetic magic, as they may have been used for ensuring success in hunting and to bring about fertility of the land and women.[3] The Upper Paleolithic Venus figurines have sometimes been explained as depictions of an earth goddess similar to Gaia, or as representations of a goddess who is the ruler or mother of the animals.[88][91] James Harrod has described them as representative of female (and male) shamanistic spiritual transformation processes.[92]
Paleolithic hunting and gathering people ate varying proportions of vegetables (including tubers and roots), fruit, seeds (including nuts and wild grass seeds), insects, meat, fish, and shellfish.[94][95] However, there is little direct evidence of the relative proportions of plant and animal foods.[96] Although the term "paleolithic diet", without references to a specific timeframe or locale, is sometimes used with an implication that most humans shared a certain diet during the entire era, that is not entirely accurate. The Paleolithic was an extended period of time, during which multiple technological advances were made, many of which had an impact on human dietary structure. For example, humans probably did not possess control of fire until the Middle Paleolithic,[97] or the tools necessary to engage in extensive fishing.[citation needed] On the other hand, both these technologies are generally agreed to have been widely available to humans by the end of the Paleolithic (consequently allowing humans in some regions of the planet to rely heavily on fishing and hunting). In addition, the Paleolithic involved a substantial geographical expansion of human populations. During the Lower Paleolithic, ancestors of modern humans are thought to have been constrained to Africa east of the Great Rift Valley. During the Middle and Upper Paleolithic, humans greatly expanded their area of settlement, reaching ecosystems as diverse as New Guinea and Alaska and adapting their diets to whatever local resources were available.
Another view is that until the Upper Paleolithic, humans were frugivores (fruit eaters) who supplemented their meals with carrion, eggs, and small prey such as baby birds and mussels, and only on rare occasions managed to kill and consume big game such as antelopes.[98] This view is supported by studies of higher apes, particularly chimpanzees. Chimpanzees are the closest to humans genetically, sharing more than 96% of their DNA code with humans, and their digestive tract is functionally very similar to that of humans.[99] Chimpanzees are primarily frugivores, but they can and will consume and digest animal flesh, given the opportunity. In general, their actual diet in the wild is about 95% plant-based, with the remaining 5% filled with insects, eggs, and baby animals.[100][101] In some ecosystems, however, chimpanzees are predatory, forming parties to hunt monkeys.[102] Some comparative studies of human and higher primate digestive tracts do suggest that humans have evolved to obtain greater amounts of calories from sources such as animal foods, allowing them to shrink the size of the gastrointestinal tract relative to body mass and to increase the brain mass instead.[103][104]
Anthropologists have diverse opinions about the proportions of plant and animal foods consumed. Just as with still-existing hunters and gatherers, there were many varied "diets" in different groups, also varying through this vast span of time. Some Paleolithic hunter-gatherers consumed a significant amount of meat and possibly obtained most of their food from hunting,[105] while others are thought to have had a primarily plant-based diet.[61] Most, if not all, are believed to have been opportunistic omnivores.[106] One hypothesis is that carbohydrate tubers (plant underground storage organs) may have been eaten in high amounts by pre-agricultural humans.[107][108][109][110] It is thought that the Paleolithic diet included as much as 1.65–1.9 kg (3.6–4.2 lb) per day of fruit and vegetables.[111] The relative proportions of plant and animal foods in the diets of Paleolithic people often varied between regions, with more meat being necessary in colder regions (which were not populated by anatomically modern humans until c. 30,000 – c. 50,000 BP).[112] It is generally agreed that many modern hunting and fishing tools, such as fish hooks, nets, bows, and poisons, were not introduced until the Upper Paleolithic and possibly even the Neolithic.[34] The only hunting tools widely available to humans during any significant part of the Paleolithic were hand-held spears and harpoons. There is evidence of Paleolithic people killing and eating seals and elands as far back as c. 100,000 BP. On the other hand, buffalo bones found in African caves from the same period are typically of very young or very old individuals, and there is no evidence that pigs, elephants, or rhinos were hunted by humans at the time.[113]
Paleolithic peoples suffered less famine and malnutrition than the Neolithic farming tribes that followed them.[21][114] This was partly because Paleolithic hunter-gatherers accessed a wider variety of natural foods, which allowed them a more nutritious diet and a decreased risk of famine.[21][23][66] Many of the famines experienced by Neolithic (and some modern) farmers were caused or amplified by their dependence on a small number of crops.[21][23][66] It is thought that wild foods can have a significantly different nutritional profile than cultivated foods.[115] The greater amount of meat obtained by hunting big game animals in Paleolithic diets than Neolithic diets may have also allowed Paleolithic hunter-gatherers to enjoy a more nutritious diet than Neolithic agriculturalists.[114] It has been argued that the shift from hunting and gathering to agriculture resulted in an increasing focus on a limited variety of foods, with meat likely taking a back seat to plants.[116] It is also unlikely that Paleolithic hunter-gatherers were affected by modern diseases of affluence such as type 2 diabetes, coronary heart disease, and cerebrovascular disease, because they ate mostly lean meats and plants and frequently engaged in intense physical activity,[117][118] and because the average lifespan was shorter than the age of common onset of these conditions.[119][120]
Large-seeded legumes were part of the human diet long before the Neolithic Revolution, as evident from archaeobotanical finds from the Mousterian layers of Kebara Cave, in Israel.[121] There is evidence suggesting that Paleolithic societies were gathering wild cereals for food use at least as early as 30,000 years ago.[122] However, seeds—such as grains and beans—were rarely eaten and never in large quantities on a daily basis.[123] Recent archaeological evidence also indicates that winemaking may have originated in the Paleolithic, when early humans drank the juice of naturally fermented wild grapes from animal-skin pouches.[93] Paleolithic humans consumed animal organ meats, including the livers, kidneys, and brains. Upper Paleolithic cultures appear to have had significant knowledge about plants and herbs and may have, albeit very rarely, practiced rudimentary forms of horticulture.[124] In particular, bananas and tubers may have been cultivated as early as 25,000 BP in southeast Asia.[60] Late Upper Paleolithic societies also appear to have occasionally practiced pastoralism and animal husbandry, presumably for dietary reasons. For instance, some European late Upper Paleolithic cultures domesticated and raised reindeer, presumably for their meat or milk, as early as 14,000 BP.[44] Humans also probably consumed hallucinogenic plants during the Paleolithic.[3] The Aboriginal Australians have been consuming a variety of native animal and plant foods, called bushfood, for an estimated 60,000 years, since the Middle Paleolithic.
In February 2019, scientists reported evidence, based on isotope studies, that at least some Neanderthals may have eaten meat.[125][126][127] People during the Middle Paleolithic, such as the Neanderthals and Middle Paleolithic Homo sapiens in Africa, began to catch shellfish for food, as revealed by shellfish cooking at Neanderthal sites in Italy about 110,000 years ago and at Middle Paleolithic Homo sapiens sites at Pinnacle Point, Africa, around 164,000 BP.[38][128] Although fishing only became common during the Upper Paleolithic,[38][129] fish have been part of human diets long before the dawn of the Upper Paleolithic and have certainly been consumed by humans since at least the Middle Paleolithic.[47] For example, Middle Paleolithic Homo sapiens in the region now occupied by the Democratic Republic of the Congo hunted large, 6 ft (1.8 m)-long catfish with specialized barbed fishing points as early as 90,000 years ago.[38][47] The invention of fishing allowed some Upper Paleolithic and later hunter-gatherer societies to become sedentary or semi-nomadic, which altered their social structures.[81] Example societies are the Lepenski Vir as well as some contemporary hunter-gatherers, such as the Tlingit. In some instances (at least among the Tlingit), they developed social stratification, slavery, and complex social structures such as chiefdoms.[34]
Anthropologists such as Tim White suggest that cannibalism was common in human societies prior to the beginning of the Upper Paleolithic, based on the large amount of "butchered human" bones found in Neanderthal and other Lower/Middle Paleolithic sites.[130] Cannibalism in the Lower and Middle Paleolithic may have occurred because of food shortages.[131] However, it may also have been done for religious reasons, which would coincide with the development of religious practices thought to have occurred during the Upper Paleolithic.[88][132] Nonetheless, it remains possible that Paleolithic societies never practiced cannibalism, and that the damage to recovered human bones was either the result of excarnation or predation by carnivores such as saber-toothed cats, lions, and hyenas.[88]
A modern-day diet known as the Paleolithic diet exists, based on restricting consumption to the foods presumed to be available to anatomically modern humans prior to the advent of settled agriculture.[133]
en/2604.html.txt
ADDED
@@ -0,0 +1,23 @@
A politician is a person active in party politics, or a person holding or seeking an office in government. Politicians propose, support and create laws or policies that govern the land and, by extension, its people. Broadly speaking, a "politician" can be anyone who seeks to achieve political power in any bureaucratic institution.
Politicians are people who are politically active, especially in party politics. Positions range from local offices to executive, legislative, and judicial offices of regional and national governments.[1][2] Some elected law enforcement officers, such as sheriffs, are considered politicians.[3][4]
Politicians are known for their rhetoric, as in speeches or campaign advertisements. They are especially known for using common themes that allow them to develop their political positions in terms familiar to the voters.[5] Politicians of necessity become expert users of the media.[6] Politicians in the 19th century made heavy use of newspapers, magazines, and pamphlets, as well as posters.[7] In the 20th century, they branched into radio and television, making television commercials the single most expensive part of an election campaign.[8] In the 21st century, they have become increasingly involved with social media based on the Internet and smartphones.[9]
Rumor has always played a major role in politics, with negative rumors about an opponent typically more effective than positive rumors about one's own side.[10]
Once elected, the politician becomes a government official and has to deal with a permanent bureaucracy of non-politicians. Historically, there has been a subtle conflict between the long-term goals of each side.[11] In patronage-based systems, such as the United States and Canada in the 19th century, winning politicians replaced the bureaucracy with local politicians who formed their base of support: the "spoils system". Civil service reform was initiated to eliminate the corruption of the government services involved.[12] However, in many less developed countries, the spoils system is in full-scale operation today.[13]
Mattozzi and Merlo argue that two main career paths are typically followed by politicians in modern democracies. First come career politicians, who work in the political sector until retirement. Second are the "political careerists", politicians who gain a reputation for expertise in controlling certain bureaucracies and then leave politics for a well-paid career in the private sector, making use of their political contacts.[14]
The personal histories of politicians have been frequently studied, as it is presumed that their experiences and characteristics shape their beliefs and behaviors. There are four pathways by which a politician's biography could influence their leadership style and abilities. The first is that biography may influence one's core beliefs, which are used to shape a worldview. The second is that politicians' skills and competence are influenced by personal experience. The areas of skill and competence can define where they devote resources and attention as a leader. The third pathway is that biographical attributes may define and shape political incentives. A leader's previous profession, for example, could be viewed as of higher importance, causing a disproportionate investment of leadership resources to ensure the growth and health of that profession, including former colleagues. Other examples besides profession include the politician's innate characteristics, such as race or gender. The fourth pathway is how a politician's biography affects their public perception, which can, in turn, affect their leadership style. Female politicians, for example, may use different strategies to attract the same level of respect given to male politicians.[15]
Numerous scholars have studied the characteristics of politicians, comparing those at the local and national levels, and comparing the more liberal or the more conservative ones, and comparing the more successful and less successful in terms of elections.[16] In recent years, special attention has focused on the distinctive career path of women politicians.[17] For example, there are studies of the "Supermadre" model in Latin American politics.[18]
Many politicians have a knack for remembering thousands of names and faces and recalling personal anecdotes about their constituents—it is an advantage in the job, rather like being seven feet tall for a basketball player. United States Presidents George W. Bush and Bill Clinton were renowned for their memories.[19][20]
Many critics attack politicians for being out of touch with the public. Areas of friction include the manner in which politicians speak, which has been described as being overly formal and filled with many euphemistic and metaphorical expressions and commonly perceived as an attempt to "obscure, mislead, and confuse".[21]
In the popular image, politicians are thought of as clueless, selfish, incompetent and corrupt, taking money in exchange for goods or services, rather than working for the general public good.[22] Politicians in many countries are regarded as the "most hated professionals".[23]
en/2606.html.txt
ADDED
@@ -0,0 +1,31 @@
Australopithecina or Hominina is a subtribe in the tribe Hominini. The members of the subtribe are generally Australopithecus (cladistically including the genera Homo, Paranthropus,[2] and Kenyanthropus), and it typically includes the earlier Ardipithecus, Orrorin, Sahelanthropus, and Graecopithecus. All these related species are now sometimes collectively termed australopithecines or homininians.[3][4] They are the extinct, close relatives of humans and, with the extant genus Homo, comprise the human clade. Members of the human clade, i.e. the Hominini after the split from the chimpanzees, are now called Hominina[5] (see Hominidae; terms "hominids" and hominins).
While none of the taxa normally assigned directly to this group survive, the australopithecines do not appear to be literally extinct (in the sense of having no living descendants), as the genera Kenyanthropus, Paranthropus and Homo probably emerged as sister taxa of a late Australopithecus species such as A. africanus and/or A. sediba.
The term australopithecine and its variants come from a former classification of these species as members of a distinct subfamily, the Australopithecinae.[6] Members of Australopithecus are sometimes referred to as the "gracile australopithecines", while Paranthropus are called the "robust australopithecines".[7][8]
The australopithecines occurred in the Plio-Pleistocene and were bipedal. They were dentally similar to humans, but with a brain size not much larger than that of modern apes and with lesser encephalization than in the genus Homo.[9] Humans (genus Homo) may have descended from australopithecine ancestors, and the genera Ardipithecus, Orrorin, Sahelanthropus, and Graecopithecus are possible ancestors of the australopithecines.[8]
Phylogeny of subtribe Australopithecina according to Briggs & Crowther 2008, p. 124.
The post-cranial remains of australopithecines show they were adapted to bipedal locomotion, but did not walk identically to humans. They have a high brachial index (forearm/upper arm ratio) when compared to other hominins, and they exhibit greater sexual dimorphism than members of Homo or Pan but less so than Gorilla or Pongo. It is thought that they averaged heights of 1.2–1.5 metres (3.9–4.9 ft) and weighed between 30 and 55 kilograms (66 and 121 lb). The brain size may have been 350 cc to 600 cc. The postcanines (the teeth behind the canines) were relatively large, and had more enamel compared to contemporary apes and humans, whereas the incisors and canines were relatively small, and there was little difference between the males' and females' canines compared to modern apes.[8]
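For reference, the brachial index mentioned above is conventionally expressed as a percentage of radius (forearm) length to humerus (upper-arm) length; this formula is standard anthropometric practice rather than something stated in the source, and the sample lengths below are illustrative assumptions, not measurements:

\[ \text{brachial index} = \frac{\text{length of radius}}{\text{length of humerus}} \times 100 \]

For example, a 24 cm radius paired with a 30 cm humerus gives (24 / 30) × 100 = 80; higher values indicate relatively longer forearms, the condition described as "high" for australopithecines compared with other hominins.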
Most scientists maintain that one of the australopithecine species evolved into the genus Homo in Africa around two million years ago. However, there is no consensus on which species:
"Determining which species of australopithecine (if any) is ancestral to the genus Homo is a question that is a top priority for many paleoanthropologists, but one that will likely elude any conclusive answers for years to come. Nearly every possible species has been suggested as a likely candidate, but none are overwhelmingly convincing. Presently, it appears that A. garhi has the potential to occupy this coveted place in paleoanthropology, but the lack of fossil evidence is a serious problem. Another problem presents itself in the fact that it has been very difficult to assess which hominid [now "hominin"] represents the first member of the genus Homo. Without knowing this, it is not possible to determine which species of australopithecine may have been ancestral to Homo."[8]
Marc Verhaegen has argued that an australopithecine species could have also been ancestral to the genus Pan (i.e. chimpanzees).[10]
A minority-held viewpoint among palaeoanthropologists is that australopithecines moved outside Africa. A notable proponent of this theory is Jens Lorenz Franzen, formerly Head of Paleoanthropology at the Research Institute Senckenberg. Franzen argues that robust australopithecines had reached not only Indonesia, as Meganthropus, but also China:
"In this way we arrive at the conclusion that the recognition of australopithecines in Asia would not confuse but could help to clarify the early evolution of hominids ["hominins"] on that continent. This concept would explain the scanty remains from Java and China as relic of an Asian offshoot of an early radiation of Australopithecus, which was followed much later by an [African] immigration of Homo erectus, and finally became extinct after a period of coexistence."[11]
In 1957, an Early Pleistocene Chinese fossil tooth of unknown provenance was described as resembling P. robustus. Three fossilized molars from Jianshi, China (Longgudong Cave) were later identified as belonging to an Australopithecus species (Gao, 1975). However, further examination questioned this interpretation: Zhang (1984) argued that the Jianshi teeth and the unidentified tooth belong to H. erectus. Liu et al. (2010) also dispute the Jianshi–australopithecine link and argue that the Jianshi molars fall within the range of Homo erectus:
"No marked difference in dental crown shape is shown between the Jianshi hominin and other Chinese Homo erectus, and there is also no evidence in support of the Jianshi hominin's closeness to Australopithecus."
But Wolpoff (1999) notes that in China "persistent claims of australopithecine or australopithecine-like remains continue".
en/2609.html.txt
ADDED
@@ -0,0 +1,27 @@
In linguistics, homonyms, broadly defined, are words which are homographs (words that share the same spelling, regardless of pronunciation) or homophones (words that share the same pronunciation, regardless of spelling), or both.[1] For example, according to this definition, the words row (propel with oars), row (argument) and row (a linear arrangement) are homonyms, as are the words see (vision) and sea (body of water).

A more restrictive or technical definition sees homonyms as words that are simultaneously homographs and homophones[1] – that is to say they have identical pronunciation and spelling, whilst maintaining different meanings. Examples are the pair stalk (part of a plant) and stalk (follow/harass a person) and the pair left (past tense of leave) and left (opposite of right).
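These two definitions amount to simple boolean tests over spelling and pronunciation. The following Python sketch, written for this text, illustrates the classification; the tiny word list and its simplified one-pronunciation-per-entry transcriptions are illustrative assumptions, not data from this article:

from itertools import combinations

# Each entry: (spelling, simplified pronunciation, meaning).
WORDS = [
    ("row",   "roh",   "propel with oars"),
    ("row",   "rau",   "argument"),
    ("see",   "see",   "vision"),
    ("sea",   "see",   "body of water"),
    ("stalk", "stawk", "part of a plant"),
    ("stalk", "stawk", "follow/harass a person"),
]

def relation(a, b):
    homograph = a[0] == b[0]   # same spelling
    homophone = a[1] == b[1]   # same pronunciation
    if homograph and homophone:
        return "homonym (strict sense)"
    if homograph or homophone:
        return "homonym (broad sense only)"
    return "not homonymous"

for a, b in combinations(WORDS, 2):
    print(a[0], "/", b[0], "->", relation(a, b))

Running it, row/row and see/sea come out as broad-sense homonyms only (homograph and homophone, respectively), while stalk/stalk is a homonym in the strict sense.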
A distinction is sometimes made between true homonyms, which are unrelated in origin, such as skate (glide on ice) and skate (the fish), and polysemous homonyms, or polysemes, which have a shared origin, such as mouth (of a river) and mouth (of an animal).[2][3]

The relationship between a set of homonyms is called homonymy, and the associated adjective is homonymous.

The adjective "homonymous" can additionally be used wherever two items share the same name,[4][5][clarification needed] independent of how closely they are or are not related in terms of their meaning or etymology.

The word homonym comes from the Greek ὁμώνυμος (homonymos), meaning "having the same name",[6] which is the conjunction of ὁμός (homos), "common, same, similar"[7] and ὄνομα (onoma), meaning "name".[8] Thus, it refers to two or more distinct concepts sharing the "same name" or signifier. Note: for the h sound, see rough breathing and smooth breathing.

Several similar linguistic concepts are related to homonymy. These include:

A further example of a homonym, which is both a homophone and a homograph, is fluke. Fluke can mean:

These meanings represent at least three etymologically separate lexemes, but share the one form, fluke.[11] Note that fluke is also a capitonym, in that Fluke Corporation (commonly referred to as simply "Fluke") is a manufacturer of industrial testing equipment.

Similarly, a river bank, a savings bank, a bank of switches, and a bank shot in the game of pool share a common spelling and pronunciation, but differ in meaning.

The words bow and bough are examples where there are two meanings associated with a single pronunciation and spelling (the weapon and the knot); two meanings with two different pronunciations (the knot and the act of bending at the waist); and two distinct meanings sharing the same sound but different spellings (bow, the act of bending at the waist, and bough, the branch of a tree). In addition, bow has several related but distinct meanings – a bent line is sometimes called a 'bowed' line, reflecting its similarity to the weapon. Even according to the most restrictive definitions, various pairs of sounds and meanings of bow, Bow and bough are homonyms, homographs, homophones, heteronyms, heterographs, capitonyms and are polysemous.

The words there, their, and they're are examples of three words with a single pronunciation, different spellings, and vastly different meanings; they are commonly confused in writing.

Homonymy can lead to communicative conflicts and thus trigger lexical (onomasiological) change.[12] This is known as homonymic conflict.
en/261.html.txt
ADDED
@@ -0,0 +1,142 @@
A year is the orbital period of a planetary body, for example, the Earth, moving in its orbit around the Sun. Due to the Earth's axial tilt, the course of a year sees the passing of the seasons, marked by change in weather, the hours of daylight, and, consequently, vegetation and soil fertility. In temperate and subpolar regions around the planet, four seasons are generally recognized: spring, summer, autumn and winter. In tropical and subtropical regions, several geographical sectors do not present defined seasons; but in the seasonal tropics, the annual wet and dry seasons are recognized and tracked.

A calendar year is an approximation of the number of days of the Earth's orbital period, as counted in a given calendar. The Gregorian calendar, or modern calendar, counts its calendar year as either a common year of 365 days or a leap year of 366 days, as does the Julian calendar; see below. For the Gregorian calendar, the average length of the calendar year (the mean year) across the complete leap cycle of 400 years is 365.2425 days. The ISO standard ISO 80000-3, Annex C, supports the symbol a (for Latin annus) to represent a year of either 365 or 366 days. In English, the abbreviations y and yr are commonly used.

In astronomy, the Julian year is a unit of time; it is defined as 365.25 days of exactly 86,400 seconds (SI base unit), totalling exactly 31,557,600 seconds in the Julian astronomical year.[1]

The word year is also used for periods loosely associated with, but not identical to, the calendar or astronomical year, such as the seasonal year, the fiscal year, the academic year, etc. Similarly, year can mean the orbital period of any planet; for example, a Martian year and a Venusian year are examples of the time a planet takes to transit one complete orbit. The term can also be used in reference to any long period or cycle, such as the Great Year.[2]

English year (via West Saxon ġēar (/jɛar/), Anglian ġēr) continues Proto-Germanic *jǣran (*jē₁ran). Cognates are German Jahr, Old High German jār, Old Norse ár and Gothic jer, from the Proto-Indo-European noun *yeh₁r-om "year, season". Cognates also descended from the same Proto-Indo-European noun (with variation in suffix ablaut) are Avestan yārǝ "year", Greek ὥρα (hṓra) "year, season, period of time" (whence "hour"), Old Church Slavonic jarŭ, and Latin hornus "of this year".

Latin annus (a 2nd-declension masculine noun; annum is the accusative singular; annī is genitive singular and nominative plural; annō the dative and ablative singular) is from a PIE noun *h₂et-no-, which also yielded Gothic aþn "year" (only the dative plural aþnam is attested).

Although most languages treat the word as thematic *yeh₁r-o-, there is evidence for an original derivation with an *-r/n suffix, *yeh₁-ro-. Both Indo-European words for year, *yeh₁-ro- and *h₂et-no-, would then be derived from verbal roots meaning "to go, move", *h₁ey- and *h₂et-, respectively (compare Vedic Sanskrit éti "goes", atasi "thou goest, wanderest").

A number of English words are derived from Latin annus, such as annual, annuity, anniversary, etc.; per annum means "each year", anno Domini means "in the year of the Lord".

The Greek word for "year", ἔτος, is cognate with Latin vetus "old", from the PIE word *wetos- "year", also preserved in this meaning in Sanskrit vat-sa-ras "year" and vat-sa- "yearling (calf)", the latter also reflected in Latin vitulus "bull calf", English wether "ram" (Old English weðer, Gothic wiþrus "lamb").

In some languages, it is common to count years by referencing one season, as in "summers", "winters", or "harvests". Examples include Chinese 年 "year", originally 秂, an ideographic compound of a person carrying a bundle of wheat denoting "harvest". Slavic, besides godŭ "time period; year", uses lěto "summer; year".

In the International System of Quantities (ISO 80000-3), the year (symbol, a) is defined as either 365 days or 366 days.

Astronomical years do not have an integer number of days or lunar months. Any calendar that follows an astronomical year must have a system of intercalation such as leap years.

In the Julian calendar, the average (mean) length of a year is 365.25 days. In a non-leap year there are 365 days; in a leap year there are 366 days. A leap year occurs every fourth year, during which a leap day is intercalated into the month of February; the added day is known as "Leap Day".

The Revised Julian calendar, proposed in 1923 and used in some Eastern Orthodox Churches, has 218 leap years every 900 years, for an average (mean) year length of 365.2422222 days, close to the length of the mean tropical year, 365.24219 days (relative error of 9·10⁻⁸). In the year 2800 CE, the Gregorian and Revised Julian calendars will begin to differ by one calendar day.[3]

The Gregorian calendar attempts to cause the northward equinox to fall on or shortly before March 21 and hence it follows the northward equinox year, or tropical year.[4] Because 97 out of 400 years are leap years, the mean length of the Gregorian calendar year is 365.2425 days, with a relative error below one ppm (8·10⁻⁷) relative to the current length of the mean tropical year (365.24219 days) and even closer to the current March equinox year of 365.242374 days that it aims to match. It is estimated that by the year 4000 CE, the northward equinox will fall back by one day in the Gregorian calendar, not because of this difference, but due to the slowing of the Earth's rotation and the associated lengthening of the day.
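The three leap rules just described are easy to compare directly. The following Python sketch was written for this text (the function names are ours, not from any standard library); it implements each rule and recovers the mean year lengths quoted above:

def is_leap_julian(year):
    # Julian rule: every fourth year is a leap year.
    return year % 4 == 0

def is_leap_gregorian(year):
    # Gregorian rule: every fourth year, except centurial years
    # not divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def is_leap_revised_julian(year):
    # Revised Julian rule: every fourth year, except centurial years
    # whose remainder modulo 900 is neither 200 nor 600.
    if year % 100 == 0:
        return year % 900 in (200, 600)
    return year % 4 == 0

def mean_year(rule, cycle_length):
    # Count leap days over one full cycle and average them out.
    leap_days = sum(rule(y) for y in range(cycle_length))
    return 365 + leap_days / cycle_length

print(mean_year(is_leap_julian, 4))            # 365.25
print(mean_year(is_leap_gregorian, 400))       # 365.2425 (97 leap years)
print(mean_year(is_leap_revised_julian, 900))  # 365.24222... (218 leap years)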
Historically, lunisolar calendars intercalated entire leap months on an observational basis. Lunisolar calendars have mostly fallen out of use except for liturgical reasons (Hebrew calendar, various Hindu calendars).

A modern adaptation of the historical Jalali calendar, known as the Solar Hijri calendar (1925), is a purely solar calendar with an irregular pattern of leap days based on observation (or astronomical computation), aiming to place New Year (Nowruz) on the day of the vernal equinox (for the time zone of Tehran), as opposed to using an algorithmic system of leap years.

A calendar era assigns a cardinal number to each sequential year, using a reference point in the past as the beginning of the era.

The worldwide standard is the Anno Domini era, although some prefer the term Common Era because it has no explicit reference to Christianity. It was introduced in the 6th century and was intended to count years from the nativity of Jesus.[5]

The Anno Domini era is given the Latin abbreviation AD (for Anno Domini, "in the year of the Lord"), or alternatively CE for "Common Era". Years before AD 1 are abbreviated BC for Before Christ or alternatively BCE for Before the Common Era. Year numbers are based on inclusive counting, so that there is no "year zero". In the modern alternative reckoning of astronomical year numbering, positive numbers indicate years AD, the number 0 designates 1 BC, −1 designates 2 BC, and so on.

Financial and scientific calculations often use a 365-day calendar to simplify daily rates.

A fiscal year or financial year is a 12-month period used for calculating annual financial statements in businesses and other organizations. In many jurisdictions, regulations regarding accounting require such reports once per twelve months, but do not require that the twelve months constitute a calendar year.

For example, in Canada and India the fiscal year runs from April 1; in the United Kingdom it runs from April 1 for purposes of corporation tax and government financial statements, but from April 6 for purposes of personal taxation and payment of state benefits; in Australia it runs from July 1; while in the United States the fiscal year of the federal government runs from October 1.

An academic year is the annual period during which a student attends an educational institution. The academic year may be divided into academic terms, such as semesters or quarters. The school year in many countries starts in August or September and ends in May, June or July. In Israel the academic year begins around October or November, aligned with the second month of the Hebrew calendar.

Some schools in the UK and USA divide the academic year into three roughly equal-length terms (called trimesters or quarters in the USA), roughly coinciding with autumn, winter, and spring. At some, a shortened summer session, sometimes considered part of the regular academic year, is attended by students on a voluntary or elective basis. Other schools break the year into two main semesters, a first (typically August through December) and a second semester (January through May). Each of these main semesters may be split in half by mid-term exams, and each of the halves is referred to as a quarter (or term in some countries). There may also be a voluntary summer session and/or a short January session.

Some other schools, including some in the United States, have four marking periods. Some schools in the United States, notably Boston Latin School, may divide the year into five or more marking periods. Defenders of this practice argue that there is perhaps a positive correlation between report frequency and academic achievement.

There are typically 180 days of teaching each year in schools in the US, excluding weekends and breaks, while there are 190 days for pupils in state schools in Canada, New Zealand and the United Kingdom, and 200 for pupils in Australia.

In India the academic year normally starts on June 1 and ends on May 31. Though schools start closing from mid-March, the actual academic closure is on May 31. In Nepal the academic year starts on July 15.[citation needed]

Schools and universities in Australia typically have academic years that roughly align with the calendar year (i.e., starting in February or March and ending in October to December), as the Southern Hemisphere experiences summer from December to February.

The Julian year, as used in astronomy and other sciences, is a time unit defined as exactly 365.25 days. This is the normal meaning of the unit "year" (symbol "a" from the Latin annus) used in various scientific contexts. The Julian century of 36525 days and the Julian millennium of 365250 days are used in astronomical calculations. Fundamentally, expressing a time interval in Julian years is a way to precisely specify how many days (not how many "real" years), for long time intervals where stating the number of days would be unwieldy and unintuitive. By convention, the Julian year is used in the computation of the distance covered by a light-year.
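As a quick illustration of that convention, a light-year is the speed of light multiplied by one Julian year of exactly 31,557,600 s. A back-of-the-envelope sketch (the speed of light is an exact defined constant, not a value from this text):

JULIAN_YEAR_S = 365.25 * 86_400   # 31,557,600 s exactly
C_M_PER_S = 299_792_458           # speed of light in m/s (exact by definition)

light_year_m = C_M_PER_S * JULIAN_YEAR_S
print(f"{light_year_m:.6e} m")    # 9.460730e+15 m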
In the Unified Code for Units of Measure, the symbol a (without subscript) always refers to the Julian year, aj, of exactly 31557600 seconds.

The SI multiplier prefixes may be applied to it to form ka (kiloannus), Ma (megaannus), etc.

Each of these three years can be loosely called an astronomical year.

The sidereal year is the time taken for the Earth to complete one revolution of its orbit, as measured against a fixed frame of reference (such as the fixed stars, Latin sidera, singular sidus). Its average duration is 365.256363004 days (365 d 6 h 9 min 9.76 s) (at the epoch J2000.0 = January 1, 2000, 12:00:00 TT).[6]

Today the mean tropical year is defined as the period of time for the mean ecliptic longitude of the Sun to increase by 360 degrees.[7] Since the Sun's ecliptic longitude is measured with respect to the equinox,[8] the tropical year comprises a complete cycle of the seasons; because of the biological and socio-economic importance of the seasons, the tropical year is the basis of most calendars. The modern definition of the mean tropical year differs from the actual time between passages of, e.g., the northward equinox for several reasons explained below. Because of the Earth's axial precession, this year is about 20 minutes shorter than the sidereal year. The mean tropical year is approximately 365 days, 5 hours, 48 minutes, 45 seconds, using the modern definition[9] (= 365.24219 days of 86400 SI seconds).

The anomalistic year is the time taken for the Earth to complete one revolution with respect to its apsides. The orbit of the Earth is elliptical; the extreme points, called apsides, are the perihelion, where the Earth is closest to the Sun (January 5, 07:48 UT in 2020), and the aphelion, where the Earth is farthest from the Sun (July 4, 11:35 UT in 2020). The anomalistic year is usually defined as the time between perihelion passages. Its average duration is 365.259636 days (365 d 6 h 13 min 52.6 s) (at the epoch J2011.0).[10]

The draconic year, draconitic year, eclipse year, or ecliptic year is the time taken for the Sun (as seen from the Earth) to complete one revolution with respect to the same lunar node (a point where the Moon's orbit intersects the ecliptic). The year is associated with eclipses: these occur only when both the Sun and the Moon are near these nodes, so eclipses occur within about a month of every half eclipse year. Hence there are two eclipse seasons every eclipse year. The average duration of the eclipse year is about 346.62 days.

This term is sometimes erroneously used for the draconic or nodal period of lunar precession, that is, the period of a complete revolution of the Moon's ascending node around the ecliptic: 18.612815932 Julian years (6798.331019 days; at the epoch J2000.0).

The full moon cycle is the time for the Sun (as seen from the Earth) to complete one revolution with respect to the perigee of the Moon's orbit. This period is associated with the apparent size of the full moon, and also with the varying duration of the synodic month. The duration of one full moon cycle is about 411.78 days.

The lunar year comprises twelve full cycles of the phases of the Moon, as seen from Earth. It has a duration of approximately 354.37 days. Muslims use this for celebrating their Eids and for marking the start of the fasting month of Ramadan. A Muslim calendar year is based on the lunar cycle.
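That figure is just twelve mean synodic months; a one-line check (the synodic-month length used here is the standard mean value, not a figure from this text):

SYNODIC_MONTH_D = 29.530589      # mean length of one lunation, in days
print(12 * SYNODIC_MONTH_D)      # 354.367068 -> approximately 354.37 days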
The vague year, from annus vagus or wandering year, is an integral approximation to the year equalling 365 days, which wanders in relation to more exact years. Typically the vague year is divided into 12 schematic months of 30 days each plus 5 epagomenal days. The vague year was used in the calendars of Ethiopia, Ancient Egypt, Iran, Armenia and in Mesoamerica among the Aztecs and Maya.[11] It is still used by many Zoroastrian communities.

A heliacal year is the interval between the heliacal risings of a star. It differs from the sidereal year for stars away from the ecliptic due mainly to the precession of the equinoxes.

The Sothic year is the interval between heliacal risings of the star Sirius. It is currently less than the sidereal year and its duration is very close to the Julian year of 365.25 days.

The Gaussian year is the sidereal year for a planet of negligible mass (relative to the Sun) and unperturbed by other planets that is governed by the Gaussian gravitational constant. Such a planet would be slightly closer to the Sun than Earth's mean distance. Its length is about 365.2569 days.

The Besselian year is a tropical year that starts when the (fictitious) mean Sun reaches an ecliptic longitude of 280°. This is currently on or close to January 1. It is named after the 19th-century German astronomer and mathematician Friedrich Bessel. The following equation can be used to compute the current Besselian epoch (in years):[12]

B = 1900.0 + (Julian date_TT − 2415020.31352) / 365.242198781

The TT subscript indicates that for this formula, the Julian date should use the Terrestrial Time scale, or its predecessor, ephemeris time.
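For illustration, the same formula in Python (the helper name is ours), checked against the J2000.0 epoch, which falls at Julian date 2451545.0 TT:

def besselian_epoch(jd_tt):
    # B = 1900.0 + (JD_TT - 2415020.31352) / 365.242198781
    return 1900.0 + (jd_tt - 2415020.31352) / 365.242198781

print(besselian_epoch(2451545.0))   # J2000.0 -> approximately B2000.001278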
The exact length of an astronomical year changes over time.

Mean year lengths in this section are calculated for 2000, and differences in year lengths, compared to 2000, are given for past and future years. In the tables a day is 86,400 SI seconds long.[13][14][15][16]

An average Gregorian year is 365.2425 days (52.1775 weeks, 8765.82 hours, 525949.2 minutes or 31556952 seconds). For this calendar, a common year is 365 days (8760 hours, 525600 minutes or 31536000 seconds), and a leap year is 366 days (8784 hours, 527040 minutes or 31622400 seconds). The 400-year cycle of the Gregorian calendar has 146097 days and hence exactly 20871 weeks.
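The last two figures follow from simple arithmetic over the 400-year cycle; a quick check:

days = 400 * 365 + 97          # 97 leap days per 400-year Gregorian cycle
print(days)                    # 146097
print(days / 400)              # 365.2425 (mean Gregorian year in days)
print(days // 7, days % 7)     # 20871 weeks, remainder 0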
The Great Year, or equinoctial cycle, corresponds to a complete revolution of the equinoxes around the ecliptic. Its length is about 25,700 years.

The Galactic year is the time it takes Earth's Solar System to revolve once around the galactic center. It comprises roughly 230 million Earth years.[17]

A seasonal year is the time between successive recurrences of a seasonal event such as the flooding of a river, the migration of a species of bird, the flowering of a species of plant, the first frost, or the first scheduled game of a certain sport. All of these events can have wide variations of more than a month from year to year.

In the International System of Quantities the symbol for the year as a unit of time is a, taken from the Latin word annus.[18]

In English, the abbreviations "y" or "yr" are more commonly used in non-scientific literature, but also specifically in geology and paleontology, where "kyr, myr, byr" (thousands, millions, and billions of years, respectively) and similar abbreviations are used to denote intervals of time remote from the present.[18][19][20]

NIST SP811[21] and ISO 80000-3:2006[22] support the symbol a as the unit of time for a year. In English, the abbreviations y and yr are also used.[18][19][20]

The Unified Code for Units of Measure[23] disambiguates the varying symbologies of ISO 1000, ISO 2955 and ANSI X3.50[24] by using:

where:

The International Union of Pure and Applied Chemistry (IUPAC) and the International Union of Geological Sciences have jointly recommended defining the annus, with symbol a, as the length of the tropical year in the year 2000: a = 31556925.445 seconds (approximately 365.24219 days).

This differs from the above definition of 365.25 days by about 20 parts per million. The joint document says that definitions such as the Julian year "bear an inherent, pre-programmed obsolescence because of the variability of Earth's orbital movement", but then proposes using the length of the tropical year as of 2000 AD (specified down to the millisecond), which suffers from the same problem.[25][26] (The tropical year oscillates with time by more than a minute.)

The notation has proved controversial as it conflicts with an earlier convention among geoscientists to use a specifically for years ago, and y or yr for a one-year time period.[26]

For the following, there are alternative forms which elide the consecutive vowels, such as kilannus, megannus, etc. The exponents and exponential notations are typically used for calculating and in displaying calculations, and for conserving space, as in tables of data.

In astronomy, geology, and paleontology, the abbreviation yr for years and ya for years ago are sometimes used, combined with prefixes for thousand, million, or billion.[19][29] They are not SI units, using y to abbreviate the English "year", but following ambiguous international recommendations, use either the standard English first letters as prefixes (t, m, and b) or metric prefixes (k, M, and G) or variations on metric prefixes (k, m, g). In archaeology, dealing with more recent periods, normally expressed dates, e.g. "22,000 years ago", may be used as a more accessible equivalent of a Before Present ("BP") date.

These abbreviations include:

Use of mya and bya is deprecated in modern geophysics, the recommended usage being Ma and Ga for dates Before Present, but "m.y." for the duration of epochs.[19][20] This ad hoc distinction between "absolute" time and time intervals is somewhat controversial amongst members of the Geological Society of America.[31]

Note that on graphs using ya units on the horizontal axis, time flows from right to left, which may seem counter-intuitive. If the ya units are on the vertical axis, time flows from top to bottom, which is probably easier to understand than conventional notation.[clarification needed]
en/2610.html.txt
ADDED
@@ -0,0 +1,27 @@
In linguistics, homonyms, broadly defined, are words which are homographs (words that share the same spelling, regardless of pronunciation) or homophones (words that share the same pronunciation, regardless of spelling), or both.[1] For example, according to this definition, the words row (propel with oars), row (argument) and row (a linear arrangement) are homonyms, as are the words see (vision) and sea (body of water).

A more restrictive or technical definition sees homonyms as words that are simultaneously homographs and homophones[1] – that is to say they have identical pronunciation and spelling, whilst maintaining different meanings. Examples are the pair stalk (part of a plant) and stalk (follow/harass a person) and the pair left (past tense of leave) and left (opposite of right).

A distinction is sometimes made between true homonyms, which are unrelated in origin, such as skate (glide on ice) and skate (the fish), and polysemous homonyms, or polysemes, which have a shared origin, such as mouth (of a river) and mouth (of an animal).[2][3]

The relationship between a set of homonyms is called homonymy, and the associated adjective is homonymous.

The adjective "homonymous" can additionally be used wherever two items share the same name,[4][5][clarification needed] independent of how closely they are or are not related in terms of their meaning or etymology.

The word homonym comes from the Greek ὁμώνυμος (homonymos), meaning "having the same name",[6] which is the conjunction of ὁμός (homos), "common, same, similar"[7] and ὄνομα (onoma), meaning "name".[8] Thus, it refers to two or more distinct concepts sharing the "same name" or signifier. Note: for the h sound, see rough breathing and smooth breathing.

Several similar linguistic concepts are related to homonymy. These include:

A further example of a homonym, which is both a homophone and a homograph, is fluke. Fluke can mean:

These meanings represent at least three etymologically separate lexemes, but share the one form, fluke.[11] Note that fluke is also a capitonym, in that Fluke Corporation (commonly referred to as simply "Fluke") is a manufacturer of industrial testing equipment.

Similarly, a river bank, a savings bank, a bank of switches, and a bank shot in the game of pool share a common spelling and pronunciation, but differ in meaning.

The words bow and bough are examples where there are two meanings associated with a single pronunciation and spelling (the weapon and the knot); two meanings with two different pronunciations (the knot and the act of bending at the waist); and two distinct meanings sharing the same sound but different spellings (bow, the act of bending at the waist, and bough, the branch of a tree). In addition, bow has several related but distinct meanings – a bent line is sometimes called a 'bowed' line, reflecting its similarity to the weapon. Even according to the most restrictive definitions, various pairs of sounds and meanings of bow, Bow and bough are homonyms, homographs, homophones, heteronyms, heterographs, capitonyms and are polysemous.

The words there, their, and they're are examples of three words with a single pronunciation, different spellings, and vastly different meanings; they are commonly confused in writing.

Homonymy can lead to communicative conflicts and thus trigger lexical (onomasiological) change.[12] This is known as homonymic conflict.
en/2611.html.txt
ADDED
@@ -0,0 +1,218 @@
Humans (Homo sapiens) are highly intelligent primates that have become the dominant species on Earth. They are the only extant members of the subtribe Hominina and together with chimpanzees, gorillas, and orangutans, they are part of the family Hominidae (the great apes, or hominids). Humans are terrestrial animals, characterized by their erect posture and bipedal locomotion; high manual dexterity and heavy tool use compared to other animals; open-ended and complex language use compared to other animal communications; larger, more complex brains than other primates; and highly advanced and organized societies.[3][4]

Early hominins—particularly the australopithecines, whose brains and anatomy are in many ways more similar to ancestral non-human apes—are less often referred to as "human" than hominins of the genus Homo.[5] Several of these hominins used fire, occupied much of Eurasia, and the lineage that later gave rise to Homo sapiens is thought to have diverged in Africa from other known hominins around 500,000 years ago, with the earliest fossil evidence of Homo sapiens appearing (also in Africa) around 300,000 years ago.[6] The oldest early H. sapiens fossils were found in Jebel Irhoud, Morocco, dating to about 315,000 years ago.[7][8][9][10][11] As of 2017, the oldest known skeleton of an anatomically modern Homo sapiens is the Omo-Kibish I, dated to about 196,000 years ago and discovered in southern Ethiopia.[12][13][14] Humans began to exhibit evidence of behavioral modernity at least by about 100,000–70,000 years ago[15][16][17][18][19][20] and (according to recent evidence) as far back as around 300,000 years ago, in the Middle Stone Age.[21][22][23] In several waves of migration, H. sapiens ventured out of Africa and populated most of the world.[24][25]

The spread of the large and increasing population of humans has profoundly affected much of the biosphere and millions of species worldwide. Advantages that explain this evolutionary success include a larger brain with a well-developed neocortex, prefrontal cortex and temporal lobes, which enable advanced abstract reasoning, language, problem solving, sociality, and culture through social learning. Humans use tools more frequently and effectively than any other animal: they are the only extant species to build fires, cook food, clothe themselves, and create and use numerous other technologies and arts.

Humans uniquely use such systems of symbolic communication as language and art to express themselves and exchange ideas, and also organize themselves into purposeful groups. Humans create complex social structures composed of many cooperating and competing groups, from families and kinship networks to political states. Social interactions between humans have established an extremely wide variety of values,[26] social norms, and rituals, which together undergird human society. Curiosity and the human desire to understand and influence the environment and to explain and manipulate phenomena (or events) have motivated humanity's development of science, philosophy, mythology, religion, and numerous other fields of knowledge.

Though most of human existence has been sustained by hunting and gathering in band societies,[27] many human societies transitioned to sedentary agriculture approximately 10,000 years ago,[28] domesticating plants and animals, thus enabling the growth of civilization. These human societies subsequently expanded, establishing various forms of government, religion, and culture around the world, and unifying people within regions to form states and empires. The rapid advancement of scientific and medical understanding in the 19th and 20th centuries permitted the development of fuel-driven technologies and increased lifespans, causing the human population to rise exponentially. The global human population was estimated to be near 7.8 billion in 2019.[29]

In common usage, the word "human" generally refers to the only extant species of the genus Homo—anatomically and behaviorally modern Homo sapiens.

In scientific terms, the meanings of "hominid" and "hominin" have changed during the recent decades with advances in the discovery and study of the fossil ancestors of modern humans. The previously clear boundary between humans and apes has blurred, resulting in biologists now acknowledging the hominids as encompassing multiple species, and Homo and close relatives since the split from chimpanzees as the only hominins. There is also a distinction between anatomically modern humans and archaic Homo sapiens, the earliest fossil members of the species.

The English adjective human is a Middle English loanword from Old French humain, ultimately from Latin hūmānus, the adjective form of homō "man". The word's use as a noun (with a plural: humans) dates to the 16th century.[30] The native English term man can refer to the species generally (a synonym for humanity) as well as to human males, or individuals of either sex (though this latter form is less common in contemporary English).[31]

The species binomial "Homo sapiens" was coined by Carl Linnaeus in his 18th-century work Systema Naturae.[32] The generic name "Homo" is a learned 18th-century derivation from Latin homō "man", ultimately "earthly being" (Old Latin hemō, a cognate to Old English guma "man", from PIE dʰǵʰemon-, meaning "earth" or "ground").[33] The species-name "sapiens" means "wise" or "sapient". Note that the Latin word homo refers to humans of either sex, and that "sapiens" is the singular form (while there is no such word as "sapien").[34]

The genus Homo evolved and diverged from other hominins in Africa, after the human clade split from the chimpanzee lineage of the hominids (great apes) branch of the primates. Modern humans, defined as the species Homo sapiens or, more specifically, the single extant subspecies Homo sapiens sapiens, proceeded to colonize all the continents and larger islands, arriving in Eurasia 125,000–60,000 years ago,[35][36] Australia around 40,000 years ago, the Americas around 15,000 years ago, and remote islands such as Hawaii, Easter Island, Madagascar, and New Zealand between the years 300 and 1280.[37][38]

The closest living relatives of humans are chimpanzees and bonobos (genus Pan)[39][40] and gorillas (genus Gorilla).[41] With the sequencing of human and chimpanzee genomes, current estimates of similarity between human and chimpanzee DNA sequences range between 95% and 99%.[41][42][43] By using the technique called a molecular clock, which estimates the time required for the number of divergent mutations to accumulate between two lineages, the approximate date for the split between lineages can be calculated. The gibbons (family Hylobatidae) and orangutans (genus Pongo) were the first groups to split from the line leading to the humans, then gorillas (genus Gorilla), followed by the chimpanzees (genus Pan). The splitting date between human and chimpanzee lineages is placed around 4–8 million years ago, during the late Miocene epoch.[44][45] During this split, chromosome 2 was formed from two other chromosomes, leaving humans with only 23 pairs of chromosomes, compared to 24 for the other apes.[46]
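A molecular clock estimate is, at heart, a single division: if substitutions accumulate at a roughly constant rate, the divergence time is the observed genetic distance divided by twice the per-lineage rate (twice, because both lineages accumulate changes independently after the split). A minimal sketch of that arithmetic, with illustrative numbers rather than measured data:

# Toy molecular-clock calculation (illustrative values, not real data).
divergence = 0.012   # observed genetic distance between two lineages
                     # (substitutions per site)
rate = 1.0e-9        # assumed substitution rate per site per year, per lineage

split_time = divergence / (2 * rate)
print(f"estimated split: {split_time / 1e6:.1f} million years ago")  # 6.0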
|
26 |
+
|
27 |
+
There is little fossil evidence for the divergence of the gorilla, chimpanzee and hominin lineages.[47][48] The earliest fossils that have been proposed as members of the hominin lineage are Sahelanthropus tchadensis dating from 7 million years ago, Orrorin tugenensis dating from 5.7 million years ago, and Ardipithecus kadabba dating to 5.6 million years ago. Each of these species has been argued to be a bipedal ancestor of later hominins, but all such claims are contested. It is also possible that any one of the three is an ancestor of another branch of African apes, or is an ancestor shared between hominins and other African Hominoidea (apes). The question of the relation between these early fossil species and the hominin lineage is still to be resolved. From these early species the australopithecines arose around 4 million years ago diverged into robust (also called Paranthropus) and gracile branches,[49] possibly one of which (such as A. garhi, dating to 2.5 million years ago) is a direct ancestor of the genus Homo.[50]
|
28 |
+
|
29 |
+
The earliest members of the genus Homo are Homo habilis which evolved around 2.8 million years ago.[51] Homo habilis has been considered the first species for which there is clear evidence of the use of stone tools. More recently, however, in 2015, stone tools, perhaps predating Homo habilis, have been discovered in northwestern Kenya that have been dated to 3.3 million years old.[52] Nonetheless, the brains of Homo habilis were about the same size as that of a chimpanzee, and their main adaptation was bipedalism as an adaptation to terrestrial living. During the next million years a process of encephalization began, and with the arrival of Homo erectus in the fossil record, cranial capacity had doubled. Homo erectus were the first of the hominina to leave Africa, and these species spread through Africa, Asia, and Europe between 1.3 to 1.8 million years ago. One population of H. erectus, also sometimes classified as a separate species Homo ergaster, stayed in Africa and evolved into Homo sapiens. It is believed that these species were the first to use fire and complex tools. The earliest transitional fossils between H. ergaster/erectus and archaic humans are from Africa such as Homo rhodesiensis, but seemingly transitional forms are also found at Dmanisi, Georgia. These descendants of African H. erectus spread through Eurasia from c. 500,000 years ago evolving into H. antecessor, H. heidelbergensis and H. neanderthalensis. Fossils of anatomically modern humans from date from the Middle Paleolithic at about 200,000 years ago such as the Omo remains of Ethiopia, and the fossils of Herto sometimes classified as Homo sapiens idaltu also from Ethiopia. Earlier remains, now classified as early Homo sapiens, such as the Jebel Irhoud remains from Morocco and the Florisbad Skull from South Africa are also now known to date to about 300,000 and 259,000 years old respectively.[53][6][54][55][56][57] Later fossils of archaic Homo sapiens from Skhul in Israel and Southern Europe begin around 90,000 years ago.[58]
|
30 |
+
|
31 |
+
Human evolution is characterized by a number of morphological, developmental, physiological, and behavioral changes that have taken place since the split between the last common ancestor of humans and chimpanzees. The most significant of these adaptations are 1. bipedalism, 2. increased brain size, 3. lengthened ontogeny (gestation and infancy), 4. decreased sexual dimorphism (neoteny). The relationship between all these changes is the subject of ongoing debate.[59] Other significant morphological changes included the evolution of a power and precision grip, a change first occurring in H. erectus.[60]
|
32 |
+
|
33 |
+
Bipedalism is the basic adaption of the hominin line, and it is considered the main cause behind a suite of skeletal changes shared by all bipedal hominins. The earliest bipedal hominin is considered to be either Sahelanthropus[61] or Orrorin, with Ardipithecus, a full bipedal,[62] coming somewhat later.[citation needed] The knuckle walkers, the gorilla and chimpanzee, diverged around the same time, and either Sahelanthropus or Orrorin may be humans' last shared ancestor with those animals.[citation needed] The early bipedals eventually evolved into the australopithecines and later the genus Homo.[citation needed] There are several theories of the adaptational value of bipedalism. It is possible that bipedalism was favored because it freed up the hands for reaching and carrying food, because it saved energy during locomotion, because it enabled long-distance running and hunting, or as a strategy for avoiding hyperthermia by reducing the surface exposed to direct sun.[citation needed]
|
34 |
+
|
35 |
+
The human species developed a much larger brain than that of other primates—typically 1,330 cm3 (81 cu in) in modern humans, over twice the size of that of a chimpanzee or gorilla.[63] The pattern of encephalization started with Homo habilis which at approximately 600 cm3 (37 cu in) had a brain slightly larger than chimpanzees, and continued with Homo erectus (800–1,100 cm3 (49–67 cu in)), and reached a maximum in Neanderthals with an average size of 1,200–1,900 cm3 (73–116 cu in), larger even than Homo sapiens (but less encephalized).[64] The pattern of human postnatal brain growth differs from that of other apes (heterochrony), and allows for extended periods of social learning and language acquisition in juvenile humans. However, the differences between the structure of human brains and those of other apes may be even more significant than differences in size.[65][66][67][68] The increase in volume over time has affected different areas within the brain unequally—the temporal lobes, which contain centers for language processing have increased disproportionately, as has the prefrontal cortex which has been related to complex decision making and moderating social behavior.[63] Encephalization has been tied to an increasing emphasis on meat in the diet,[69][70] or with the development of cooking,[71] and it has been proposed [72] that intelligence increased as a response to an increased necessity for solving social problems as human society became more complex.
|
36 |
+
|
37 |
+
The reduced degree of sexual dimorphism is primarily visible in the reduction of the male canine tooth relative to other ape species (except gibbons). Another important physiological change related to sexuality in humans was the evolution of hidden estrus. Humans are the only ape in which the female is intermittently fertile year round, and in which no special signals of fertility are produced by the body (such as genital swelling during estrus). Nonetheless humans retain a degree of sexual dimorphism in the distribution of body hair and subcutaneous fat, and in the overall size, males being around 25% larger than females. These changes taken together have been interpreted as a result of an increased emphasis on pair bonding as a possible solution to the requirement for increased parental investment due to the prolonged infancy of offspring.[citation needed]
|
38 |
+
|
39 |
+
By the beginning of the Upper Paleolithic period (50,000 BP), and likely significantly earlier by 100–70,000 years ago[17][18][15][16][19][20] or possibly by about 300,000 years ago[23][22][21] behavioral modernity, including language, music and other cultural universals had developed.[73][74]
|
40 |
+
As early Homo sapiens dispersed, it encountered varieties of archaic humans both in Africa and in Eurasia,
|
41 |
+
in Eurasia notably Homo neanderthalensis.
|
42 |
+
Since 2010, evidence for gene flow between archaic and modern humans during the period of roughly 100,000 to 30,000 years ago has been discovered. This includes modern human admixture in Neanderthals, Neanderthal admixture in all modern humans outside Africa,
|
43 |
+
[75][76]
|
44 |
+
Denisova hominin admixture in Melanesians[77]
|
45 |
+
as well as admixture from unnamed archaic humans to some Sub-Saharan African populations.[78]
|
46 |
+
|
47 |
+
The "out of Africa" migration of Homo sapiens took place in at least two waves, the first around 130,000 to 100,000 years ago, the second (Southern Dispersal) around 70,000 to 50,000 years ago,[79][80][81]
|
48 |
+
resulting in the colonization of Australia around 65–50,000 years ago,[82][83][84]
|
49 |
+
This recent out of Africa migration derived from East African populations, which had become separated from populations migrating to Southern, Central and Western Africa at least 100,000 years earlier.[85] Modern humans subsequently spread globally, replacing archaic humans (either through competition or hybridization). They inhabited Eurasia and Oceania by 40,000 years ago, and the Americas at least 14,500 years ago.[86][87]
|
50 |
+
|
51 |
+
Until about 12,000 years ago (the beginning of the Holocene), all humans lived as hunter-gatherers, generally in small nomadic groups known as band societies, often in caves.
|
52 |
+
|
53 |
+
The Neolithic Revolution (the invention of agriculture) took place beginning about 10,000 years ago, first in the Fertile Crescent, spreading through large parts of the Old World over the following millennia, and independently in Mesoamerica about 6,000 years ago.
|
54 |
+
Access to food surplus led to the formation of permanent human settlements, the domestication of animals and the use of metal tools for the first time in history.
|
55 |
+
|
56 |
+
Agriculture and sedentary lifestyle led to the emergence of early civilizations (the development of urban development, complex society, social stratification and writing)
|
57 |
+
from about 5,000 years ago (the Bronze Age), first beginning in Mesopotamia.[88]
|
58 |
+
|
59 |
+
Few human populations progressed to historicity, with substantial parts of the world remaining in a Neolithic, Mesolithic or Upper Paleolithic stage of development until the advent of globalisation
|
60 |
+
and modernity initiated by European exploration and colonialism.
|
61 |
+
|
62 |
+
The Scientific Revolution, Technological Revolution and the Industrial Revolution brought such discoveries as imaging technology, major innovations in transport, such as the airplane and automobile; energy development, such as coal and electricity.[89] This correlates with population growth (especially in America)[90] and higher life expectancy, the World population rapidly increased numerous times in the 19th and 20th centuries as nearly 10% of the 100 billion people who ever lived lived in the past century.[91]
|
63 |
+
|
64 |
+
With the advent of the Information Age at the end of the 20th century, modern humans live in a world that has become increasingly globalized and interconnected. As of 2010[update], almost 2 billion humans are able to communicate with each other via the Internet,[92] and 3.3 billion by mobile phone subscriptions.[93] Although connection between humans has encouraged the growth of science, art, discussion, and technology, it has also led to culture clashes and the development and use of weapons of mass destruction.[citation needed] Human population growth and industrialisation has led to environmental destruction and pollution significantly contributing to the ongoing mass extinction of other forms of life called the Holocene extinction event,[94] which may be further accelerated by global warming in the future.[95]
|
65 |
+
|
66 |
+
Early human settlements were dependent on proximity to water and, depending on the lifestyle, other natural resources used for subsistence, such as populations of animal prey for hunting and arable land for growing crops and grazing livestock. However, modern humans have a great capacity for altering their habitats by means of technology, through irrigation, urban planning, construction, transport, manufacturing goods, deforestation and desertification,[96] but human settlements continue to be vulnerable to natural disasters, especially those placed in hazardous locations and characterized by lack of quality of construction.[97] Deliberate habitat alteration is often done with the goals of increasing material wealth, increasing thermal comfort, improving the amount of food available, improving aesthetics, or improving ease of access to resources or other human settlements. With the advent of large-scale trade and transport infrastructure, proximity to these resources has become unnecessary, and in many places, these factors are no longer a driving force behind the growth and decline of a population. Nonetheless, the manner in which a habitat is altered is often a major determinant in population change.[citation needed]
|
67 |
+
|
68 |
+
Technology has allowed humans to colonize six of the Earth's seven continents and adapt to virtually all climates. However the human population is not uniformly distributed on the Earth's surface, because the population density varies from one region to another and there are large areas almost completely uninhabited, like Antarctica.[98][99] Within the last century, humans have explored Antarctica, underwater environment, and outer space, although large-scale colonization of these environments is not yet feasible. With a population of over seven billion, humans are among the most numerous of the large mammals. Most humans (61%) live in Asia. The remainder live in the Americas (14%), Africa (14%), Europe (11%), and Oceania (0.5%).[100]
|
69 |
+
|
70 |
+
Human habitation within closed ecological systems in hostile environments, such as Antarctica and outer space, is expensive, typically limited in duration, and restricted to scientific, military, or industrial expeditions. Life in space has been very sporadic, with no more than thirteen humans in space at any given time.[101] Between 1969 and 1972, two humans at a time spent brief intervals on the Moon. As of July 2020, no other celestial body has been visited by humans, although there has been a continuous human presence in space since the launch of the initial crew to inhabit the International Space Station on 31 October 2000.[102] However, other celestial bodies have been visited by human-made objects.[103][104][105]
|
71 |
+
|
72 |
+
Since 1800, the human population has increased from one billion[106] to over seven billion.[107] The combined biomass of the carbon of all the humans on Earth in 2018 was estimated at ~ 60 million tons, about 10 times larger than that of all non-domesticated mammals.[108]
|
73 |
+
|
74 |
+
In 2004, some 2.5 billion out of 6.3 billion people (39.7%) lived in urban areas. In February 2008, the U.N. estimated that half the world's population would live in urban areas by the end of the year.[109] Problems for humans living in cities include various forms of pollution and crime,[110] especially in inner city and suburban slums. Both overall population numbers and the proportion residing in cities are expected to increase significantly in the coming decades.[111]
|
75 |
+
|
76 |
+
Humans have had a dramatic effect on the environment. Humans are apex predators, being rarely preyed upon by other species.[112] Currently, through land development, combustion of fossil fuels, and pollution, humans are thought to be the main contributor to global climate change.[113] If this continues at its current rate it is predicted that climate change will wipe out half of all plant and animal species over the next century.[114][115]
|
77 |
+
|
78 |
+
Most aspects of human physiology are closely homologous to corresponding aspects of animal physiology. The human body consists of the legs, the torso, the arms, the neck, and the head. An adult human body consists of about 100 trillion (1014) cells. The most commonly defined body systems in humans are the nervous, the cardiovascular, the circulatory, the digestive, the endocrine, the immune, the integumentary, the lymphatic, the musculoskeletal, the reproductive, the respiratory, and the urinary system.[116][117]
|
79 |
+
|
80 |
+
Humans, like most of the other apes, lack external tails, have several blood type systems, have opposable thumbs, and are sexually dimorphic. The comparatively minor anatomical differences between humans and chimpanzees are largely a result of human bipedalism and larger brain size. One difference is that humans have a far faster and more accurate throw than other animals. Humans are also among the best long-distance runners in the animal kingdom, but slower over short distances.[118][119] Humans' thinner body hair and more productive sweat glands help avoid heat exhaustion while running for long distances.[120]
As a consequence of bipedalism, human females have narrower birth canals. The construction of the human pelvis differs from that of other primates, as do the toes. A trade-off for the advantages of the modern human pelvis is that childbirth is more difficult and dangerous than in most mammals, especially given the larger head size of human babies compared to other primates. Human babies must turn around as they pass through the birth canal, while other primates do not; this makes humans the only species in which females usually require help from their conspecifics (other members of their own species) to reduce the risks of birthing. As a partial evolutionary solution, human fetuses are born less developed and more vulnerable. Chimpanzee babies are cognitively more developed than human babies until the age of six months, when the rapid development of human brains surpasses that of chimpanzees. Another difference between women and chimpanzee females is that women go through menopause and become infertile decades before the end of their lives; all species of non-human apes are capable of giving birth until death. Menopause probably developed because it provides an evolutionary advantage: it frees older females to devote more caring time to their relatives' young.[119]
Apart from bipedalism, humans differ from chimpanzees mostly in smelling, hearing, digesting proteins, brain size, and the capacity for language. Human brains are about three times larger than those of chimpanzees. More importantly, the brain-to-body ratio is much higher in humans, and humans have a significantly more developed cerebral cortex with a larger number of neurons. The mental abilities of humans are remarkable compared to those of other apes, and the human capacity for speech is unique among primates. Humans are able to create new and complex ideas and to develop technology, which is unprecedented among other organisms on Earth.[119]
It is estimated that the worldwide average height for an adult human male is about 172 cm (5 ft 7 1⁄2 in), while the worldwide average height for an adult human female is about 158 cm (5 ft 2 in). Shrinkage of stature may begin in middle age in some individuals but tends to be typical in the extremely aged.[121] Throughout history, human populations have universally become taller, probably as a consequence of better nutrition, healthcare, and living conditions.[122] The average mass of an adult human is 54–64 kg (119–141 lb) for females and 70–83 kg (154–183 lb) for males.[123][124] Like many other traits, body weight and body type are influenced by both genetic susceptibility and environment, and vary greatly among individuals (see obesity).[125][126]
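The imperial equivalents quoted in parentheses follow from the standard conversion of 1 inch = 2.54 cm; a minimal Python sketch (the helper name is ours, for illustration only):

    # Convert a height in centimetres to feet and remaining inches (1 in = 2.54 cm).
    def cm_to_ft_in(cm):
        inches = cm / 2.54
        return int(inches // 12), inches % 12

    print("{} ft {:.1f} in".format(*cm_to_ft_in(158)))  # 5 ft 2.2 in, as quoted for females
    print("{} ft {:.1f} in".format(*cm_to_ft_in(172)))  # 5 ft 7.7 in; the text rounds to 5 ft 7 1/2 in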
Humans have a density of hair follicles comparable to other apes. However, human body hair is vellus hair, most of which is so short and wispy as to be practically invisible. In contrast (and unusually among species), a follicle of terminal hair on the human scalp can grow for many years before falling out.[127][128] Humans have about 2 million sweat glands spread over their entire bodies, many more than chimpanzees, whose sweat glands are scarce and mainly located on the palms of the hands and the soles of the feet.[129] Humans have the largest number of eccrine sweat glands of any species.
The dental formula of humans is 2.1.2.3 / 2.1.2.3: two incisors, one canine, two premolars, and three molars in each quadrant of the upper and lower jaws. Humans have proportionately shorter palates and much smaller teeth than other primates. They are the only primates to have short, relatively flush canine teeth. Humans have characteristically crowded teeth, with gaps from lost teeth usually closing up quickly in young individuals. Humans are gradually losing their third molars, with some individuals having them congenitally absent.[130]
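As a quick sanity check of the notation: summing the per-quadrant counts for the upper and lower jaws and doubling for the left and right sides yields the familiar 32 adult teeth. A minimal sketch:

    # Dental formula 2.1.2.3 / 2.1.2.3: tooth counts per quadrant
    # (incisors, canines, premolars, molars) for the upper and lower jaws.
    upper = [2, 1, 2, 3]
    lower = [2, 1, 2, 3]

    # Each row covers one side of the jaw, so double for left and right.
    total_teeth = 2 * (sum(upper) + sum(lower))
    print(total_teeth)  # 32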
Like most animals, humans are a diploid eukaryotic species. Each somatic cell has two sets of 23 chromosomes, each set received from one parent; gametes have only one set of chromosomes, which is a mixture of the two parental sets. Among the 23 pairs of chromosomes there are 22 pairs of autosomes and one pair of sex chromosomes. Like other mammals, humans have an XY sex-determination system, so that females have the sex chromosomes XX and males have XY.[131]
A rough and incomplete human genome, assembled as a composite of a number of individuals, was completed in 2003, and efforts are currently being made to sample the genetic diversity of the species (see International HapMap Project). By present estimates, humans have approximately 22,000 genes.[132] The variation in human DNA is very small compared to that of other species, possibly reflecting a population bottleneck during the Late Pleistocene (around 100,000 years ago) in which the human population was reduced to a small number of breeding pairs.[133][134] Nucleotide diversity is based on single mutations called single nucleotide polymorphisms (SNPs). The nucleotide diversity between humans is about 0.1%, i.e. 1 difference per 1,000 base pairs.[135][136] Since the human genome has about 3 billion nucleotides, a difference of 1 in 1,000 between two randomly chosen humans amounts to about 3 million nucleotide differences. Most of these SNPs are neutral, but some (about 3 to 5%) are functional and influence phenotypic differences between humans through alleles.
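The figures above are internally consistent, as a back-of-the-envelope check shows (the round numbers from the text are treated as exact here purely for illustration):

    # ~0.1% nucleotide diversity means ~1 difference per 1,000 base pairs.
    genome_size = 3_000_000_000      # ~3 billion nucleotides in the human genome
    pairwise_diversity = 0.001       # ~0.1%

    differences = genome_size * pairwise_diversity
    print(f"{differences:,.0f} expected differences")   # ~3,000,000

    # Roughly 3-5% of these SNPs are estimated to be functional.
    print(f"{0.03 * differences:,.0f} to {0.05 * differences:,.0f} functional variants")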
By comparing the parts of the genome that are not under natural selection and which therefore accumulate mutations at a fairly steady rate, it is possible to reconstruct a genetic tree encompassing the entire human species since the last shared ancestor. Each time a certain mutation (SNP) appears in an individual and is passed on to his or her descendants, a haplogroup is formed, comprising all of that individual's descendants who also carry the mutation. By comparing mitochondrial DNA, which is inherited only from the mother, geneticists have concluded that the last female common ancestor whose genetic marker is found in all modern humans, the so-called mitochondrial Eve, must have lived around 90,000 to 200,000 years ago.[137][138][139]
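The reasoning here is the molecular clock: if neutral mutations accumulate at a roughly constant rate, the number of differences between two lineages grows linearly with time since divergence, so divergence time can be estimated as T ≈ D / (2μ), where D is the observed differences per site and μ the per-site, per-year mutation rate. A simplified sketch; the numeric inputs are assumed for illustration and are not figures from the text:

    # Simplified molecular-clock estimate. Both lineages accumulate mutations
    # independently at rate mu, hence the factor of 2 in the denominator.
    def divergence_time_years(differences_per_site, mutation_rate_per_site_per_year):
        return differences_per_site / (2 * mutation_rate_per_site_per_year)

    # Assumed illustrative inputs: 0.4% sequence divergence and a rate of
    # 1e-8 mutations per site per year.
    print(divergence_time_years(0.004, 1e-8))  # 200000.0 years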
Human accelerated regions, first described in August 2006,[140][141] are a set of 49 segments of the human genome that are conserved throughout vertebrate evolution but are strikingly different in humans. They are named according to their degree of difference between humans and their nearest animal relative, the chimpanzee, with HAR1 showing the largest degree of human–chimpanzee difference. Found by scanning through genomic databases of multiple species, some of these highly mutated areas may contribute to human-specific traits.
The forces of natural selection have continued to operate on human populations, with evidence that certain regions of the genome have displayed directional selection over the past 15,000 years.[142]
As with other mammals, human reproduction takes place by internal fertilization via sexual intercourse. During this process, the male inserts his erect penis into the female's vagina and ejaculates semen, which contains sperm. The sperm travels through the vagina and cervix into the uterus or Fallopian tubes for fertilization of the ovum. Upon fertilization and implantation, gestation then occurs within the female's uterus.
The zygote divides inside the female's uterus to become an embryo, which over a period of 38 weeks (9 months) of gestation becomes a fetus. At the end of this period, the fully grown fetus is delivered from the woman's body and breathes independently as an infant for the first time. At this point, most modern cultures recognize the baby as a person entitled to the full protection of the law, though some jurisdictions extend various levels of personhood earlier, to human fetuses while they remain in the uterus.
Compared with other species, human childbirth is dangerous. Painful labors lasting 24 hours or more are not uncommon and sometimes lead to the death of the mother, the child or both.[143] This is because of both the relatively large fetal head circumference and the mother's relatively narrow pelvis.[144][145] The chances of a successful labor increased significantly during the 20th century in wealthier countries with the advent of new medical technologies. In contrast, pregnancy and natural childbirth remain hazardous ordeals in developing regions of the world, with maternal death rates approximately 100 times greater than in developed countries.[146]
In developed countries, infants are typically 3–4 kg (7–9 lb) in weight and 50–60 cm (20–24 in) in height at birth.[147] However, low birth weight is common in developing countries, and contributes to the high levels of infant mortality in these regions.[148] Helpless at birth, humans continue to grow for some years, typically reaching sexual maturity at 12 to 15 years of age. Females continue to develop physically until around the age of 18, whereas male development continues until around age 21. The human life span can be split into a number of stages: infancy, childhood, adolescence, young adulthood, adulthood and old age. The lengths of these stages, however, have varied across cultures and time periods. Compared to other primates, humans experience an unusually rapid growth spurt during adolescence, during which the body grows 25% in size. Chimpanzees, for example, grow only 14%, with no pronounced spurt.[149] The presence of the growth spurt is probably necessary to keep children physically small until they are psychologically mature. Humans are one of the few species in which females undergo menopause. It has been proposed that menopause increases a woman's overall reproductive success by allowing her to invest more time and resources in her existing offspring, and in turn their children (the grandmother hypothesis), rather than by continuing to bear children into old age.[150][151]
Evidence-based studies indicate that the life span of an individual depends on two major factors, genetics and lifestyle choices.[152] For various reasons, including biological/genetic causes,[153] women live on average about four years longer than men—as of 2013 the global average life expectancy at birth of a girl is estimated at 70.2 years compared to 66.1 for a boy.[154] There are significant geographical variations in human life expectancy, mostly correlated with economic development—for example life expectancy at birth in Hong Kong is 84.8 years for girls and 78.9 for boys, while in Swaziland, primarily because of AIDS, it is 31.3 years for both sexes.[155] The developed world is generally aging, with the median age around 40 years. In the developing world the median age is between 15 and 20 years. While one in five Europeans is 60 years of age or older, only one in twenty Africans is 60 years of age or older.[156] The number of centenarians (humans of age 100 years or older) in the world was estimated by the United Nations at 210,000 in 2002.[157] Jeanne Calment is widely believed to have reached the age of 122;[158] higher ages have been claimed but are unsubstantiated.
Humans are omnivorous, capable of consuming a wide variety of plant and animal material.[159][160] Varying with available food sources in regions of habitation, and also varying with cultural and religious norms, human groups have adopted a range of diets, from purely vegan to primarily carnivorous. In some cases, dietary restrictions in humans can lead to deficiency diseases; however, stable human groups have adapted to many dietary patterns through both genetic specialization and cultural conventions to use nutritionally balanced food sources.[161] The human diet is prominently reflected in human culture, and has led to the development of food science.
Until the development of agriculture approximately 10,000 years ago, Homo sapiens employed a hunter-gatherer method as their sole means of food collection. This involved combining stationary food sources (such as fruits, grains, tubers, mushrooms, insect larvae, and aquatic mollusks) with wild game, which had to be hunted and killed in order to be consumed.[162] It has been proposed that humans have used fire to prepare and cook food since the time of Homo erectus.[163] Around ten thousand years ago, humans developed agriculture,[164] which substantially altered their diet. This change in diet may also have altered human biology: the spread of dairy farming provided a new and rich source of food, leading to the evolution of the ability to digest lactose in some adults.[165][166] Agriculture led to increased populations, the development of cities, and, because of increased population density, the wider spread of infectious diseases. The types of food consumed, and the way in which they are prepared, have varied widely by time, location, and culture.
In general, humans can survive for two to eight weeks without food, depending on stored body fat. Survival without water is usually limited to three or four days. About 36 million humans die every year from causes directly or indirectly related to starvation.[167] Childhood malnutrition is also common and contributes to the global burden of disease.[168] However, global food distribution is not even, and obesity among some human populations has increased rapidly, leading to health complications and increased mortality in some developed countries and a few developing ones. Worldwide, over one billion people are obese,[169] while in the United States 35% of people are obese, leading to this being described as an "obesity epidemic."[170] Obesity is caused by consuming more calories than are expended, so excessive weight gain is usually caused by an energy-dense diet.[169]
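To make that energy-balance statement concrete, a back-of-the-envelope sketch using the common rule of thumb that a kilogram of body fat stores roughly 7,700 kcal (an assumption of this example, not a figure from the text):

    # A sustained daily calorie surplus accumulates as body fat.
    intake_kcal = 2700        # assumed daily intake
    expenditure_kcal = 2500   # assumed daily expenditure
    KCAL_PER_KG_FAT = 7700    # rule-of-thumb energy density of body fat

    surplus = intake_kcal - expenditure_kcal          # 200 kcal/day
    yearly_gain_kg = surplus * 365 / KCAL_PER_KG_FAT
    print(f"~{yearly_gain_kg:.1f} kg gained per year")  # ~9.5 kg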
No two humans—not even monozygotic twins—are genetically identical. Genes and environment influence human biological variation in visible characteristics, physiology, disease susceptibility and mental abilities. The exact influence of genes and environment on certain traits is not well understood.[171][172]
Most current genetic and archaeological evidence supports a recent single origin of modern humans in East Africa,[173] with first migrations placed at 60,000 years ago. Compared to the great apes, human gene sequences—even among African populations—are remarkably homogeneous.[174] On average, genetic similarity between any two humans is 99.5%-99.9%.[175][176][177][178][179][180] There is about 2–3 times more genetic diversity within the wild chimpanzee population than in the entire human gene pool.[181][182][183]
The human body's ability to adapt to different environmental stresses is remarkable, allowing humans to acclimatize to a wide variety of temperatures, humidity, and altitudes. As a result, humans are a cosmopolitan species found in almost all regions of the world, including tropical rainforest, arid desert, extremely cold arctic regions, and heavily polluted cities. Most other species are confined to a few geographical areas by their limited adaptability.[184]
There is biological variation in the human species—with traits such as blood type, genetic diseases, cranial features, facial features, organ systems, eye color, hair color and texture, height and build, and skin color varying across the globe. Human body types vary substantially. The typical height of an adult human is between 1.4 and 1.9 m (4 ft 7 in and 6 ft 3 in), although this varies significantly depending on, among other things, sex, ethnic origin,[185][186] and even family bloodlines. Body size is partly determined by genes and is also significantly influenced by environmental factors such as diet, exercise, and sleep patterns, especially as influences in childhood. Adult height for each sex in a particular ethnic group approximately follows a normal distribution.

Those aspects of genetic variation that give clues to human evolutionary history, or are relevant to medical research, have received particular attention. For example, the genes that allow adult humans to digest lactose are present in high frequencies in populations that have long histories of cattle domestication, suggesting that natural selection favored that gene in populations that depend on cow milk. Some hereditary diseases such as sickle cell anemia are frequent in populations where malaria has been endemic throughout history—it is believed that the same gene gives increased resistance to malaria among those who are unaffected carriers of the gene. Similarly, populations that have long inhabited specific climates, such as arctic or tropical regions or high altitudes, tend to have developed specific phenotypes that are beneficial in those environments—short stature and stocky build in cold regions, tall and lanky builds in hot regions, and high lung capacities at high altitudes. Some populations have evolved highly specific adaptations to particular environmental conditions, such as those advantageous to ocean-dwelling lifestyles and freediving in the Bajau.[187] Skin color tends to vary clinally, with darker skin mostly around the equator—where the added protection from the sun's ultraviolet radiation is thought to give an evolutionary advantage—and lighter skin tones closer to the poles.[188][189][190][191]
The hue of human skin and hair is determined by the presence of pigments called melanins. Human skin color can range from darkest brown to lightest peach, or even nearly white or colorless in cases of albinism.[183] Human hair ranges in color from white to red to blond to brown to black, the last being most frequent.[192] Hair color depends on the amount of melanin (an effective sun-blocking pigment) in the skin and hair, with melanin concentrations in hair fading with increased age, leading to grey or even white hair. Most researchers believe that skin darkening is an adaptation that evolved as protection against ultraviolet solar radiation; it also helps preserve folate, which is destroyed by ultraviolet radiation. Light skin pigmentation, in turn, protects against depletion of vitamin D, which requires sunlight to make.[193] Skin pigmentation of contemporary humans is clinally distributed across the planet and in general correlates with the level of ultraviolet radiation in a particular geographic area. Human skin also has a capacity to darken (tan) in response to exposure to ultraviolet radiation.[194][195][196]
Within the human species, the greatest degree of genetic variation exists between males and females. While the nucleotide genetic variation of individuals of the same sex across global populations is no greater than 0.1%–0.5%, the genetic difference between males and females is between 1% and 2%. This genetic difference contributes to anatomical, hormonal, neural, and physiological differences between men and women, although the exact degree and nature of social and environmental influences on the sexes are not completely understood. Males on average are 15% heavier and 15 cm (6 in) taller than females. Body types, organs and organ systems, hormonal levels, sensory systems, and muscle mass all differ between the sexes. On average, men have about 40–50% more upper body strength and 20–30% more lower body strength than women. Women generally have a higher body fat percentage than men. Women have lighter skin than men of the same population; this has been explained by a higher need for vitamin D (which is synthesized by sunlight) in females during pregnancy and lactation. As there are chromosomal differences between females and males, some X- and Y-chromosome-related conditions and disorders affect only men or only women. Other conditional differences between males and females are not related to sex chromosomes. Even after allowing for body weight and volume, the male voice is usually an octave deeper than the female voice. Women have a longer life span in almost every population around the world.[197][198][199][200][201][202][203][204][205]
Males typically have larger tracheae and branching bronchi, with about 30% greater lung volume per unit body mass. They have larger hearts, a 10% higher red blood cell count, and higher hemoglobin, hence greater oxygen-carrying capacity. They also have higher circulating clotting factors (vitamin K, prothrombin and platelets). These differences lead to faster healing of wounds and higher peripheral pain tolerance.[206] Females typically have more white blood cells (stored and circulating), more granulocytes, and more B and T lymphocytes. Additionally, they produce more antibodies at a faster rate than males; hence they develop fewer infectious diseases, and these persist for shorter periods.[206] Ethologists argue that females, interacting with other females and multiple offspring in social groups, have experienced such traits as a selective advantage.[207][208][209][210][211] According to Daly and Wilson, "The sexes differ more in human beings than in monogamous mammals, but much less than in extremely polygamous mammals."[212] Given, however, that sexual dimorphism in the closest relatives of humans is much greater than among humans, the human clade must be characterized by decreasing sexual dimorphism, probably due to less competitive mating patterns. One proposed explanation is that human sexuality has developed more in common with that of its close relative the bonobo, which exhibits similar sexual dimorphism, is polygynandrous, and uses recreational sex to reinforce social bonds and reduce aggression.[213]
Humans of the same sex are 99.5% genetically identical. There is relatively little variation between human geographical populations, and most of the variation that occurs is at the individual level.[183][214][215] Of the 0.1%-0.5% of human genetic differentiation, 85% exists within any randomly chosen local population, be they Italians, Koreans, or Kurds. Two randomly chosen Koreans may be genetically almost as different as a Korean and an Italian. Genetic data shows that no matter how population groups are defined, two people from the same population group are almost as different from each other as two people from any two different population groups.[183][216][217][218]
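The "variation within populations" claim can be illustrated with Wright's fixation index F_ST for a single biallelic locus: the within-population share of diversity is 1 − F_ST. A toy sketch with made-up allele frequencies (assumed values, chosen only to land near the quoted ~85%):

    # Toy apportionment of genetic diversity across three hypothetical populations.
    freqs = [0.25, 0.50, 0.75]   # assumed allele frequencies at one biallelic locus

    def heterozygosity(p):
        return 2 * p * (1 - p)

    H_S = sum(heterozygosity(p) for p in freqs) / len(freqs)  # mean within-population
    p_bar = sum(freqs) / len(freqs)
    H_T = heterozygosity(p_bar)                               # total, pooled
    F_ST = (H_T - H_S) / H_T

    print(f"within-population share: {1 - F_ST:.0%}")  # ~83%, close to the quoted ~85%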
Current genetic research has demonstrated that human populations native to the African continent are the most genetically diverse.[219] Human genetic diversity decreases in native populations with migratory distance from Africa, and this is thought to be the result of bottlenecks during human migration.[220][221] Humans have lived in Africa for the longest time, which has allowed accumulation of a higher diversity of genetic mutations in these populations. Only part of Africa's population migrated out of the continent into Eurasia, bringing just part of the original African genetic variety with them. Non-African populations, however, acquired new genetic inputs from local admixture with archaic populations, and thus have much larger amounts of variation from Neanderthals and Denisovans than is found in Africa.[222] African populations also harbour the highest number of private genetic variants, or those not found in other places of the world. While many of the common variants found in populations outside of Africa are also found on the African continent, there are still large numbers which are private to these regions, especially Oceania and the Americas.[183][222] Furthermore, recent studies have found that populations in sub-Saharan Africa, and particularly West Africa, have ancestral genetic variation which predates modern humans and has been lost in most non-African populations. This ancestry is thought to originate from admixture with an unknown archaic hominin that diverged before the split of Neanderthals and modern humans.[223][224]
The geographical distribution of human variation is complex and constantly shifts through time, reflecting humanity's complicated evolutionary history. Most human biological variation is clinally distributed and blends gradually from one area to the next. Groups of people around the world have different frequencies of polymorphic genes. Furthermore, different traits are often non-concordant, and each has its own clinal distribution. Adaptability varies both from person to person and from population to population. The most efficient adaptive responses are found in geographical populations where the environmental stimuli are the strongest (e.g. Tibetans are highly adapted to high altitudes). The clinal geographic genetic variation is further complicated by the migration and mixing between human populations that has been occurring since prehistoric times.[183][225][226][227][228][229]
Human variation is highly non-concordant: many of the genes do not cluster together and are not inherited together. Skin and hair color are mostly uncorrelated with height, weight, or athletic ability. Different traits do not share the same patterns of geographic variation: skin colour, for instance, generally varies with latitude, while height and hair colour follow different distributions. There may be statistical correlations between particular features in a population, but different features might not be expressed or inherited together. Thus, genes which code for some physical traits—such as skin color, hair color, or height—represent a minuscule portion of the human genome and might not correlate with genetic affinity. Dark-skinned populations found in Africa, Australia, and South Asia are not closely related to each other.[196][228][229][230][231][232] Even within the same region, physical phenotype can be unrelated to genetic affinity: despite the pygmy populations of South East Asia (Andamanese) sharing physical features with African pygmy populations, such as short stature, dark skin, and curly hair, they are not genetically closely related to those populations.[233] From a genetic perspective, variants affecting superficial anatomical features such as skin color carry little information—they involve a few hundred of the billions of nucleotides in a person's DNA.[234] Individuals with the same morphology do not necessarily cluster with each other by lineage, and a given lineage does not include only individuals with the same trait complex.[183][217][235]
Due to practices of group endogamy, allele frequencies cluster locally around kin groups and lineages, or by national, ethnic, cultural and linguistic boundaries, giving a detailed degree of correlation between genetic clusters and population groups when considering many alleles simultaneously. Despite this, genetic boundaries around local populations do not biologically mark off any fully discrete groups of humans. Much of human variation is continuous, often with no clear points of demarcation.[235][236][227][228][237][238][239][240][241][242]
The human brain, the focal point of the central nervous system in humans, controls the peripheral nervous system. In addition to controlling "lower," involuntary, or primarily autonomic activities such as respiration and digestion, it is also the locus of "higher" order functioning such as thought, reasoning, and abstraction.[243] These cognitive processes constitute the mind, and, along with their behavioral consequences, are studied in the field of psychology.
Generally regarded as more capable of these higher-order activities, the human brain is believed to confer greater general intelligence than that of any other known species. While some non-human species are capable of creating structures and using simple tools—mostly through instinct and mimicry—human technology is vastly more complex and is constantly evolving and improving through time.
Humans are generally diurnal. The average sleep requirement is between seven and nine hours per day for an adult and nine to ten hours per day for a child; elderly people usually sleep for six to seven hours. Having less sleep than this is common among humans, even though sleep deprivation can have negative health effects. A sustained restriction of adult sleep to four hours per day has been shown to correlate with changes in physiology and mental state, including reduced memory, fatigue, aggression, and bodily discomfort.[244] During sleep humans dream. In dreaming humans experience sensory images and sounds, in a sequence which the dreamer usually perceives more as an apparent participant than as an observer. Dreaming is stimulated by the pons and mostly occurs during the REM phase of sleep.
Humans are one of the relatively few species to have sufficient self-awareness to recognize themselves in a mirror.[245] By around 18 months of age, most human children are aware that the mirror image is not another person.[246]
The human brain perceives the external world through the senses, and each individual human is influenced greatly by his or her experiences, leading to subjective views of existence and the passage of time. Humans are variously said to possess consciousness, self-awareness, and a mind, which correspond roughly to the mental processes of thought. The mind is said to possess qualities such as self-awareness, sentience, sapience, and the ability to perceive the relationship between oneself and one's environment. The extent to which the mind constructs or experiences the outer world is a matter of debate, as are the definitions and validity of many of the terms used above.
The physical aspects of the mind and brain, and by extension of the nervous system, are studied in the field of neurology, the more behavioral in the field of psychology, and a sometimes loosely defined area between in the field of psychiatry, which treats mental illness and behavioral disorders. Psychology does not necessarily refer to the brain or nervous system, and can be framed purely in terms of phenomenological or information processing theories of the mind. Increasingly, however, an understanding of brain functions is being included in psychological theory and practice, particularly in areas such as artificial intelligence, neuropsychology, and cognitive neuroscience.
The nature of thought is central to psychology and related fields. Cognitive psychology studies cognition, the mental processes underlying behavior. It uses information processing as a framework for understanding the mind; perception, learning, problem solving, memory, attention, language, and emotion are all well-researched areas. Cognitive psychology is associated with a school of thought known as cognitivism, whose adherents argue for an information-processing model of mental function, informed by positivism and experimental psychology. Techniques and models from cognitive psychology are widely applied and form the mainstay of psychological theories in many areas of both research and applied psychology. Largely focusing on the development of the human mind through the life span, developmental psychology seeks to understand how people come to perceive, understand, and act within the world and how these processes change as they age. This may focus on intellectual, cognitive, neural, social, or moral development. Psychologists have developed intelligence tests and the concept of intelligence quotient in order to assess the relative intelligence of human beings and study its distribution among populations.[247]
Some philosophers divide consciousness into phenomenal consciousness, which is experience itself, and access consciousness, which is the processing of the things in experience.[248] Phenomenal consciousness is the state of being conscious, such as when one says "I am conscious." Access consciousness is being conscious of something in relation to abstract concepts, such as when one says "I am conscious of these words." Various forms of access consciousness include awareness, self-awareness, conscience, stream of consciousness, Husserl's phenomenology, and intentionality. The concept of phenomenal consciousness, in modern history, according to some, is closely related to the concept of qualia. Social psychology links sociology with psychology in their shared study of the nature and causes of human social interaction, with an emphasis on how people think about each other and how they relate to each other. Behavior and mental processes, both human and non-human, can also be described through animal cognition, ethology, evolutionary psychology, and comparative psychology. Human ecology is an academic discipline that investigates how humans and human societies interact with both their natural environment and the human social environment.
Human motivation is not yet wholly understood. From a psychological perspective, Maslow's hierarchy of needs is a well-established theory that describes motivation as the process of satisfying certain needs in ascending order of complexity.[249] From a more general, philosophical perspective, human motivation can be defined as a commitment to, or withdrawal from, various goals requiring the application of human ability. Furthermore, incentive and preference are both factors, as are any perceived links between incentives and preferences. Volition may also be involved, in which case willpower is also a factor. Ideally, both motivation and volition ensure the selection, striving for, and realization of goals in an optimal manner, a function beginning in childhood and continuing throughout a lifetime in a process known as socialization.[250]
Happiness, or the state of being happy, is a human emotional condition. The definition of happiness is a common philosophical topic. Some people might define it as the best condition that a human can have—a condition of mental and physical health. Others define it as freedom from want and distress; consciousness of the good order of things; assurance of one's place in the universe or society.
Emotion has a significant influence on, or can even be said to control, human behavior, though historically many cultures and philosophers have for various reasons discouraged allowing this influence to go unchecked. Emotional experiences perceived as pleasant, such as love, admiration, or joy, contrast with those perceived as unpleasant, like hate, envy, or sorrow. The Stoics believed excessive emotion was harmful, while some Sufi teachers felt certain extreme emotions could yield a conceptual perfection, what is often translated as ecstasy.
In modern scientific thought, certain refined emotions are considered a complex neural trait innate in a variety of domesticated and non-domesticated mammals, commonly developed in reaction to superior survival mechanisms and intelligent interaction with each other and the environment; as such, refined emotion is not in all cases as discrete and separate from natural neural function as was once assumed. However, it has been noted that when humans live together in societies, uninhibited acting on extreme emotion can lead to social disorder and crime.
For humans, sexuality has important social functions: it creates physical intimacy, bonds, and hierarchies among individuals, besides ensuring biological reproduction. Sexual desire, or libido, is experienced as a bodily urge, often accompanied by strong emotions such as love, ecstasy, and jealousy. The significance of sexuality in the human species is reflected in a number of physical features, among them hidden ovulation, the evolution of an external scrotum and (among great apes) a relatively large penis suggesting sperm competition, the absence of an os penis, permanent secondary sexual characteristics, and the forming of pair bonds based on sexual attraction as a common social structure. Unlike other primates, which often advertise estrus through visible signs, human females do not have distinct or visible signs of ovulation, and they experience sexual desire outside of their fertile periods.
Human choices in acting on sexuality are commonly influenced by social norms, which vary between cultures. Restrictions are often determined by religious beliefs or social customs. People can fall anywhere along a continuous scale of sexual orientation,[251] although most humans are heterosexual.[252] There is considerably more evidence supporting nonsocial, biological causes of sexual orientation than social ones, especially for males.[252] Recent research, including neurology and genetics, suggests that other aspects of human sexuality are biologically influenced as well.[253]
Both the mother and the father provide care for human offspring, in contrast to other primates, where parental care is mostly restricted to mothers.[254]
Humanity's unprecedented set of intellectual skills was a key factor in the species' eventual technological advancement and concomitant domination of the biosphere.[259] Disregarding extinct hominids, humans are the only animals known to teach generalizable information,[260] innately deploy recursive embedding to generate and communicate complex concepts,[261] engage in the "folk physics" required for competent tool design,[262][263] or cook food in the wild.[264] Human traits and behaviors that are unusual (though not necessarily unique) among species include starting fires,[265] phoneme structuring,[266] self-recognition in mirror tests,[267] habitual bipedal walking,[268] and vocal learning.[269] Scholars debate whether humans evolved the unusual trait of natural persistence hunting,[270][271] and also debate to what extent humans are the only animals with a theory of mind.[272] Even compared with other social animals, humans have an unusually high degree of flexibility in their facial expressions.[273] Humans are the only animals known to cry emotional tears.[274]
Humans are highly social beings and tend to live in large complex social groups. More than any other creature, humans are capable of using systems of communication for self-expression, the exchange of ideas, and organization, and as such have created complex social structures composed of many cooperating and competing groups. Human groups range from the size of families to nations. Social interactions between humans have established an extremely wide variety of values, social norms, and rituals, which together form the basis of human society.
Culture is defined here as patterns of complex symbolic behavior, i.e. all behavior that is not innate but which has to be learned through social interaction with others; such as the use of distinctive material and symbolic systems, including language, ritual, social organization, traditions, beliefs and technology.
While many species communicate, language is unique to humans, a defining feature of humanity, and a cultural universal. Unlike the limited systems of other animals, human language is open—an infinite number of meanings can be produced by combining a limited number of symbols. Human language also has the capacity of displacement, using words to represent things and happenings that are not presently or locally occurring but reside in the shared imagination of interlocutors.[130] Language differs from other forms of communication in that it is modality independent; the same meanings can be conveyed through different media: aurally in speech, visually by sign language or writing, and even through tactile media such as braille. Language is central to the communication between humans, and to the sense of identity that unites nations, cultures and ethnic groups. The invention of writing systems at least five thousand years ago allowed the preservation of language on material objects, and was a major technological advancement. The science of linguistics describes the structure and function of language and the relationship between languages. There are approximately six thousand different languages currently in use, including sign languages, and many thousands more that are extinct.[275]
The division of humans into male and female sexes has been marked culturally by a corresponding division of roles, norms, practices, dress, behavior, rights, duties, privileges, status, and power. Cultural differences by gender have often been believed to have arisen naturally out of a division of reproductive labor; the biological fact that women give birth led to their further cultural responsibility for nurturing and caring for children.[276] Gender roles have varied historically, and challenges to predominant gender norms have recurred in many societies.
All human societies organize, recognize and classify types of social relationships based on relations between parents and children (consanguinity), and relations through marriage (affinity). These kinds of relations are generally called kinship relations. In most societies kinship places mutual responsibilities and expectations of solidarity on the individuals so related, and those who recognize each other as kinsmen come to form networks through which other social institutions can be regulated. Among the many functions of kinship is the ability to form descent groups, groups of people sharing a common line of descent, which can function as political units such as clans. Another function is the way in which kinship unites families through marriage, forming kinship alliances between groups of wife-takers and wife-givers. Such alliances also often have important political and economic ramifications, and may result in the formation of political organization above the community level. Kinship relations often include regulations for whom an individual should or shouldn't marry. All societies have rules of incest taboo, according to which marriage between certain kinds of kin relations is prohibited; such rules vary widely between cultures. Some societies also have rules of preferential marriage with certain kin relations, frequently with either cross or parallel cousins. Rules and norms for marriage and social behavior among kinsfolk are often reflected in the systems of kinship terminology in the various languages of the world. In many societies kinship relations can also be formed through forms of co-habitation, adoption, fostering, or companionship, which also tend to create relations of enduring solidarity (nurture kinship).
Humans often form ethnic groups; such groups tend to be larger than kinship networks and to be organized around a common identity defined variously in terms of shared ancestry and history, shared cultural norms and language, or shared biological phenotype. Such ideologies of shared characteristics are often perpetuated in the form of powerful, compelling narratives that give legitimacy and continuity to the set of shared values. Ethnic groupings often correspond to some level of political organization such as the band, tribe, city state or nation. Although ethnic groups appear and disappear through history, members of ethnic groups often conceptualize their groups as having histories going back into the deep past. Such ideologies give ethnicity a powerful role in defining social identity and in constructing solidarity between members of an ethno-political unit. This unifying property of ethnicity has been closely tied to the rise of the nation state as the predominant form of political organization in the 19th and 20th centuries.[277][278][279][280][281][282]
Society is the system of organizations and institutions arising from interaction between humans. Within a society people can be divided into different groups according to their income, wealth, power, reputation, etc., but the structure of social stratification and the degree of social mobility differs, especially between modern and traditional societies.[283] A state is an organized political community occupying a definite territory, having an organized government, and possessing internal and external sovereignty. Recognition of the state's claim to independence by other states, enabling it to enter into international agreements, is often important to the establishment of its statehood. The "state" can also be defined in terms of domestic conditions, specifically, as conceptualized by Max Weber, "a state is a human community that (successfully) claims the monopoly of the 'legitimate' use of physical force within a given territory."[284]
Government can be defined as the political means of creating and enforcing laws, typically via a bureaucratic hierarchy. Politics is the process by which decisions are made within groups; this process often involves conflict as well as compromise. Although the term is generally applied to behavior within governments, politics is also observed in all human group interactions, including corporate, academic, and religious institutions. Many different political systems exist, as do many different ways of understanding them, and many definitions overlap. Examples of forms of government include monarchy, the communist state, military dictatorship, theocracy, and liberal democracy, the last of which is considered dominant today. All of these forms of government have a direct relationship with economics.
Trade is the voluntary exchange of goods and services, and is a form of economic activity. A mechanism that allows trade is called a market. Early trade took the form of barter, the direct exchange of goods and services; modern traders instead generally negotiate through a medium of exchange, such as money. As a result, buying can be separated from selling, or earning. Because of specialization and the division of labor, most people concentrate on a small aspect of manufacturing or service, trading their labor for products. Trade exists between regions because different regions have an absolute or comparative advantage in the production of some tradable commodity, or because different regions' size allows for the benefits of mass production.
Economics is a social science which studies the production, distribution, trade, and consumption of goods and services. Economics focuses on measurable variables, and is broadly divided into two main branches: microeconomics, which deals with individual agents, such as households and businesses, and macroeconomics, which considers the economy as a whole, in which case it considers aggregate supply and demand for money, capital and commodities. Aspects receiving particular attention in economics are resource allocation, production, distribution, trade, and competition. Economic logic is increasingly applied to any problem that involves choice under scarcity or determining economic value.
War is a state of organized armed conflict between states or non-state actors. War is characterized by the use of lethal violence against others—whether between combatants or upon non-combatants—to achieve military goals through force. Lesser, often spontaneous conflicts, such as brawls, riots, revolts, and melees, are not considered to be warfare. Revolutions can be nonviolent, or can take the form of organized armed conflict that denotes a state of war. During the 20th century, it is estimated that between 167 and 188 million people died as a result of war.[285] A common definition describes war as a series of military campaigns between at least two opposing sides involving a dispute over sovereignty, territory, resources, religion, or other issues. A war between internal elements of a state is a civil war.
There have been a wide variety of rapidly advancing tactics throughout the history of war, ranging from conventional war to asymmetric warfare to total war and unconventional warfare. Techniques include hand to hand combat, the use of ranged weapons, naval warfare, and, more recently, air support. Military intelligence has often played a key role in determining victory and defeat. Propaganda, which often includes information, slanted opinion and disinformation, plays a key role both in maintaining unity within a warring group and in sowing discord among opponents. In modern warfare, soldiers and combat vehicles are used to control the land, warships the sea, and aircraft the sky. These fields have also overlapped in the forms of marines, paratroopers, aircraft carriers, and surface-to-air missiles, among others. Satellites in low Earth orbit have made outer space a factor in warfare as well through their use for detailed intelligence gathering; however, no known aggressive actions have been taken from space.
Stone tools were used by proto-humans at least 2.5 million years ago.[286] The controlled use of fire began around 1.5 million years ago. Since then, humans have made major advances, developing complex technology to create tools that aid their lives and allow for other advancements in culture. Major leaps in technology include the development of agriculture—known as the Neolithic Revolution—and the invention of automated machines in the Industrial Revolution.
Archaeology attempts to tell the story of past or lost cultures in part by close examination of the artifacts they produced. Early humans left stone tools, pottery, and jewelry that are particular to various regions and times.
Throughout history, humans have altered their appearance by wearing clothing[287] and adornments, by trimming or shaving hair or by means of body modifications.
Body modification is the deliberate altering of the human body for any non-medical reason, such as aesthetics, sexual enhancement, rites of passage, religious motives, displays of group membership or affiliation, body art, shock value, or self-expression.[288] In its broadest definition it includes plastic surgery, socially acceptable decoration (e.g. common ear piercing in many societies), and religious rites of passage (e.g. circumcision in a number of cultures).[288]
Philosophy is a discipline or field of study involving the investigation, analysis, and development of ideas at a general, abstract, or fundamental level. It is the discipline searching for a general understanding of reality, reasoning and values. Major fields of philosophy include logic, metaphysics, epistemology, philosophy of mind, and axiology (which includes ethics and aesthetics). Philosophy covers a very wide range of approaches, and is used to refer to a worldview, to a perspective on an issue, or to the positions argued for by a particular philosopher or school of philosophy.
Religion is generally defined as a belief system concerning the supernatural, sacred or divine, and practices, values, institutions and rituals associated with such belief. Some religions also have a moral code. The evolution and the history of the first religions have recently become areas of active scientific investigation.[289][290][291] However, in the course of its development, religion has taken on many forms that vary by culture and individual perspective. Some of the chief questions and issues religions are concerned with include life after death (commonly involving belief in an afterlife), the origin of life, the nature of the universe (religious cosmology) and its ultimate fate (eschatology), and what is moral or immoral. A common source for answers to these questions are beliefs in transcendent divine beings such as deities or a singular God, although not all religions are theistic. Spirituality, belief or involvement in matters of the soul or spirit, is one of the many different approaches humans take in trying to answer fundamental questions about humankind's place in the universe, the meaning of life, and the ideal way to live one's life. Though these topics have also been addressed by philosophy, and to some extent by science, spirituality is unique in that it focuses on mystical or supernatural concepts such as karma and God.
Although the exact level of religiosity can be hard to measure,[292] a majority of humans professes some variety of religious or spiritual belief, although many (in some countries a majority) are irreligious. This includes humans who have no religious beliefs or do not identify with any religion. Humanism is a philosophy which seeks to include all of humanity and all issues common to humans; it is usually non-religious. Most religions and spiritual beliefs are clearly distinct from science on both a philosophical and methodological level; the two are not generally considered mutually exclusive and a majority of humans hold a mix of both scientific and religious views. The distinction between philosophy and religion, on the other hand, is at times less clear, and the two are linked in such fields as the philosophy of religion and theology.
Humans have been producing art works for at least seventy-three thousand years.[293] Art may be defined as a form of cultural expression and the usage of narratives of liberation and exploration (i.e. art history, art criticism, and art theory) to mediate its boundaries. This distinction may be applied to objects or performances, current or historical, and its prestige extends to those who made, found, exhibit, or own them. In the modern use of the word, art is commonly understood to be the process or result of making material works that, from concept to creation, adhere to the "creative impulse" of human beings.
Music is a natural intuitive phenomenon based on the three distinct and interrelated organizational structures of rhythm, harmony, and melody. Listening to music is perhaps the most common and universal form of entertainment, while learning and understanding it are popular disciplines. There are a wide variety of music genres and ethnic musics. Literature, the body of written—and possibly oral—works, especially creative ones, includes prose, poetry and drama, both fiction and non-fiction. Literature includes such genres as epic, legend, myth, ballad, and folklore.
Another unique aspect of human culture and thought is the development of complex methods for acquiring knowledge through observation, quantification, and verification.[294][295] The scientific method has been developed to acquire knowledge of the physical world and the rules, processes and principles of which it consists, and combined with mathematics it enables the prediction of complex patterns of causality and consequence. An understanding of mathematics is unique to humans, although other species of animal have some numerical cognition.[296]
All of science can be divided into three major branches: the formal sciences (e.g., logic and mathematics), which are concerned with formal systems; the applied sciences (e.g., engineering, medicine), which are focused on practical applications; and the empirical sciences, which are based on empirical observation and are in turn divided into natural sciences (e.g., physics, chemistry, biology) and social sciences (e.g., psychology, economics, sociology).[297] A pseudoscience is an activity or a teaching which is mistakenly regarded as being scientific by its major proponents.[298]
en/2612.html.txt
ADDED
@@ -0,0 +1,218 @@
1 |
+
|
2 |
+
|
3 |
+
|
4 |
+
|
5 |
+
Humans (Homo sapiens) are highly intelligent primates that have become the dominant species on Earth. They are the only extant members of the subtribe Hominina and together with chimpanzees, gorillas, and orangutans, they are part of the family Hominidae (the great apes, or hominids). Humans are terrestrial animals, characterized by their erect posture and bipedal locomotion; high manual dexterity and heavy tool use compared to other animals; open-ended and complex language use compared to other animal communications; larger, more complex brains than other primates; and highly advanced and organized societies.[3][4]
Early hominins—particularly the australopithecines, whose brains and anatomy are in many ways more similar to ancestral non-human apes—are less often referred to as "human" than hominins of the genus Homo.[5] Several of these hominins used fire, occupied much of Eurasia, and the lineage that later gave rise to Homo sapiens is thought to have diverged in Africa from other known hominins around 500,000 years ago, with the earliest fossil evidence of Homo sapiens appearing (also in Africa) around 300,000 years ago.[6] The oldest early H. sapiens fossils were found in Jebel Irhoud, Morocco, dating to about 315,000 years ago.[7][8][9][10][11] As of 2017[update], the oldest known skeleton of an anatomically modern Homo sapiens is Omo-Kibish I, dated to about 196,000 years ago and discovered in southern Ethiopia.[12][13][14] Humans began to exhibit evidence of behavioral modernity at least by about 100,000–70,000 years ago[15][16][17][18][19][20] and (according to recent evidence) as far back as around 300,000 years ago, in the Middle Stone Age.[21][22][23] In several waves of migration, H. sapiens ventured out of Africa and populated most of the world.[24][25]
The spread of the large and increasing population of humans has profoundly affected much of the biosphere and millions of species worldwide. Advantages that explain this evolutionary success include a larger brain with a well-developed neocortex, prefrontal cortex and temporal lobes, which enable advanced abstract reasoning, language, problem solving, sociality, and culture through social learning. Humans use tools more frequently and effectively than any other animal: they are the only extant species to build fires, cook food, clothe themselves, and create and use numerous other technologies and arts.
Humans uniquely use such systems of symbolic communication as language and art to express themselves and exchange ideas, and also organize themselves into purposeful groups. Humans create complex social structures composed of many cooperating and competing groups, from families and kinship networks to political states. Social interactions between humans have established an extremely wide variety of values,[26] social norms, and rituals, which together undergird human society. Curiosity and the human desire to understand and influence the environment and to explain and manipulate phenomena (or events) have motivated humanity's development of science, philosophy, mythology, religion, and numerous other fields of knowledge.
Though most of human existence has been sustained by hunting and gathering in band societies,[27] many human societies transitioned to sedentary agriculture approximately 10,000 years ago,[28] domesticating plants and animals, thus enabling the growth of civilization. These human societies subsequently expanded, establishing various forms of government, religion, and culture around the world, and unifying people within regions to form states and empires. The rapid advancement of scientific and medical understanding in the 19th and 20th centuries permitted the development of fuel-driven technologies and increased lifespans, causing the human population to rise exponentially. The global human population was estimated to be near 7.8 billion in 2019.[29]
In common usage, the word "human" generally refers to the only extant species of the genus Homo—anatomically and behaviorally modern Homo sapiens.
In scientific terms, the meanings of "hominid" and "hominin" have changed during the recent decades with advances in the discovery and study of the fossil ancestors of modern humans. The previously clear boundary between humans and apes has blurred, resulting in biologists now acknowledging the hominids as encompassing multiple species, and Homo and close relatives since the split from chimpanzees as the only hominins. There is also a distinction between anatomically modern humans and Archaic Homo sapiens, the earliest fossil members of the species.
The English adjective human is a Middle English loanword from Old French humain, ultimately from Latin hūmānus, the adjective form of homō "man." The word's use as a noun (with a plural: humans) dates to the 16th century.[30] The native English term man can refer to the species generally (a synonym for humanity) as well as to human males, or individuals of either sex (though this latter form is less common in contemporary English).[31]
The species binomial "Homo sapiens" was coined by Carl Linnaeus in his 18th-century work Systema Naturae.[32] The generic name "Homo" is a learned 18th-century derivation from Latin homō "man," ultimately "earthly being" (Old Latin hemō, a cognate of Old English guma "man", from PIE dʰǵʰemon-, meaning "earth" or "ground").[33] The species-name "sapiens" means "wise" or "sapient". Note that the Latin word homo refers to humans of either sex, and that "sapiens" is the singular form (while there is no such word as "sapien").[34]
The genus Homo evolved and diverged from other hominins in Africa, after the human clade split from the chimpanzee lineage of the hominids (great apes) branch of the primates. Modern humans, defined as the species Homo sapiens or, more specifically, as the single extant subspecies Homo sapiens sapiens, proceeded to colonize all the continents and larger islands, arriving in Eurasia 125,000–60,000 years ago,[35][36] Australia around 40,000 years ago, the Americas around 15,000 years ago, and remote islands such as Hawaii, Easter Island, Madagascar, and New Zealand between the years 300 and 1280.[37][38]
The closest living relatives of humans are chimpanzees and bonobos (genus Pan)[39][40] and gorillas (genus Gorilla).[41] With the sequencing of human and chimpanzee genomes, current estimates of similarity between human and chimpanzee DNA sequences range between 95% and 99%.[41][42][43] By using the technique called a molecular clock, which estimates the time required for a given number of divergent mutations to accumulate between two lineages, the approximate date for the split between lineages can be calculated. The gibbons (family Hylobatidae) and orangutans (genus Pongo) were the first groups to split from the line leading to humans, then gorillas, followed by the chimpanzees. The splitting date between the human and chimpanzee lineages is placed around 4–8 million years ago, during the late Miocene epoch.[44][45] During this split, chromosome 2 was formed from two other chromosomes, leaving humans with only 23 pairs of chromosomes, compared to 24 for the other apes.[46]
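
As a rough illustration of the molecular-clock arithmetic described above, the short Python sketch below converts an observed sequence divergence into an estimated split date. Both input values are illustrative assumptions chosen to land inside the 4–8 million year range quoted in the text, not measured figures.

# Minimal molecular-clock sketch. Both inputs are assumptions for the
# example; real analyses calibrate the rate against fossil evidence.
substitution_rate = 1.0e-9   # substitutions per site per year, per lineage (assumed)
sequence_divergence = 0.012  # fraction of differing sites between two species (assumed)

# Mutations accumulate along both lineages since the split, hence the factor of 2.
split_time_years = sequence_divergence / (2 * substitution_rate)
print(f"Estimated split: {split_time_years / 1e6:.0f} million years ago")  # ~6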
There is little fossil evidence for the divergence of the gorilla, chimpanzee and hominin lineages.[47][48] The earliest fossils that have been proposed as members of the hominin lineage are Sahelanthropus tchadensis, dating from 7 million years ago; Orrorin tugenensis, dating from 5.7 million years ago; and Ardipithecus kadabba, dating to 5.6 million years ago. Each of these species has been argued to be a bipedal ancestor of later hominins, but all such claims are contested. It is also possible that any one of the three is an ancestor of another branch of African apes, or is an ancestor shared between hominins and other African Hominoidea (apes). The question of the relation between these early fossil species and the hominin lineage is still to be resolved. From these early species, the australopithecines arose around 4 million years ago and diverged into robust (also called Paranthropus) and gracile branches,[49] possibly one of which (such as A. garhi, dating to 2.5 million years ago) is a direct ancestor of the genus Homo.[50]
The earliest members of the genus Homo are Homo habilis, which evolved around 2.8 million years ago.[51] Homo habilis has been considered the first species for which there is clear evidence of the use of stone tools; in 2015, however, stone tools perhaps predating Homo habilis were discovered in northwestern Kenya that have been dated to 3.3 million years old.[52] Nonetheless, the brains of Homo habilis were about the same size as that of a chimpanzee, and their main adaptation was bipedalism, suited to terrestrial living. During the next million years a process of encephalization began, and with the arrival of Homo erectus in the fossil record, cranial capacity had doubled. Homo erectus were the first hominins to leave Africa, and these species spread through Africa, Asia, and Europe between 1.8 and 1.3 million years ago. One population of H. erectus, also sometimes classified as a separate species Homo ergaster, stayed in Africa and evolved into Homo sapiens. It is believed that these species were the first to use fire and complex tools. The earliest transitional fossils between H. ergaster/erectus and archaic humans are from Africa, such as Homo rhodesiensis, but seemingly transitional forms are also found at Dmanisi, Georgia. These descendants of African H. erectus spread through Eurasia from c. 500,000 years ago, evolving into H. antecessor, H. heidelbergensis and H. neanderthalensis. Fossils of anatomically modern humans date from the Middle Paleolithic, at about 200,000 years ago, and include the Omo remains of Ethiopia and the fossils of Herto, sometimes classified as Homo sapiens idaltu, also from Ethiopia. Earlier remains, now classified as early Homo sapiens, such as the Jebel Irhoud remains from Morocco and the Florisbad Skull from South Africa, are now known to date to about 300,000 and 259,000 years old respectively.[53][6][54][55][56][57] Later fossils of archaic Homo sapiens from Skhul in Israel and Southern Europe begin around 90,000 years ago.[58]
Human evolution is characterized by a number of morphological, developmental, physiological, and behavioral changes that have taken place since the split between the last common ancestor of humans and chimpanzees. The most significant of these adaptations are 1. bipedalism, 2. increased brain size, 3. lengthened ontogeny (gestation and infancy), 4. decreased sexual dimorphism (neoteny). The relationship between all these changes is the subject of ongoing debate.[59] Other significant morphological changes included the evolution of a power and precision grip, a change first occurring in H. erectus.[60]
Bipedalism is the basic adaptation of the hominin line, and it is considered the main cause behind a suite of skeletal changes shared by all bipedal hominins. The earliest bipedal hominin is considered to be either Sahelanthropus[61] or Orrorin, with Ardipithecus, a fully bipedal hominin,[62] coming somewhat later.[citation needed] The knuckle walkers, the gorilla and chimpanzee, diverged around the same time, and either Sahelanthropus or Orrorin may be humans' last shared ancestor with those animals.[citation needed] The early bipedals eventually evolved into the australopithecines and later the genus Homo.[citation needed] There are several theories of the adaptational value of bipedalism. It is possible that bipedalism was favored because it freed up the hands for reaching and carrying food, because it saved energy during locomotion, because it enabled long-distance running and hunting, or as a strategy for avoiding hyperthermia by reducing the surface exposed to direct sun.[citation needed]
The human species developed a much larger brain than that of other primates—typically 1,330 cm3 (81 cu in) in modern humans, over twice the size of that of a chimpanzee or gorilla.[63] The pattern of encephalization started with Homo habilis, which at approximately 600 cm3 (37 cu in) had a brain slightly larger than chimpanzees, and continued with Homo erectus (800–1,100 cm3 (49–67 cu in)), and reached a maximum in Neanderthals with an average size of 1,200–1,900 cm3 (73–116 cu in), larger even than Homo sapiens (but less encephalized).[64] The pattern of human postnatal brain growth differs from that of other apes (heterochrony), and allows for extended periods of social learning and language acquisition in juvenile humans. However, the differences between the structure of human brains and those of other apes may be even more significant than differences in size.[65][66][67][68] The increase in volume over time has affected different areas within the brain unequally—the temporal lobes, which contain centers for language processing, have increased disproportionately, as has the prefrontal cortex, which has been related to complex decision making and moderating social behavior.[63] Encephalization has been tied to an increasing emphasis on meat in the diet,[69][70] or to the development of cooking,[71] and it has been proposed[72] that intelligence increased as a response to an increased necessity for solving social problems as human society became more complex.
The reduced degree of sexual dimorphism is primarily visible in the reduction of the male canine tooth relative to other ape species (except gibbons). Another important physiological change related to sexuality in humans was the evolution of hidden estrus. Humans are the only ape in which the female is intermittently fertile year round, and in which no special signals of fertility are produced by the body (such as genital swelling during estrus). Nonetheless humans retain a degree of sexual dimorphism in the distribution of body hair and subcutaneous fat, and in the overall size, males being around 25% larger than females. These changes taken together have been interpreted as a result of an increased emphasis on pair bonding as a possible solution to the requirement for increased parental investment due to the prolonged infancy of offspring.[citation needed]
By the beginning of the Upper Paleolithic period (50,000 BP), and likely significantly earlier, by 100,000–70,000 years ago[17][18][15][16][19][20] or possibly by about 300,000 years ago,[23][22][21] behavioral modernity, including language, music and other cultural universals, had developed.[73][74]

As early Homo sapiens dispersed, it encountered varieties of archaic humans both in Africa and in Eurasia, notably Homo neanderthalensis in Eurasia. Since 2010, evidence for gene flow between archaic and modern humans during the period of roughly 100,000 to 30,000 years ago has been discovered. This includes modern human admixture in Neanderthals, Neanderthal admixture in all modern humans outside Africa,[75][76] Denisova hominin admixture in Melanesians,[77] and admixture from unnamed archaic humans to some Sub-Saharan African populations.[78]

The "out of Africa" migration of Homo sapiens took place in at least two waves: the first around 130,000 to 100,000 years ago, the second (the Southern Dispersal) around 70,000 to 50,000 years ago,[79][80][81] resulting in the colonization of Australia around 65,000–50,000 years ago.[82][83][84] This recent out-of-Africa migration derived from East African populations, which had become separated from populations migrating to Southern, Central and Western Africa at least 100,000 years earlier.[85] Modern humans subsequently spread globally, replacing archaic humans (either through competition or hybridization). They inhabited Eurasia and Oceania by 40,000 years ago, and the Americas at least 14,500 years ago.[86][87]
Until about 12,000 years ago (the beginning of the Holocene), all humans lived as hunter-gatherers, generally in small nomadic groups known as band societies, often in caves.
The Neolithic Revolution (the invention of agriculture) took place beginning about 10,000 years ago, first in the Fertile Crescent, spreading through large parts of the Old World over the following millennia, and independently in Mesoamerica about 6,000 years ago. Access to food surplus led to the formation of permanent human settlements, the domestication of animals and the use of metal tools for the first time in history.
Agriculture and a sedentary lifestyle led to the emergence of early civilizations (urban development, complex societies, social stratification and writing) from about 5,000 years ago (the Bronze Age), first beginning in Mesopotamia.[88]
Few human populations progressed to historicity, with substantial parts of the world remaining in a Neolithic, Mesolithic or Upper Paleolithic stage of development until the advent of globalisation and modernity initiated by European exploration and colonialism.
The Scientific Revolution, the Technological Revolution and the Industrial Revolution brought such advances as imaging technology, major innovations in transport (such as the airplane and automobile), and energy development (such as coal and electricity).[89] These advances correlate with population growth (especially in America)[90] and higher life expectancy: the world population multiplied rapidly in the 19th and 20th centuries, and nearly 10% of the 100 billion people who have ever lived were alive in the past century.[91]
With the advent of the Information Age at the end of the 20th century, modern humans live in a world that has become increasingly globalized and interconnected. As of 2010[update], almost 2 billion humans are able to communicate with each other via the Internet,[92] and 3.3 billion by mobile phone subscriptions.[93] Although connection between humans has encouraged the growth of science, art, discussion, and technology, it has also led to culture clashes and the development and use of weapons of mass destruction.[citation needed] Human population growth and industrialisation have led to environmental destruction and pollution, significantly contributing to the ongoing mass extinction of other forms of life known as the Holocene extinction event,[94] which may be further accelerated by global warming in the future.[95]
Early human settlements were dependent on proximity to water and, depending on the lifestyle, other natural resources used for subsistence, such as populations of animal prey for hunting and arable land for growing crops and grazing livestock. Modern humans, however, have a great capacity for altering their habitats by means of technology, through irrigation, urban planning, construction, transport, manufacturing goods, deforestation and desertification,[96] though human settlements continue to be vulnerable to natural disasters, especially those placed in hazardous locations and characterized by poor construction quality.[97] Deliberate habitat alteration is often done with the goals of increasing material wealth, increasing thermal comfort, improving the amount of food available, improving aesthetics, or improving ease of access to resources or other human settlements. With the advent of large-scale trade and transport infrastructure, proximity to these resources has become unnecessary, and in many places these factors are no longer a driving force behind the growth and decline of a population. Nonetheless, the manner in which a habitat is altered is often a major determinant in population change.[citation needed]
Technology has allowed humans to colonize six of the Earth's seven continents and adapt to virtually all climates. However, the human population is not uniformly distributed on the Earth's surface: population density varies from one region to another, and large areas, such as Antarctica, are almost completely uninhabited.[98][99] Within the last century, humans have explored Antarctica, the underwater environment, and outer space, although large-scale colonization of these environments is not yet feasible. With a population of over seven billion, humans are among the most numerous of the large mammals. Most humans (61%) live in Asia. The remainder live in the Americas (14%), Africa (14%), Europe (11%), and Oceania (0.5%).[100]
Human habitation within closed ecological systems in hostile environments, such as Antarctica and outer space, is expensive, typically limited in duration, and restricted to scientific, military, or industrial expeditions. Life in space has been very sporadic, with no more than thirteen humans in space at any given time.[101] Between 1969 and 1972, two humans at a time spent brief intervals on the Moon. As of July 2020, no other celestial body has been visited by humans, although there has been a continuous human presence in space since the launch of the initial crew to inhabit the International Space Station on 31 October 2000.[102] However, other celestial bodies have been visited by human-made objects.[103][104][105]
Since 1800, the human population has increased from one billion[106] to over seven billion.[107] The combined carbon biomass of all the humans on Earth in 2018 was estimated at roughly 60 million tons, about 10 times larger than that of all non-domesticated mammals.[108]
In 2004, some 2.5 billion out of 6.3 billion people (39.7%) lived in urban areas. In February 2008, the U.N. estimated that half the world's population would live in urban areas by the end of the year.[109] Problems for humans living in cities include various forms of pollution and crime,[110] especially in inner city and suburban slums. Both overall population numbers and the proportion residing in cities are expected to increase significantly in the coming decades.[111]
Humans have had a dramatic effect on the environment. Humans are apex predators, being rarely preyed upon by other species.[112] Currently, through land development, combustion of fossil fuels, and pollution, humans are thought to be the main contributor to global climate change.[113] If this continues at its current rate it is predicted that climate change will wipe out half of all plant and animal species over the next century.[114][115]
Most aspects of human physiology are closely homologous to corresponding aspects of animal physiology. The human body consists of the legs, the torso, the arms, the neck, and the head. An adult human body consists of about 100 trillion (10¹⁴) cells. The most commonly defined body systems in humans are the nervous, the cardiovascular, the circulatory, the digestive, the endocrine, the immune, the integumentary, the lymphatic, the musculoskeletal, the reproductive, the respiratory, and the urinary system.[116][117]
Humans, like most of the other apes, lack external tails, have several blood type systems, have opposable thumbs, and are sexually dimorphic. The comparatively minor anatomical differences between humans and chimpanzees are largely a result of human bipedalism and larger brain size. One difference is that humans have a far faster and more accurate throw than other animals. Humans are also among the best long-distance runners in the animal kingdom, but slower over short distances.[118][119] Humans' thinner body hair and more productive sweat glands help avoid heat exhaustion while running for long distances.[120]
As a consequence of bipedalism, human females have narrower birth canals. The construction of the human pelvis differs from other primates, as do the toes. A trade-off for these advantages of the modern human pelvis is that childbirth is more difficult and dangerous than in most mammals, especially given the larger head size of human babies compared to other primates. Human babies must turn around as they pass through the birth canal while other primates do not, which makes humans the only species where females usually require help from their conspecifics (other members of their own species) to reduce the risks of birthing. As a partial evolutionary solution, human fetuses are born less developed and more vulnerable. Chimpanzee babies are cognitively more developed than human babies until the age of six months, when the rapid development of human brains surpasses chimpanzees. Another difference between women and chimpanzee females is that women go through the menopause and become infertile decades before the end of their lives. All species of non-human apes are capable of giving birth until death. Menopause probably developed as it provides an evolutionary advantage of more caring time to relatives' young.[119]
Apart from bipedalism, humans differ from chimpanzees mostly in smelling, hearing, digesting proteins, brain size, and the capacity for language. Humans' brains are about three times bigger than those of chimpanzees. More importantly, the brain-to-body ratio is much higher in humans than in chimpanzees, and humans have a significantly more developed cerebral cortex, with a larger number of neurons. The mental abilities of humans are remarkable compared to other apes. The human capacity for speech is unique among primates. Humans are able to create new and complex ideas, and to develop technology, which is unprecedented among other organisms on Earth.[119]
It is estimated that the worldwide average height for an adult human male is about 172 cm (5 ft 7 1⁄2 in),[citation needed] while the worldwide average height for adult human females is about 158 cm (5 ft 2 in).[citation needed] Shrinkage of stature may begin in middle age in some individuals, but tends to be typical in the extremely aged.[121] Through history, human populations have universally become taller, probably as a consequence of better nutrition, healthcare, and living conditions.[122] The average mass of an adult human is 54–64 kg (119–141 lb) for females and 70–83 kg (154–183 lb) for males.[123][124] Like many other conditions, body weight and body type are influenced by both genetic susceptibility and environment, and vary greatly among individuals (see obesity).[125][126]
Humans have a density of hair follicles comparable to other apes. However, human body hair is vellus hair, most of which is so short and wispy as to be practically invisible. In contrast (and unusually among species), a follicle of terminal hair on the human scalp can grow for many years before falling out.[127][128] Humans have about 2 million sweat glands spread over their entire bodies, many more than chimpanzees, whose sweat glands are scarce and are mainly located on the palm of the hand and on the soles of the feet.[129] Humans have the largest number of eccrine sweat glands among species.
The dental formula of humans is 2.1.2.3 / 2.1.2.3 (upper / lower). Humans have proportionately shorter palates and much smaller teeth than other primates. They are the only primates to have short, relatively flush canine teeth. Humans have characteristically crowded teeth, with gaps from lost teeth usually closing up quickly in young individuals. Humans are gradually losing their third molars, with some individuals having them congenitally absent.[130]
Like all animals, humans are a diploid eukaryotic species. Each somatic cell has two sets of 23 chromosomes, each set received from one parent; gametes have only one set of chromosomes, which is a mixture of the two parental sets. Among the 23 pairs of chromosomes there are 22 pairs of autosomes and one pair of sex chromosomes. Like other mammals, humans have an XY sex-determination system, so that females have the sex chromosomes XX and males have XY.[131]
A rough and incomplete human genome, assembled as a composite of a number of individuals, was completed in 2003, and efforts are currently being made to achieve a sample of the genetic diversity of the species (see International HapMap Project). By present estimates, humans have approximately 22,000 genes.[132] The variation in human DNA is very small compared to other species, possibly suggesting a population bottleneck during the Late Pleistocene (around 100,000 years ago), in which the human population was reduced to a small number of breeding pairs.[133][134] Nucleotide diversity is based on single mutations called single nucleotide polymorphisms (SNPs). The nucleotide diversity between humans is about 0.1%, i.e. 1 difference per 1,000 base pairs.[135][136] A difference of 1 in 1,000 nucleotides between two humans chosen at random amounts to about 3 million nucleotide differences, since the human genome has about 3 billion nucleotides. Most of these SNPs are neutral, but some (about 3 to 5%) are functional and influence phenotypic differences between humans through alleles.[citation needed]
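
The pairwise-difference figure above follows from simple arithmetic; here is a minimal Python sketch using the approximate genome size and diversity values quoted in the text:

# Back-of-the-envelope check of the figure quoted above: ~0.1% nucleotide
# diversity over a ~3-billion-base genome gives ~3 million differences.
genome_length_bp = 3_000_000_000  # approximate human genome size
nucleotide_diversity = 0.001      # ~1 difference per 1,000 base pairs

expected_differences = genome_length_bp * nucleotide_diversity
print(f"Expected pairwise differences: {expected_differences:,.0f}")  # 3,000,000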
By comparing the parts of the genome that are not under natural selection and which therefore accumulate mutations at a fairly steady rate, it is possible to reconstruct a genetic tree incorporating the entire human species since the last shared ancestor. Each time a certain mutation (SNP) appears in an individual and is passed on to his or her descendants, a haplogroup is formed including all of the descendants of the individual who will also carry that mutation. By comparing mitochondrial DNA, which is inherited only from the mother, geneticists have concluded that the last female common ancestor whose genetic marker is found in all modern humans, the so-called mitochondrial Eve, must have lived around 90,000 to 200,000 years ago.[137][138][139]
Human accelerated regions, first described in August 2006,[140][141] are a set of 49 segments of the human genome that are conserved throughout vertebrate evolution but are strikingly different in humans. They are named according to their degree of difference between humans and their nearest animal relative (chimpanzees) (HAR1 showing the largest degree of human-chimpanzee differences). Found by scanning through genomic databases of multiple species, some of these highly mutated areas may contribute to human-specific traits.[citation needed]
The forces of natural selection have continued to operate on human populations, with evidence that certain regions of the genome display directional selection in the past 15,000 years.[142]
As with other mammals, human reproduction takes place by internal fertilization via sexual intercourse. During this process, the male inserts his erect penis into the female's vagina and ejaculates semen, which contains sperm. The sperm travels through the vagina and cervix into the uterus or Fallopian tubes for fertilization of the ovum. Upon fertilization and implantation, gestation then occurs within the female's uterus.
The zygote divides inside the female's uterus to become an embryo, which over a period of 38 weeks (9 months) of gestation becomes a fetus. After this span of time, the fully grown fetus is birthed from the woman's body and breathes independently as an infant for the first time. At this point, most modern cultures recognize the baby as a person entitled to the full protection of the law, though some jurisdictions extend various levels of personhood earlier to human fetuses while they remain in the uterus.
Compared with other species, human childbirth is dangerous. Painful labors lasting 24 hours or more are not uncommon and sometimes lead to the death of the mother, the child or both.[143] This is because of both the relatively large fetal head circumference and the mother's relatively narrow pelvis.[144][145] The chances of a successful labor increased significantly during the 20th century in wealthier countries with the advent of new medical technologies. In contrast, pregnancy and natural childbirth remain hazardous ordeals in developing regions of the world, with maternal death rates approximately 100 times greater than in developed countries.[146]
In developed countries, infants are typically 3–4 kg (7–9 lb) in weight and 50–60 cm (20–24 in) in height at birth.[147][failed verification] However, low birth weight is common in developing countries, and contributes to the high levels of infant mortality in these regions.[148] Helpless at birth, humans continue to grow for some years, typically reaching sexual maturity at 12 to 15 years of age. Females continue to develop physically until around the age of 18, whereas male development continues until around age 21. The human life span can be split into a number of stages: infancy, childhood, adolescence, young adulthood, adulthood and old age. The lengths of these stages, however, have varied across cultures and time periods. Compared to other primates, humans experience an unusually rapid growth spurt during adolescence, where the body grows 25% in size. Chimpanzees, for example, grow only 14%, with no pronounced spurt.[149] The presence of the growth spurt is probably necessary to keep children physically small until they are psychologically mature. Humans are one of the few species in which females undergo menopause. It has been proposed that menopause increases a woman's overall reproductive success by allowing her to invest more time and resources in her existing offspring, and in turn their children (the grandmother hypothesis), rather than by continuing to bear children into old age.[150][151]
Evidence-based studies indicate that the life span of an individual depends on two major factors, genetics and lifestyle choices.[152] For various reasons, including biological/genetic causes,[153] women live on average about four years longer than men—as of 2013[update] the global average life expectancy at birth of a girl is estimated at 70.2 years compared to 66.1 for a boy.[154] There are significant geographical variations in human life expectancy, mostly correlated with economic development—for example life expectancy at birth in Hong Kong is 84.8 years for girls and 78.9 for boys, while in Swaziland, primarily because of AIDS, it is 31.3 years for both sexes.[155] The developed world is generally aging, with the median age around 40 years. In the developing world the median age is between 15 and 20 years. While one in five Europeans is 60 years of age or older, only one in twenty Africans is 60 years of age or older.[156] The number of centenarians (humans of age 100 years or older) in the world was estimated by the United Nations at 210,000 in 2002.[157] Jeanne Calment is widely believed to have reached the age of 122;[158] higher ages have been claimed but are unsubstantiated.
Humans are omnivorous, capable of consuming a wide variety of plant and animal material.[159][160] Varying with available food sources in regions of habitation, and also varying with cultural and religious norms, human groups have adopted a range of diets, from purely vegan to primarily carnivorous. In some cases, dietary restrictions in humans can lead to deficiency diseases; however, stable human groups have adapted to many dietary patterns through both genetic specialization and cultural conventions to use nutritionally balanced food sources.[161] The human diet is prominently reflected in human culture, and has led to the development of food science.
Until the development of agriculture approximately 10,000 years ago, Homo sapiens employed a hunter-gatherer method as their sole means of food collection. This involved combining stationary food sources (such as fruits, grains, tubers, and mushrooms, insect larvae and aquatic mollusks) with wild game, which had to be hunted and killed in order to be consumed.[162] It has been proposed that humans have used fire to prepare and cook food since the time of Homo erectus.[163] Around ten thousand years ago, humans developed agriculture,[164] which substantially altered their diet. This change in diet may also have altered human biology: the spread of dairy farming provided a new and rich source of food, leading to the evolution of the ability to digest lactose in some adults.[165][166] Agriculture led to increased populations, the development of cities, and, because of increased population density, the wider spread of infectious diseases. The types of food consumed, and the way in which they are prepared, have varied widely by time, location, and culture.
In general, humans can survive for two to eight weeks without food, depending on stored body fat. Survival without water is usually limited to three or four days. About 36 million humans die every year from causes directly or indirectly related to starvation.[167] Childhood malnutrition is also common and contributes to the global burden of disease.[168] However, global food distribution is not even, and obesity among some human populations has increased rapidly, leading to health complications and increased mortality in some developed and a few developing countries. Worldwide, over one billion people are obese,[169] while in the United States 35% of people are obese, leading to this being described as an "obesity epidemic."[170] Obesity is caused by consuming more calories than are expended, so excessive weight gain is usually caused by an energy-dense diet.[169]
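
As a crude illustration of the energy-balance statement above, the sketch below applies the common rule-of-thumb figure of roughly 7,700 kcal of stored energy per kilogram of body fat; that constant is an assumption for the example (real physiology is far more complex), not a figure from the text.

# Crude energy-balance sketch: a sustained caloric surplus accumulates as
# body fat. The 7,700 kcal/kg constant is a widely used rule of thumb,
# assumed here purely for illustration.
KCAL_PER_KG_FAT = 7700

def weight_change_kg(daily_surplus_kcal: float, days: int) -> float:
    """Estimate body-fat change from a constant daily caloric surplus."""
    return daily_surplus_kcal * days / KCAL_PER_KG_FAT

# e.g. a 250 kcal/day surplus sustained for one year:
print(f"{weight_change_kg(250, 365):.1f} kg")  # ~11.9 kg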
No two humans—not even monozygotic twins—are genetically identical. Genes and environment influence human biological variation in visible characteristics, physiology, disease susceptibility and mental abilities. The exact influence of genes and environment on certain traits is not well understood.[171][172]
Most current genetic and archaeological evidence supports a recent single origin of modern humans in East Africa,[173] with first migrations placed at 60,000 years ago. Compared to the great apes, human gene sequences—even among African populations—are remarkably homogeneous.[174] On average, genetic similarity between any two humans is 99.5%-99.9%.[175][176][177][178][179][180] There is about 2–3 times more genetic diversity within the wild chimpanzee population than in the entire human gene pool.[181][182][183]
The human body's ability to adapt to different environmental stresses is remarkable, allowing humans to acclimatize to a wide variety of temperatures, humidity, and altitudes. As a result, humans are a cosmopolitan species found in almost all regions of the world, including tropical rainforest, arid desert, extremely cold arctic regions, and heavily polluted cities. Most other species are confined to a few geographical areas by their limited adaptability.[184]
There is biological variation in the human species—with traits such as blood type, genetic diseases, cranial features, facial features, organ systems, eye color, hair color and texture, height and build, and skin color varying across the globe. Human body types vary substantially. The typical height of an adult human is between 1.4 and 1.9 m (4 ft 7 in and 6 ft 3 in), although this varies significantly depending, among other things, on sex, ethnic origin,[185][186] and even family bloodlines. Body size is partly determined by genes and is also significantly influenced by environmental factors such as diet, exercise, and sleep patterns, especially as an influence in childhood. Adult height for each sex in a particular ethnic group approximately follows a normal distribution. Those aspects of genetic variation that give clues to human evolutionary history, or are relevant to medical research, have received particular attention. For example, the genes that allow adult humans to digest lactose are present in high frequencies in populations that have long histories of cattle domestication, suggesting that natural selection has favored that gene in populations that depend on cow milk. Some hereditary diseases such as sickle cell anemia are frequent in populations where malaria has been endemic throughout history—it is believed that the same gene gives increased resistance to malaria among those who are unaffected carriers of the gene. Similarly, populations that have for a long time inhabited specific climates, such as arctic or tropical regions or high altitudes, tend to have developed specific phenotypes that are beneficial for conserving energy in those environments—short stature and stocky build in cold regions, tall and lanky in hot regions, and with high lung capacities at high altitudes. Some populations have evolved highly distinctive adaptations to very specific environmental conditions, such as those advantageous to ocean-dwelling lifestyles and freediving in the Bajau.[187] Skin color tends to vary clinally, with darker skin mostly around the equator—where the added protection from the sun's ultraviolet radiation is thought to give an evolutionary advantage—and lighter skin tones closer to the poles.[188][189][190][191]
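
Since the text notes that adult height within a sex approximately follows a normal distribution, here is a small Python sketch of how such a model behaves; the mean matches the worldwide male average quoted earlier, while the standard deviation of 7 cm is an assumed round number, not survey data.

import statistics
from random import gauss, seed

# Illustrative normal model of adult male height; the sd is an assumption.
MEAN_CM, SD_CM = 172, 7

seed(0)
sample = [gauss(MEAN_CM, SD_CM) for _ in range(10_000)]
print(f"sample mean:  {statistics.mean(sample):.1f} cm")
print(f"sample stdev: {statistics.stdev(sample):.1f} cm")
# For a normal distribution, ~68% of values fall within one sd of the mean.
share = sum(abs(h - MEAN_CM) <= SD_CM for h in sample) / len(sample)
print(f"within 1 sd:  {share:.0%}")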
The hue of human skin and hair is determined by the presence of pigments called melanins. Human skin color can range from darkest brown to lightest peach, or even nearly white or colorless in cases of albinism.[183] Human hair ranges in color from white to red to blond to brown to black, which is most frequent.[192] Hair color depends on the amount of melanin (an effective sun-blocking pigment) in the skin and hair, with melanin concentrations in hair fading with increased age, leading to grey or even white hair. Most researchers believe that skin darkening is an adaptation that evolved as protection against ultraviolet solar radiation and that also helps preserve folate, which is destroyed by ultraviolet radiation. Light skin pigmentation protects against depletion of vitamin D, which requires sunlight to make.[193] Skin pigmentation of contemporary humans is clinally distributed across the planet, and in general correlates with the level of ultraviolet radiation in a particular geographic area. Human skin also has a capacity to darken (tan) in response to exposure to ultraviolet radiation.[194][195][196]
Within the human species, the greatest degree of genetic variation exists between males and females. While the nucleotide genetic variation of individuals of the same sex across global populations is no greater than 0.1%-0.5%, the genetic difference between males and females is between 1% and 2%. The genetic difference between sexes contributes to anatomical, hormonal, neural, and physiological differences between men and women, although the exact degree and nature of social and environmental influences on sexes are not completely understood. Males on average are 15% heavier and 15 cm (6 in) taller than females. There is a difference between body types, body organs and systems, hormonal levels, sensory systems, and muscle mass between sexes. On average, men have about 40–50% more upper body strength and 20–30% more lower body strength than women. Women generally have a higher body fat percentage than men. Women have lighter skin than men of the same population; this has been explained by a higher need for vitamin D (which is synthesized by sunlight) in females during pregnancy and lactation. As there are chromosomal differences between females and males, some X and Y chromosome related conditions and disorders only affect either men or women. Other conditional differences between males and females are not related to sex chromosomes. Even after allowing for body weight and volume, the male voice is usually an octave deeper than the female voice. Women have a longer life span in almost every population around the world.[197][198][199][200][201][202][203][204][205]
Males typically have larger tracheae and branching bronchi, with about 30% greater lung volume per unit body mass. They have larger hearts, 10% higher red blood cell count, and higher hemoglobin, hence greater oxygen-carrying capacity. They also have higher circulating clotting factors (vitamin K, prothrombin and platelets). These differences lead to faster healing of wounds and higher peripheral pain tolerance.[206] Females typically have more white blood cells (stored and circulating), more granulocytes and B and T lymphocytes. Additionally, they produce more antibodies at a faster rate than males. Hence they develop fewer infectious diseases and these continue for shorter periods.[206] Ethologists argue that females, interacting with other females and multiple offspring in social groups, have experienced such traits as a selective advantage.[207][208][209][210][211] According to Daly and Wilson, "The sexes differ more in human beings than in monogamous mammals, but much less than in extremely polygamous mammals."[212] But given that sexual dimorphism in the closest relatives of humans is much greater than among humans, the human clade must be considered to be characterized by decreasing sexual dimorphism, probably due to less competitive mating patterns. One proposed explanation is that human sexuality has developed more in common with its close relative the bonobo, which exhibits similar sexual dimorphism, is polygynandrous and uses recreational sex to reinforce social bonds and reduce aggression.[213]
Humans of the same sex are 99.5% genetically identical. There is relatively little variation between human geographical populations, and most of the variation that occurs is at the individual level.[183][214][215] Of the 0.1%-0.5% of human genetic differentiation, 85% exists within any randomly chosen local population, be they Italians, Koreans, or Kurds. Two randomly chosen Koreans may be genetically almost as different as a Korean and an Italian. Genetic data shows that no matter how population groups are defined, two people from the same population group are almost as different from each other as two people from any two different population groups.[183][216][217][218]
Current genetic research has demonstrated that human populations native to the African continent are the most genetically diverse.[219] Human genetic diversity decreases in native populations with migratory distance from Africa, and this is thought to be the result of bottlenecks during human migration.[220][221] Humans have lived in Africa for the longest time, which has allowed accumulation of a higher diversity of genetic mutations in these populations. Only part of Africa's population migrated out of the continent into Eurasia, bringing just part of the original African genetic variety with them. Non-African populations, however, acquired new genetic inputs from local admixture with archaic populations, and thus have much larger amounts of variation from Neanderthals and Denisovans than is found in Africa.[222] African populations also harbour the highest number of private genetic variants, or those not found in other places of the world. While many of the common variants found in populations outside of Africa are also found on the African continent, there are still large numbers which are private to these regions, especially Oceania and the Americas.[183][222] Furthermore, recent studies have found that populations in sub-Saharan Africa, and particularly West Africa, have ancestral genetic variation which predates modern humans and has been lost in most non-African populations. This ancestry is thought to originate from admixture with an unknown archaic hominin that diverged before the split of Neanderthals and modern humans.[223][224]
The geographical distribution of human variation is complex and constantly shifts through time, reflecting humanity's complicated evolutionary history. Most human biological variation is clinally distributed and blends gradually from one area to the next. Groups of people around the world have different frequencies of polymorphic genes. Furthermore, different traits are often non-concordant, and each has a different clinal distribution. Adaptability varies both from person to person and from population to population. The most efficient adaptive responses are found in geographical populations where the environmental stimuli are the strongest (e.g. Tibetans are highly adapted to high altitudes). The clinal geographic genetic variation is further complicated by the migration and mixing between human populations, which has been occurring since prehistoric times.[183][225][226][227][228][229]
Human variation is highly non-concordant: many of the genes do not cluster together and are not inherited together. Skin and hair color are mostly not correlated to height, weight, or athletic ability. Skin colour generally varies with latitude, whereas traits such as height or hair colour follow other patterns. There is a statistical correlation between particular features in a population, but different features might not be expressed or inherited together. Thus, genes which code for some physical traits—such as skin color, hair color, or height—represent a minuscule portion of the human genome and might not correlate with genetic affinity. Dark-skinned populations that are found in Africa, Australia, and South Asia are not closely related to each other.[196][228][229][230][231][232] Even within the same region, physical phenotype can be unrelated to genetic affinity. Despite pygmy populations of South East Asia (Andamanese) having similar physical features to African pygmy populations, such as short stature, dark skin, and curly hair, they are not genetically closely related to these populations.[233] Genetic variants affecting superficial anatomical features such as skin color are, from a genetic perspective, nearly trivial: they involve a few hundred of the billions of nucleotides in a person's DNA.[234] Individuals with the same morphology do not necessarily cluster with each other by lineage, and a given lineage does not include only individuals with the same trait complex.[183][217][235]
Due to practices of group endogamy, allele frequencies cluster locally around kin groups and lineages, or by national, ethnic, cultural and linguistic boundaries, giving a detailed degree of correlation between genetic clusters and population groups when considering many alleles simultaneously. Despite this, genetic boundaries around local populations do not biologically mark off any fully discrete groups of humans. Much of human variation is continuous, often with no clear points of demarcation.[235][236][227][228][237][238][239][240][241][242]
The human brain, the focal point of the central nervous system in humans, controls the peripheral nervous system. In addition to controlling "lower," involuntary, or primarily autonomic activities such as respiration and digestion, it is also the locus of "higher" order functioning such as thought, reasoning, and abstraction.[243] These cognitive processes constitute the mind, and, along with their behavioral consequences, are studied in the field of psychology.
The human brain is generally regarded as more capable of these higher-order activities, and as more intelligent in general, than that of any other known species. While some non-human species are capable of creating structures and using simple tools—mostly through instinct and mimicry—human technology is vastly more complex, and is constantly evolving and improving through time.
Humans are generally diurnal. The average sleep requirement is between seven and nine hours per day for an adult and nine to ten hours per day for a child; elderly people usually sleep for six to seven hours. Having less sleep than this is common among humans, even though sleep deprivation can have negative health effects. A sustained restriction of adult sleep to four hours per day has been shown to correlate with changes in physiology and mental state, including reduced memory, fatigue, aggression, and bodily discomfort.[244] During sleep humans dream. In dreaming humans experience sensory images and sounds, in a sequence which the dreamer usually perceives more as an apparent participant than as an observer. Dreaming is stimulated by the pons and mostly occurs during the REM phase of sleep.
Humans are one of the relatively few species to have sufficient self-awareness to recognize themselves in a mirror.[245] By around 18 months, most human children are aware that the mirror image is not another person.[246]
The human brain perceives the external world through the senses, and each individual human is influenced greatly by his or her experiences, leading to subjective views of existence and the passage of time. Humans are variously said to possess consciousness, self-awareness, and a mind, which correspond roughly to the mental processes of thought, and which are said to possess qualities such as sentience, sapience, and the ability to perceive the relationship between oneself and one's environment. The extent to which the mind constructs or experiences the outer world is a matter of debate, as are the definitions and validity of many of the terms used above.
The physical aspects of the mind and brain, and by extension of the nervous system, are studied in the field of neurology, the more behavioral in the field of psychology, and a sometimes loosely defined area between in the field of psychiatry, which treats mental illness and behavioral disorders. Psychology does not necessarily refer to the brain or nervous system, and can be framed purely in terms of phenomenological or information processing theories of the mind. Increasingly, however, an understanding of brain functions is being included in psychological theory and practice, particularly in areas such as artificial intelligence, neuropsychology, and cognitive neuroscience.
The nature of thought is central to psychology and related fields. Cognitive psychology studies cognition, the mental processes underlying behavior. It uses information processing as a framework for understanding the mind. Perception, learning, problem solving, memory, attention, language and emotion are all well-researched areas. Cognitive psychology is associated with a school of thought known as cognitivism, whose adherents argue for an information processing model of mental function, informed by positivism and experimental psychology. Techniques and models from cognitive psychology are widely applied and form the mainstay of psychological theories in many areas of both research and applied psychology. Largely focusing on the development of the human mind through the life span, developmental psychology seeks to understand how people come to perceive, understand, and act within the world and how these processes change as they age. This may focus on intellectual, cognitive, neural, social, or moral development. Psychologists have developed intelligence tests and the concept of intelligence quotient in order to assess the relative intelligence of human beings and study its distribution among populations.[247]
Some philosophers divide consciousness into phenomenal consciousness, which is experience itself, and access consciousness, which is the processing of the things in experience.[248] Phenomenal consciousness is the state of being conscious, such as when one says "I am conscious." Access consciousness is being conscious of something in relation to abstract concepts, such as when one says "I am conscious of these words." Various forms of access consciousness include awareness, self-awareness, conscience, stream of consciousness, Husserl's phenomenology, and intentionality. The concept of phenomenal consciousness, in modern history, according to some, is closely related to the concept of qualia. Social psychology links sociology with psychology in their shared study of the nature and causes of human social interaction, with an emphasis on how people think towards each other and how they relate to each other. The behavior and mental processes, both human and non-human, can be described through animal cognition, ethology, evolutionary psychology, and comparative psychology as well. Human ecology is an academic discipline that investigates how humans and human societies interact with both their natural environment and the human social environment.
Human motivation is not yet wholly understood. From a psychological perspective, Maslow's hierarchy of needs is a well-established theory which can be defined as the process of satisfying certain needs in ascending order of complexity.[249] From a more general, philosophical perspective, human motivation can be defined as a commitment to, or withdrawal from, various goals requiring the application of human ability. Furthermore, incentive and preference are both factors, as are any perceived links between incentives and preferences. Volition may also be involved, in which case willpower is also a factor. Ideally, both motivation and volition ensure the selection, striving for, and realization of goals in an optimal manner, a function beginning in childhood and continuing throughout a lifetime in a process known as socialization.[250]
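To make the "ascending order of complexity" concrete, here is a minimal, illustrative Python sketch (not from the source); the level names are the conventional ones associated with Maslow's theory and are an assumption here, since the paragraph above does not enumerate them.

# Maslow's hierarchy as an ordered list, from the most basic needs to the
# most complex; the level names are conventional, not quoted from this text.
MASLOW_HIERARCHY = [
    "physiological",       # food, water, sleep
    "safety",              # security of body, employment, health
    "love/belonging",      # friendship, family, intimacy
    "esteem",              # respect and recognition
    "self-actualization",  # realizing one's full potential
]

def next_need(satisfied):
    """Return the lowest-level need not yet satisfied, or None if all are met."""
    for level in MASLOW_HIERARCHY:
        if level not in satisfied:
            return level
    return None

# Example: with only physiological needs met, safety comes next.
assert next_need({"physiological"}) == "safety"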
Happiness, or the state of being happy, is a human emotional condition. The definition of happiness is a common philosophical topic. Some people might define it as the best condition that a human can have—a condition of mental and physical health. Others define it as freedom from want and distress; consciousness of the good order of things; assurance of one's place in the universe or society.
Emotion has a significant influence on, or can even be said to control, human behavior, though historically many cultures and philosophers have for various reasons discouraged allowing this influence to go unchecked. Emotional experiences perceived as pleasant, such as love, admiration, or joy, contrast with those perceived as unpleasant, like hate, envy, or sorrow. The Stoics believed excessive emotion was harmful, while some Sufi teachers felt certain extreme emotions could yield a conceptual perfection, what is often translated as ecstasy.
In modern scientific thought, certain refined emotions are considered a complex neural trait innate in a variety of domesticated and non-domesticated mammals. These were commonly developed in reaction to superior survival mechanisms and intelligent interaction with each other and the environment; as such, refined emotion is not in all cases as discrete and separate from natural neural function as was once assumed. However, it has been noted that when humans live together in civilized societies, uninhibited acting on extreme emotion can lead to social disorder and crime.
For humans, sexuality has important social functions: it creates physical intimacy, bonds, and hierarchies among individuals, besides ensuring biological reproduction. Sexual desire, or libido, is experienced as a bodily urge, often accompanied by strong emotions such as love, ecstasy, and jealousy. The significance of sexuality in the human species is reflected in a number of physical features, among them hidden ovulation, the evolution of an external scrotum and (among great apes) a relatively large penis suggesting sperm competition in humans, the absence of an os penis, permanent secondary sexual characteristics, and the forming of pair bonds based on sexual attraction as a common social structure. Unlike other primates, which often advertise estrus through visible signs, human females do not have distinct or visible signs of ovulation, and they experience sexual desire outside of their fertile periods.
Human choices in acting on sexuality are commonly influenced by social norms, which vary between cultures. Restrictions are often determined by religious beliefs or social customs. People can fall anywhere along a continuous scale of sexual orientation,[251] although most humans are heterosexual.[252] There is considerably more evidence supporting nonsocial, biological causes of sexual orientation than social ones, especially for males.[252] Recent research, including neurology and genetics, suggests that other aspects of human sexuality are biologically influenced as well.[253]
Both the mother and the father provide care for human offspring, in contrast to other primates, where parental care is mostly restricted to mothers.[254]
Humanity's unprecedented set of intellectual skills were a key factor in the species' eventual technological advancement and concomitant domination of the biosphere.[259] Disregarding extinct hominids, humans are the only animals known to teach generalizable information,[260] innately deploy recursive embedding to generate and communicate complex concepts,[261] engage in the "folk physics" required for competent tool design,[262][263] or cook food in the wild.[264] Human traits and behaviors that are unusual (though not necessarily unique) among species, include starting fires,[265] phoneme structuring,[266] self-recognition in mirror tests,[267] habitual bipedal walking,[268] and vocal learning.[269] Scholars debate whether humans evolved the unusual trait of natural persistence hunting,[270][271] and also debate to what extent humans are the only animals with a theory of mind.[272] Even compared with other social animals, humans have an unusually high degree of flexibility in their facial expressions.[273] Humans are the only animals known to cry emotional tears.[274]
Humans are highly social beings and tend to live in large complex social groups. More than any other creature, humans are capable of using systems of communication for self-expression, the exchange of ideas, and organization, and as such have created complex social structures composed of many cooperating and competing groups. Human groups range from the size of families to nations. Social interactions between humans have established an extremely wide variety[clarification needed] of values, social norms, and rituals, which together form the basis of human society.
Culture is defined here as patterns of complex symbolic behavior, i.e. all behavior that is not innate but which has to be learned through social interaction with others; such as the use of distinctive material and symbolic systems, including language, ritual, social organization, traditions, beliefs and technology.
While many species communicate, language is unique to humans, a defining feature of humanity, and a cultural universal. Unlike the limited systems of other animals, human language is open—an infinite number of meanings can be produced by combining a limited number of symbols. Human language also has the capacity of displacement, using words to represent things and happenings that are not presently or locally occurring, but reside in the shared imagination of interlocutors.[130] Language differs from other forms of communication in that it is modality independent; the same meanings can be conveyed through different media: auditorily in speech, visually by sign language or writing, and even through tactile media such as braille. Language is central to the communication between humans, and to the sense of identity that unites nations, cultures, and ethnic groups. The invention of writing systems at least five thousand years ago allowed the preservation of language on material objects, and was a major technological advancement. The science of linguistics describes the structure and function of language and the relationship between languages. There are approximately six thousand different languages currently in use, including sign languages, and many thousands more that are extinct.[275]
The division of humans into male and female sexes has been marked culturally by a corresponding division of roles, norms, practices, dress, behavior, rights, duties, privileges, status, and power. Cultural differences by gender have often been believed to have arisen naturally out of a division of reproductive labor; the biological fact that women give birth led to their further cultural responsibility for nurturing and caring for children.[276] Gender roles have varied historically, and challenges to predominant gender norms have recurred in many societies.
All human societies organize, recognize, and classify types of social relationships based on relations between parents and children (consanguinity), and relations through marriage (affinity). These kinds of relations are generally called kinship relations. In most societies kinship places mutual responsibilities and expectations of solidarity on the individuals that are so related, and those who recognize each other as kinsmen come to form networks through which other social institutions can be regulated. Among the many functions of kinship is the ability to form descent groups, groups of people sharing a common line of descent, which can function as political units such as clans. Another function is the way in which kinship unites families through marriage, forming kinship alliances between groups of wife-takers and wife-givers. Such alliances also often have important political and economic ramifications, and may result in the formation of political organization above the community level. Kinship relations often include regulations concerning whom an individual should or should not marry. All societies have rules of incest taboo, according to which marriage between certain kinds of kin relations is prohibited; such rules vary widely between cultures.[citation needed] Some societies also have rules of preferential marriage with certain kin relations, frequently with either cross or parallel cousins. Rules and norms for marriage and social behavior among kinsfolk are often reflected in the systems of kinship terminology in the various languages of the world. In many societies kinship relations can also be formed through forms of co-habitation, adoption, fostering, or companionship, which also tend to create relations of enduring solidarity (nurture kinship).
Humans often form ethnic groups; such groups tend to be larger than kinship networks and to be organized around a common identity defined variously in terms of shared ancestry and history, shared cultural norms and language, or shared biological phenotype. Such ideologies of shared characteristics are often perpetuated in the form of powerful, compelling narratives that give legitimacy and continuity to the set of shared values. Ethnic groupings often correspond to some level of political organization such as the band, tribe, city-state, or nation. Although ethnic groups appear and disappear through history, members of ethnic groups often conceptualize their groups as having histories going back into the deep past. Such ideologies give ethnicity a powerful role in defining social identity and in constructing solidarity between members of an ethno-political unit. This unifying property of ethnicity has been closely tied to the rise of the nation state as the predominant form of political organization in the 19th and 20th centuries.[277][278][279][280][281][282]
Society is the system of organizations and institutions arising from interaction between humans. Within a society people can be divided into different groups according to their income, wealth, power, reputation, etc., but the structure of social stratification and the degree of social mobility differs, especially between modern and traditional societies.[283] A state is an organized political community occupying a definite territory, having an organized government, and possessing internal and external sovereignty. Recognition of the state's claim to independence by other states, enabling it to enter into international agreements, is often important to the establishment of its statehood. The "state" can also be defined in terms of domestic conditions, specifically, as conceptualized by Max Weber, "a state is a human community that (successfully) claims the monopoly of the 'legitimate' use of physical force within a given territory."[284]
Government can be defined as the political means of creating and enforcing laws; typically via a bureaucratic hierarchy. Politics is the process by which decisions are made within groups; this process often involves conflict as well as compromise. Although the term is generally applied to behavior within governments, politics is also observed in all human group interactions, including corporate, academic, and religious institutions. Many different political systems exist, as do many different ways of understanding them, and many definitions overlap. Examples of governments include monarchy, Communist state, military dictatorship, theocracy, and liberal democracy, the last of which is considered dominant today. All of these issues have a direct relationship with economics.
Trade is the voluntary exchange of goods and services, and is a form of economic activity. A mechanism that allows trade is called a market. The earliest form of trade was barter, the direct exchange of goods and services; modern traders instead generally negotiate through a medium of exchange, such as money. As a result, buying can be separated from selling, or earning. Because of specialization and division of labor, most people concentrate on a small aspect of manufacturing or service, trading their labor for products. Trade exists between regions because different regions have an absolute or comparative advantage in the production of some tradable commodity, or because different regions' size allows for the benefits of mass production.
Economics is a social science which studies the production, distribution, trade, and consumption of goods and services. Economics focuses on measurable variables, and is broadly divided into two main branches: microeconomics, which deals with individual agents, such as households and businesses, and macroeconomics, which considers the economy as a whole, in which case it considers aggregate supply and demand for money, capital and commodities. Aspects receiving particular attention in economics are resource allocation, production, distribution, trade, and competition. Economic logic is increasingly applied to any problem that involves choice under scarcity or determining economic value.
War is a state of organized armed conflict between states or non-state actors. War is characterized by the use of lethal violence against others—whether between combatants or upon non-combatants—to achieve military goals through force. Lesser, often spontaneous conflicts, such as brawls, riots, revolts, and melees, are not considered to be warfare. Revolutions can be nonviolent, or can take the form of an organized, armed uprising that amounts to a state of war. During the 20th century, it is estimated that between 167 and 188 million people died as a result of war.[285] A common definition describes war as a series of military campaigns between at least two opposing sides involving a dispute over sovereignty, territory, resources, religion, or other issues. A war between internal elements of a state is a civil war.
There have been a wide variety of rapidly advancing tactics throughout the history of war, ranging from conventional war to asymmetric warfare to total war and unconventional warfare. Techniques include hand to hand combat, the use of ranged weapons, naval warfare, and, more recently, air support. Military intelligence has often played a key role in determining victory and defeat. Propaganda, which often includes information, slanted opinion and disinformation, plays a key role both in maintaining unity within a warring group and in sowing discord among opponents. In modern warfare, soldiers and combat vehicles are used to control the land, warships the sea, and aircraft the sky. These fields have also overlapped in the forms of marines, paratroopers, aircraft carriers, and surface-to-air missiles, among others. Satellites in low Earth orbit have made outer space a factor in warfare as well through their use for detailed intelligence gathering; however, no known aggressive actions have been taken from space.
Stone tools were used by proto-humans at least 2.5 million years ago.[286] The controlled use of fire began around 1.5 million years ago. Since then, humans have made major advances, developing complex technology to create tools to aid their lives and allowing for other advancements in culture. Major leaps in technology include the discovery of agriculture—what is known as the Neolithic Revolution, and the invention of automated machines in the Industrial Revolution.
Archaeology attempts to tell the story of past or lost cultures in part by close examination of the artifacts they produced. Early humans left stone tools, pottery, and jewelry that are particular to various regions and times.
Throughout history, humans have altered their appearance by wearing clothing[287] and adornments, by trimming or shaving hair or by means of body modifications.
Body modification is the deliberate altering of the human body for any non-medical reason, such as aesthetics, sexual enhancement, a rite of passage, religious reasons, to display group membership or affiliation, to create body art, shock value, or self-expression.[288] In its most broad definition it includes plastic surgery, socially acceptable decoration (e.g. common ear piercing in many societies), and religious rites of passage (e.g. circumcision in a number of cultures).[288]
Philosophy is a discipline or field of study involving the investigation, analysis, and development of ideas at a general, abstract, or fundamental level. It is the discipline searching for a general understanding of reality, reasoning and values. Major fields of philosophy include logic, metaphysics, epistemology, philosophy of mind, and axiology (which includes ethics and aesthetics). Philosophy covers a very wide range of approaches, and is used to refer to a worldview, to a perspective on an issue, or to the positions argued for by a particular philosopher or school of philosophy.
Religion is generally defined as a belief system concerning the supernatural, sacred or divine, and practices, values, institutions and rituals associated with such belief. Some religions also have a moral code. The evolution and the history of the first religions have recently become areas of active scientific investigation.[289][290][291] However, in the course of its development, religion has taken on many forms that vary by culture and individual perspective. Some of the chief questions and issues religions are concerned with include life after death (commonly involving belief in an afterlife), the origin of life, the nature of the universe (religious cosmology) and its ultimate fate (eschatology), and what is moral or immoral. A common source for answers to these questions are beliefs in transcendent divine beings such as deities or a singular God, although not all religions are theistic. Spirituality, belief or involvement in matters of the soul or spirit, is one of the many different approaches humans take in trying to answer fundamental questions about humankind's place in the universe, the meaning of life, and the ideal way to live one's life. Though these topics have also been addressed by philosophy, and to some extent by science, spirituality is unique in that it focuses on mystical or supernatural concepts such as karma and God.
Although the exact level of religiosity can be hard to measure,[292] a majority of humans professes some variety of religious or spiritual belief, although many (in some countries a majority) are irreligious. This includes humans who have no religious beliefs or do not identify with any religion. Humanism is a philosophy which seeks to include all of humanity and all issues common to humans; it is usually non-religious. Most religions and spiritual beliefs are clearly distinct from science on both a philosophical and methodological level; the two are not generally considered mutually exclusive and a majority of humans hold a mix of both scientific and religious views. The distinction between philosophy and religion, on the other hand, is at times less clear, and the two are linked in such fields as the philosophy of religion and theology.
Humans have been producing artworks for at least seventy-three thousand years.[293] Art may be defined as a form of cultural expression whose boundaries are mediated by narratives of liberation and exploration (i.e., art history, art criticism, and art theory). This distinction may be applied to objects or performances, current or historical, and its prestige extends to those who made, found, exhibit, or own them. In the modern use of the word, art is commonly understood to be the process or result of making material works that, from concept to creation, adhere to the "creative impulse" of human beings.
Music is a natural intuitive phenomenon based on the three distinct and interrelated organization structures of rhythm, harmony, and melody. Listening to music is perhaps the most common and universal form of entertainment, while learning and understanding it are popular disciplines.[citation needed] There are a wide variety of music genres and ethnic musics. Literature, the body of written—and possibly oral—works, especially creative ones, includes prose, poetry and drama, both fiction and non-fiction. Literature includes such genres as epic, legend, myth, ballad, and folklore.
Another unique aspect of human culture and thought is the development of complex methods for acquiring knowledge through observation, quantification, and verification.[294][295] The scientific method has been developed to acquire knowledge of the physical world and the rules, processes and principles of which it consists, and combined with mathematics it enables the prediction of complex patterns of causality and consequence.[citation needed] An understanding of mathematics is unique to humans, although other species of animal have some numerical cognition.[296]
All of science can be divided into three major branches: the formal sciences (e.g., logic and mathematics), which are concerned with formal systems; the applied sciences (e.g., engineering, medicine), which are focused on practical applications; and the empirical sciences, which are based on empirical observation and are in turn divided into natural sciences (e.g., physics, chemistry, biology) and social sciences (e.g., psychology, economics, sociology).[297] A pseudoscience is an activity or a teaching which is mistakenly regarded as being scientific by its major proponents.[298]
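As a compact illustration of the branch structure just described, the classification can be written as a small nested mapping in Python; this is a sketch only, containing nothing beyond the examples named above.

# The three major branches of science, with the example disciplines named
# in the paragraph above; the empirical sciences subdivide further.
SCIENCE_BRANCHES = {
    "formal sciences": ["logic", "mathematics"],
    "applied sciences": ["engineering", "medicine"],
    "empirical sciences": {
        "natural sciences": ["physics", "chemistry", "biology"],
        "social sciences": ["psychology", "economics", "sociology"],
    },
}

# Example lookup: the empirical sciences split into natural and social.
assert "biology" in SCIENCE_BRANCHES["empirical sciences"]["natural sciences"]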

en/2613.html.txt

Homosexuality is romantic attraction, sexual attraction, or sexual behavior between members of the same sex or gender.[1] As a sexual orientation, homosexuality is "an enduring pattern of emotional, romantic, and/or sexual attractions" to people of the same sex. It "also refers to a person's sense of identity based on those attractions, related behaviors, and membership in a community of others who share those attractions."[2][3]
Along with bisexuality and heterosexuality, homosexuality is one of the three main categories of sexual orientation within the heterosexual–homosexual continuum.[2] Scientists do not know the exact cause of sexual orientation, but they theorize that it is caused by a complex interplay of genetic, hormonal, and environmental influences,[4][5][6] and do not view it as a choice.[4][5][7] Although no single theory on the cause of sexual orientation has yet gained widespread support, scientists favor biologically-based theories.[4] There is considerably more evidence supporting nonsocial, biological causes of sexual orientation than social ones, especially for males.[8][9][10] There is no substantive evidence which suggests parenting or early childhood experiences play a role with regard to sexual orientation.[11] While some people believe that homosexual activity is unnatural,[12] scientific research shows that homosexuality is a normal and natural variation in human sexuality and is not in and of itself a source of negative psychological effects.[2][13] There is insufficient evidence to support the use of psychological interventions to change sexual orientation.[14][15]
The most common terms for homosexual people are lesbian for females and gay for males, but gay also commonly refers to both homosexual females and males. The percentage of people who are gay or lesbian and the proportion of people who are in same-sex romantic relationships or have had same-sex sexual experiences are difficult for researchers to estimate reliably for a variety of reasons, including many gay and lesbian people not openly identifying as such due to prejudice or discrimination such as homophobia and heterosexism.[16] Homosexual behavior has also been documented in many non-human animal species.[22]
Many gay and lesbian people are in committed same-sex relationships, though only in the 2010s have census forms and political conditions facilitated their visibility and enumeration.[23] These relationships are equivalent to heterosexual relationships in essential psychological respects.[3] Homosexual relationships and acts have been admired, as well as condemned, throughout recorded history, depending on the form they took and the culture in which they occurred.[24] Since the end of the 19th century, there has been a global movement towards freedom and equality for gay people, including the introduction of anti-bullying legislation to protect gay children at school, legislation ensuring non-discrimination, equal ability to serve in the military, equal access to health care, equal ability to adopt and parent, and the establishment of marriage equality.
The word homosexual is a Greek and Latin hybrid, with the first element derived from Greek ὁμός homos, "same" (not related to the Latin homo, "man", as in Homo sapiens), thus connoting sexual acts and affections between members of the same sex, including lesbianism.[25][26] The first known appearance of homosexual in print is found in an 1869 German pamphlet by the Austrian-born novelist Karl-Maria Kertbeny, published anonymously,[27] arguing against a Prussian anti-sodomy law.[27][28] In 1886, the psychiatrist Richard von Krafft-Ebing used the terms homosexual and heterosexual in his book Psychopathia Sexualis. Krafft-Ebing's book was so popular among both laymen and doctors that the terms heterosexual and homosexual became the most widely accepted terms for sexual orientation.[29][30] As such, the current use of the term has its roots in the broader 19th-century tradition of personality taxonomy.
Many modern style guides in the U.S. recommend against using homosexual as a noun, instead using gay man or lesbian.[31] Similarly, some recommend completely avoiding usage of homosexual as it has a negative, clinical history and because the word only refers to one's sexual behavior (as opposed to romantic feelings) and thus it has a negative connotation.[31] Gay and lesbian are the most common alternatives. The first letters are frequently combined to create the initialism LGBT (sometimes written as GLBT), in which B and T refer to bisexual and transgender people.
Gay especially refers to male homosexuality,[32] but may be used in a broader sense to refer to all LGBT people. In the context of sexuality, lesbian refers only to female homosexuality. The word lesbian is derived from the name of the Greek island Lesbos, where the poet Sappho wrote largely about her emotional relationships with young women.[33][34]
Although early writers also used the adjective homosexual to refer to any single-sex context (such as an all-girls school), today the term is used exclusively in reference to sexual attraction, activity, and orientation. The term homosocial is now used to describe single-sex contexts that are not specifically sexual. There is also a word referring to same-sex love, homophilia.
Some synonyms for same-sex attraction or sexual activity include men who have sex with men or MSM (used in the medical community when specifically discussing sexual activity) and homoerotic (referring to works of art).[35][36] Pejorative terms in English include queer, faggot, fairy, poof, and homo.[37][38][39][40] Beginning in the 1990s, some of these have been reclaimed as positive words by gay men and lesbians, as in the usage of queer studies, queer theory, and even the popular American television program Queer Eye for the Straight Guy.[41] The word homo occurs in many other languages without the pejorative connotations it has in English.[42] As with ethnic slurs and racial slurs, the use of these terms can still be highly offensive. The range of acceptable use for these terms depends on the context and speaker.[43] Conversely, gay, a word originally embraced by homosexual men and women as a positive, affirmative term (as in gay liberation and gay rights),[44] has come into widespread pejorative use among young people.[45]
The American LGBT rights organization GLAAD advises the media to avoid using the term homosexual to describe gay people or same-sex relationships as the term is "frequently used by anti-gay extremists to denigrate gay people, couples and relationships".[46]
Some scholars argue that the term "homosexuality" is problematic when applied to ancient cultures since, for example, neither the Greeks nor the Romans possessed any one word covering the same semantic range as the modern concept of "homosexuality".[47][48] Furthermore, there were diverse sexual practices that varied in acceptance depending on time and place.[47] Other scholars argue that there are significant continuities between ancient and modern homosexuality.[49][50]
In a detailed compilation of historical and ethnographic materials of preindustrial cultures, "strong disapproval of homosexuality was reported for 41% of 42 cultures; it was accepted or ignored by 21%, and 12% reported no such concept. Of 70 ethnographies, 59% reported homosexuality absent or rare in frequency and 41% reported it present or not uncommon."[51]
In cultures influenced by Abrahamic religions, the law and the church established sodomy as a transgression against divine law or a crime against nature. The condemnation of anal sex between males, however, predates Christian belief. It was frequent in ancient Greece; "unnatural" can be traced back to Plato.[52]
Many historical figures, including Socrates, Lord Byron, Edward II, and Hadrian,[53] have had terms such as gay or bisexual applied to them. Some scholars, such as Michel Foucault, have regarded this as risking the anachronistic introduction of a contemporary construction of sexuality foreign to their times,[54] though other scholars challenge this.[55][50][49]
In social science, there has been a dispute between "essentialist" and "constructionist" views of homosexuality. The debate divides those who believe that terms such as "gay" and "straight" refer to objective, culturally invariant properties of persons from those who believe that the experiences they name are artifacts of unique cultural and social processes. "Essentialists" typically believe that sexual preferences are determined by biological forces, while "constructionists" assume that sexual desires are learned.[56] The philosopher of science Michael Ruse has stated that the social constructionist approach, which is influenced by Foucault, is based on a selective reading of the historical record that confuses the existence of homosexual people with the way in which they are labelled or treated.[57]
The first record of a possible homosexual couple in history is commonly regarded as Khnumhotep and Niankhkhnum, an ancient Egyptian male couple, who lived around 2400 BCE. The pair are portrayed in a nose-kissing position, the most intimate pose in Egyptian art, surrounded by what appear to be their heirs. The anthropologists Stephen Murray and Will Roscoe reported that women in Lesotho engaged in socially sanctioned "long term, erotic relationships" called motsoalle.[58] The anthropologist E. E. Evans-Pritchard also recorded that male Azande warriors in the northern Congo routinely took on young male lovers between the ages of twelve and twenty, who helped with household tasks and participated in intercrural sex with their older husbands.[59]
As is true of many other non-Western cultures, it is difficult to determine the extent to which Western notions of sexual orientation and gender identity apply to Pre-Columbian cultures. Evidence of homoerotic sexual acts and transvestism has been found in many pre-conquest civilizations in Latin America, such as the Aztecs, Mayas, Quechuas, Moches, Zapotecs, the Incas, and the Tupinambá of Brazil.[60][61][62]
The Spanish conquerors were horrified to discover sodomy openly practiced among native peoples, and attempted to stamp it out by subjecting the berdaches (as the Spanish called them) under their rule to severe penalties, including public execution, burning, and being torn to pieces by dogs.[63] The Spanish conquerors talked extensively of sodomy among the natives to depict them as savages and hence justify their conquest and forceful conversion to Christianity. As a result of the growing influence and power of the conquerors, many native cultures started condemning homosexual acts themselves.[citation needed]
Among some of the indigenous peoples of the Americas in North America prior to European colonization, a relatively common form of same-sex sexuality centered around the figure of the Two-Spirit individual (the term itself was coined only in 1990). Typically, this individual was recognized early in life, given a choice by the parents to follow the path and, if the child accepted the role, raised in the appropriate manner, learning the customs of the gender it had chosen. Two-Spirit individuals were commonly shamans and were revered as having powers beyond those of ordinary shamans. Their sexual life was with the ordinary tribe members of the same sex.[citation needed]
During the colonial times following the European invasion, homosexuality was prosecuted by the Inquisition, sometimes leading to death sentences on the charges of sodomy, and the practices became clandestine. Many homosexual individuals went into heterosexual marriages to keep up appearances, and many turned to the clergy to escape public scrutiny of their lack of interest in the opposite sex.[citation needed]
In 1986, the Supreme Court of the United States ruled in Bowers v. Hardwick that a state could criminalize sodomy, but, in 2003, overturned itself in Lawrence v. Texas and thereby legalized homosexual activity throughout the United States of America.
Same-sex marriage in the United States expanded from one state in 2004 to all fifty states in 2015, through various state court rulings, state legislation, direct popular votes (referendums and initiatives), and federal court rulings.
In East Asia, same-sex love has been referred to since the earliest recorded history.
Homosexuality in China, known as the passions of the cut peach and various other euphemisms, has been recorded since approximately 600 BCE. Homosexuality was mentioned in many famous works of Chinese literature. The instances of same-sex affection and sexual interactions described in the classical novel Dream of the Red Chamber seem as familiar to observers in the present as do equivalent stories of romances between heterosexual people during the same period. Confucianism, being primarily a social and political philosophy, focused little on sexuality, whether homosexual or heterosexual. Ming Dynasty literature, such as Bian Er Chai (弁而釵/弁而钗), portray homosexual relationships between men as more enjoyable and more "harmonious" than heterosexual relationships.[64] Writings from the Liu Song Dynasty by Wang Shunu claimed that homosexuality was as common as heterosexuality in the late 3rd century.[65]
Opposition to homosexuality in China originates in the medieval Tang Dynasty (618–907), attributed to the rising influence of Christian and Islamic values,[66] but did not become fully established until the Westernization efforts of the late Qing Dynasty and the Republic of China.[67]
The Laws of Manu mentions a "third sex", members of which may engage in nontraditional gender expression and homosexual activities.[68]
The earliest Western documents (in the form of literary works, art objects, and mythographic materials) concerning same-sex relationships are derived from ancient Greece.
In regard to male homosexuality, such documents depict an at times complex understanding in which relationships with women and relationships with adolescent boys could be a part of a normal man's love life. Same-sex relationships were a social institution variously constructed over time and from one city to another. The formal practice, an erotic yet often restrained relationship between a free adult male and a free adolescent, was valued for its pedagogic benefits and as a means of population control, though occasionally blamed for causing disorder. Plato praised its benefits in his early writings[69] but in his late works proposed its prohibition.[70] Aristotle, in the Politics, dismissed Plato's ideas about abolishing homosexuality (2.4); he explains that barbarians like the Celts accorded it a special honor (2.6.6), while the Cretans used it to regulate the population (2.7.5).[71]
Little is known of female homosexuality in antiquity. Sappho, born on the island of Lesbos, was included by later Greeks in the canonical list of nine lyric poets. The adjectives deriving from her name and place of birth (Sapphic and Lesbian) came to be applied to female homosexuality beginning in the 19th century.[72][73] Sappho's poetry centers on passion and love for various personages and both genders. The narrators of many of her poems speak of infatuations and love (sometimes requited, sometimes not) for various females, but descriptions of physical acts between women are few and subject to debate.[74][75]
In Ancient Rome, the young male body remained a focus of male sexual attention, but relationships were between older free men and slaves or freed youths who took the receptive role in sex. The Hellenophile emperor Hadrian is renowned for his relationship with Antinous, but the Christian emperor Theodosius I decreed a law on 6 August 390 condemning passive males to be burned at the stake. Notwithstanding these regulations, taxes on brothels with boys available for homosexual sex continued to be collected until the end of the reign of Anastasius I in 518. Justinian, towards the end of his reign, expanded the proscription to the active partner as well (in 558), warning that such conduct could lead to the destruction of cities through the "wrath of God".
During the Renaissance, wealthy cities in northern Italy—Florence and Venice in particular—were renowned for their widespread practice of same-sex love, engaged in by a considerable part of the male population and constructed along the classical pattern of Greece and Rome.[76][77] But even as many of the male population were engaging in same-sex relationships, the authorities, under the aegis of the Officers of the Night court, were prosecuting, fining, and imprisoning a good portion of that population.
From the second half of the 13th century, death was the punishment for male homosexuality in most of Europe.[78]
The relationships of socially prominent figures, such as King James I and the Duke of Buckingham, served to highlight the issue, including in anonymously authored street pamphlets: "The world is chang'd I know not how, For men Kiss Men, not Women now;...Of J. the First and Buckingham: He, true it is, his Wives Embraces fled, To slabber his lov'd Ganimede" (Mundus Foppensis, or The Fop Display'd, 1691).
Love Letters Between a Certain Late Nobleman and the Famous Mr. Wilson was published in 1723 in England, and was presumed by some modern scholars to be a novel. The 1749 edition of John Cleland's popular novel Fanny Hill includes a homosexual scene, but this was removed in its 1750 edition. Also in 1749, the earliest extended and serious defense of homosexuality in English, Ancient and Modern Pederasty Investigated and Exemplified, written by Thomas Cannon, was published, but was suppressed almost immediately. It includes the passage, "Unnatural Desire is a Contradiction in Terms; downright Nonsense. Desire is an amatory Impulse of the inmost human Parts."[79] Around 1785 Jeremy Bentham wrote another defense, but this was not published until 1978.[80] Executions for sodomy continued in the Netherlands until 1803, and in England until 1835, James Pratt and John Smith being the last Englishmen to be so hanged.
Between 1864 and 1880 Karl Heinrich Ulrichs published a series of twelve tracts, which he collectively titled Research on the Riddle of Man-Manly Love. In 1867, he became the first self-proclaimed homosexual person to speak out publicly in defense of homosexuality when he pleaded at the Congress of German Jurists in Munich for a resolution urging the repeal of anti-homosexual laws.[16] Sexual Inversion by Havelock Ellis, published in 1896, challenged theories that homosexuality was abnormal, as well as stereotypes, and insisted on the ubiquity of homosexuality and its association with intellectual and artistic achievement.[81]
Although medical texts like these (written partly in Latin to obscure the sexual details) were not widely read by the general public, they did lead to the rise of Magnus Hirschfeld's Scientific-Humanitarian Committee, which campaigned from 1897 to 1933 against anti-sodomy laws in Germany, as well as a much more informal, unpublicized movement among British intellectuals and writers, led by such figures as Edward Carpenter and John Addington Symonds. Beginning in 1894 with Homogenic Love, Socialist activist and poet Edward Carpenter wrote a string of pro-homosexual articles and pamphlets, and "came out" in 1916 in his book My Days and Dreams. In 1900, Elisar von Kupffer published an anthology of homosexual literature from antiquity to his own time, Lieblingminne und Freundesliebe in der Weltliteratur.
There are a handful of accounts by Arab travelers to Europe during the mid-1800s. Two of these travelers, Rifa'ah al-Tahtawi and Muhammad as-Saffar, show their surprise that the French sometimes deliberately mistranslated love poetry about a young boy, instead referring to a young female, to maintain their social norms and morals.[82]
Israel is considered the most tolerant country in the Middle East and Asia toward homosexuals,[83] with Tel Aviv being named "the gay capital of the Middle East"[84] and considered one of the most gay-friendly cities in the world.[85] The annual Pride Parade in support of homosexuality takes place in Tel Aviv.[86]
On the other hand, many governments in the Middle East often ignore, deny the existence of, or criminalize homosexuality. Homosexuality is illegal in almost all Muslim countries.[87] Same-sex intercourse officially carries the death penalty in several Muslim nations: Saudi Arabia, Iran, Mauritania, northern Nigeria, Sudan, and Yemen.[88] Iranian President Mahmoud Ahmadinejad, during his 2007 speech at Columbia University, asserted that there were no gay people in Iran. However, the probable reason is that they keep their sexuality a secret for fear of government sanction or rejection by their families.[89]
In ancient Sumer, a set of priests known as gala worked in the temples of the goddess Inanna, where they performed elegies and lamentations.[91]:285 Gala took female names, spoke in the eme-sal dialect, which was traditionally reserved for women, and appear to have engaged in homosexual intercourse.[92] The Sumerian sign for gala was a ligature of the signs for "penis" and "anus".[92] One Sumerian proverb reads: "When the gala wiped off his ass [he said], 'I must not arouse that which belongs to my mistress [i.e., Inanna].'"[92] In later Mesopotamian cultures, kurgarrū and assinnu were servants of the goddess Ishtar (Inanna's East Semitic equivalent), who dressed in female clothing and performed war dances in Ishtar's temples.[92] Several Akkadian proverbs seem to suggest that they may have also engaged in homosexual intercourse.[92]
In ancient Assyria, homosexuality was present and common; it was also not prohibited, condemned, nor looked upon as immoral or disordered. Some religious texts contain prayers for divine blessings on homosexual relationships.[93][94] The Almanac of Incantations contained prayers favoring on an equal basis the love of a man for a woman, of a woman for a man, and of a man for a man.[95]
Some scholars argue that there are examples of homosexual love in ancient literature, like in the Mesopotamian Epic of Gilgamesh as well as in the Biblical story of David and Jonathan. In the Epic of Gilgamesh, the relationship between the main protagonist Gilgamesh and the character Enkidu has been seen by some to be homosexual in nature.[96][97][98][99] Similarly, David's love for Jonathan is "greater than the love of women."[100]
In some societies of Melanesia, especially in Papua New Guinea, same-sex relationships were an integral part of the culture until the middle of the 1900s. The Etoro and Marind-anim for example, viewed heterosexuality as unclean and celebrated homosexuality instead. In some traditional Melanesian cultures a prepubertal boy would be paired with an older adolescent who would become his mentor and who would "inseminate" him (orally, anally, or topically, depending on the tribe) over a number of years in order for the younger to also reach puberty. Many Melanesian societies, however, have become hostile towards same-sex relationships since the introduction of Christianity by European missionaries.[101]
The American Psychological Association, the American Psychiatric Association, and the National Association of Social Workers identify sexual orientation as "not merely a personal characteristic that can be defined in isolation. Rather, one's sexual orientation defines the universe of persons with whom one is likely to find the satisfying and fulfilling relationships":[3]
Sexual orientation is commonly discussed as a characteristic of the individual, like biological sex, gender identity, or age. This perspective is incomplete because sexual orientation is always defined in relational terms and necessarily involves relationships with other individuals. Sexual acts and romantic attractions are categorized as homosexual or heterosexual according to the biological sex of the individuals involved in them, relative to each other. Indeed, it is by acting—or desiring to act—with another person that individuals express their heterosexuality, homosexuality, or bisexuality. This includes actions as simple as holding hands with or kissing another person. Thus, sexual orientation is integrally linked to the intimate personal relationships that human beings form with others to meet their deeply felt needs for love, attachment, and intimacy. In addition to sexual behavior, these bonds encompass nonsexual physical affection between partners, shared goals and values, mutual support, and ongoing commitment.[3]
The Kinsey scale, also called the Heterosexual-Homosexual Rating Scale,[102] attempts to describe a person's sexual history or episodes of his or her sexual activity at a given time. It uses a scale from 0, meaning exclusively heterosexual, to 6, meaning exclusively homosexual. In both the Male and Female volumes of the Kinsey Reports, an additional grade, listed as "X", has been interpreted by scholars to indicate asexuality.[103]
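As an illustration of how the scale's grades map to descriptions, here is a minimal Python sketch (not part of the source text): only the endpoints and the "X" grade are described in the paragraph above, while the intermediate labels follow Kinsey's published wording and are an assumption here.

# The Kinsey scale as a lookup from grade to description. Grades 0 and 6
# and the "X" grade are described in the paragraph above; labels 1-5
# follow Kinsey's published wording and are included for completeness.
KINSEY_SCALE = {
    0: "exclusively heterosexual",
    1: "predominantly heterosexual, only incidentally homosexual",
    2: "predominantly heterosexual, but more than incidentally homosexual",
    3: "equally heterosexual and homosexual",
    4: "predominantly homosexual, but more than incidentally heterosexual",
    5: "predominantly homosexual, only incidentally heterosexual",
    6: "exclusively homosexual",
    "X": "no socio-sexual contacts or reactions (interpreted as asexuality)",
}

def describe(grade):
    """Return the description for a Kinsey grade (an int 0-6 or the string 'X')."""
    return KINSEY_SCALE[grade]

# Example usage:
assert describe(6) == "exclusively homosexual"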
Often, sexual orientation and sexual identity are not distinguished, which can impact accurately assessing sexual identity and whether or not sexual orientation is able to change; sexual orientation identity can change throughout an individual's life, and may or may not align with biological sex, sexual behavior or actual sexual orientation.[104][105][106] Sexual orientation is stable and unlikely to change for the vast majority of people, but some research indicates that some people may experience change in their sexual orientation, and this is more likely for women than for men.[107] The American Psychological Association distinguishes between sexual orientation (an innate attraction) and sexual orientation identity (which may change at any point in a person's life).[108]
People with a homosexual orientation can express their sexuality in a variety of ways, and may or may not express it in their behaviors.[2] Many have sexual relationships predominantly with people of their own sex, though some have sexual relationships with those of the opposite sex, bisexual relationships, or none at all (celibacy).[2] Studies have found same-sex and opposite-sex couples to be equivalent to each other in measures of satisfaction and commitment in relationships, that age and sex are more reliable than sexual orientation as a predictor of satisfaction and commitment to a relationship, and that people who are heterosexual or homosexual share comparable expectations and ideals with regard to romantic relationships.[109][110][111]
Coming out (of the closet) is a phrase referring to a person's disclosure of their sexual orientation or gender identity, and is described and experienced variously as a psychological process or journey.[113] Generally, coming out is described in three phases. The first phase is that of "knowing oneself", in which the realization emerges that one is open to same-sex relations.[114] This is often described as an internal coming out. The second phase involves one's decision to come out to others, e.g. family, friends, or colleagues. The third phase more generally involves living openly as an LGBT person.[115] In the United States today, people often come out during high school or college age. At this age, they may not trust or ask for help from others, especially when their orientation is not accepted in society. Sometimes their own families are not even informed.
According to Rosario, Schrimshaw, Hunter, Braun (2006), "the development of a lesbian, gay, or bisexual (LGB) sexual identity is a complex and often difficult process. Unlike members of other minority groups (e.g., ethnic and racial minorities), most LGB individuals are not raised in a community of similar others from whom they learn about their identity and who reinforce and support that identity. Rather, LGB individuals are often raised in communities that are either ignorant of or openly hostile toward homosexuality."[105]
Outing is the practice of publicly revealing the sexual orientation of a closeted person.[116] Notable politicians, celebrities, military service people, and clergy members have been outed, with motives ranging from malice to political or moral beliefs. Many commentators oppose the practice altogether,[117] while some encourage outing public figures who use their positions of influence to harm other gay people.[118]
Lesbians often experience their sexuality differently from gay men, and have different understandings about etiology from those derived from studies focused mostly on men.[citation needed]
In a U.S.-based 1970s mail survey by Shere Hite, lesbians self-reported their reasons for being lesbian. This is the only major piece of research into female sexuality that has looked at how women understand being homosexual since Kinsey in 1953.[citation needed] The research yielded information about women's general understanding of lesbian relationships and their sexual orientation. Women gave various reasons for preferring sexual relations with women to sexual relations with men, including finding women more sensitive to other people's needs.[119]
Since Hite carried out her study, she has acknowledged that some women may have chosen the political identity of a lesbian. As recently as 2009, Julie Bindel, a UK journalist, reaffirmed that "political lesbianism continues to make intrinsic sense because it reinforces the idea that sexuality is a choice, and we are not destined to a particular fate because of our chromosomes."[120]
Reliable data as to the size of the gay and lesbian population are of value in informing public policy.[123] For example, demographics would help in calculating the costs and benefits of domestic partnership benefits, of the impact of legalizing gay adoption, and of the impact of the U.S. military's Don't Ask Don't Tell policy.[123] Further, knowledge of the size of the "gay and lesbian population holds promise for helping social scientists understand a wide array of important questions—questions about the general nature of labor market choices, accumulation of human capital, specialization within households, discrimination, and decisions about geographic location."[123]
Measuring the prevalence of homosexuality presents difficulties. It is necessary to consider the measuring criteria that are used, the cutoff point and the time span taken to define a sexual orientation.[16] Many people, despite having same-sex attractions, may be reluctant to identify themselves as gay or bisexual. The research must measure some characteristic that may or may not be defining of sexual orientation. The number of people with same-sex desires may be larger than the number of people who act on those desires, which in turn may be larger than the number of people who self-identify as gay, lesbian, or bisexual.[123]
In 1948 and 1953, Alfred Kinsey reported that nearly 46% of the male subjects had "reacted" sexually to persons of both sexes in the course of their adult lives, and 37% had had at least one homosexual experience.[124][125] Kinsey's methodology was criticized by John Tukey for using convenience samples and not random samples.[126][127]
A later study tried to eliminate the sample bias, but still reached similar conclusions.[128] Simon LeVay cites these Kinsey results as an example of the caution needed to interpret demographic studies, as they may give quite differing numbers depending on what criteria are used to conduct them, in spite of using sound scientific methods.[16]
According to major studies, 2% to 11% of people have had some form of same-sex sexual contact within their lifetime;[129][130][131][132][133][134][135][136][137][138] this percentage rises to 16–21% when either or both same-sex attraction and behavior are reported.[138] In a 2006 study, 20% of respondents anonymously reported some homosexual feelings, although only 2–3% identified themselves as homosexual.[139] A 1992 study reported that 6.1% of males in Britain have had a homosexual experience, while in France the number was reported at 4.1%.[140]
In the United States, according to a report by The Williams Institute in April 2011, 3.5% or approximately 9 million of the adult population identify as lesbian, gay, or bisexual.[141][142] According to the 2000 United States Census, there were about 601,209 same-sex unmarried partner households.[143]
A 2013 study by the CDC, in which over 34,000 Americans were interviewed, puts the percentage of self-identifying lesbians and gay men at 1.6%, and of bisexuals at 0.7%.[144]
According to a 2008 poll, 13% of Britons have had some form of same-sex sexual contact, while only 6% of Britons identify themselves as either homosexual or bisexual.[145] In contrast, a survey by the UK Office for National Statistics (ONS) in 2010 found that 95% of Britons identified as heterosexual, 1.5% of Britons identified themselves as homosexual or bisexual, and the remaining 3.5% gave vaguer answers such as "don't know" or "other", or did not respond to the question.[125][146]
In October 2012, Gallup started conducting annual surveys to study the demographics of LGBT people, determining that 3.4% (±1%) of adults identified as LGBT in the United States.[147] It was the nation's largest poll on the issue at the time.[148][149] In 2017, the percentage was estimated to have risen to 4.5% of adults, with the increase largely driven by Millennials. The poll attributes the rise to greater willingness of younger people to reveal their sexual identity.[150]
The American Psychological Association, the American Psychiatric Association, and the National Association of Social Workers state:
In 1952, when the American Psychiatric Association published its first Diagnostic and Statistical Manual of Mental Disorders, homosexuality was included as a disorder. Almost immediately, however, that classification began to be subjected to critical scrutiny in research funded by the National Institute of Mental Health. That study and subsequent research consistently failed to produce any empirical or scientific basis for regarding homosexuality as a disorder or abnormality, rather than a normal and healthy sexual orientation. As results from such research accumulated, professionals in medicine, mental health, and the behavioral and social sciences reached the conclusion that it was inaccurate to classify homosexuality as a mental disorder and that the DSM classification reflected untested assumptions based on once-prevalent social norms and clinical impressions from unrepresentative samples comprising patients seeking therapy and individuals whose conduct brought them into the criminal justice system.
In recognition of the scientific evidence,[151] the American Psychiatric Association removed homosexuality from the DSM in 1973, stating that "homosexuality per se implies no impairment in judgment, stability, reliability, or general social or vocational capabilities." After thoroughly reviewing the scientific data, the American Psychological Association adopted the same position in 1975, and urged all mental health professionals "to take the lead in removing the stigma of mental illness that has long been associated with homosexual orientations." The National Association of Social Workers has adopted a similar policy.
Thus, mental health professionals and researchers have long recognized that being homosexual poses no inherent obstacle to leading a happy, healthy, and productive life, and that the vast majority of gay and lesbian people function well in the full array of social institutions and interpersonal relationships.[3]
The consensus of research and clinical literature demonstrates that same-sex sexual and romantic attractions, feelings, and behaviors are normal and positive variations of human sexuality.[152] The World Health Organization's ICD-9 (1977) listed homosexuality as a mental illness; it was removed from the ICD-10, endorsed by the Forty-third World Health Assembly on 17 May 1990.[153][154][155] Like the DSM-II, the ICD-10 added ego-dystonic sexual orientation to the list, which refers to people who want to change their gender identities or sexual orientation because of a psychological or behavioral disorder (F66.1). The Chinese Society of Psychiatry removed homosexuality from its Chinese Classification of Mental Disorders in 2001 after five years of study by the association.[156] According to the Royal College of Psychiatrists: "This unfortunate history demonstrates how marginalisation of a group of people who have a particular personality feature (in this case homosexuality) can lead to harmful medical practice and a basis for discrimination in society.[11] There is now a large body of research evidence that indicates that being gay, lesbian or bisexual is compatible with normal mental health and social adjustment. However, the experiences of discrimination in society and possible rejection by friends, families and others, such as employers, means that some LGB people experience a greater than expected prevalence of mental health difficulties and substance misuse problems. Although there have been claims by conservative political groups in the USA that this higher prevalence of mental health difficulties is confirmation that homosexuality is itself a mental disorder, there is no evidence whatever to substantiate such a claim."[157]
Most lesbian, gay, and bisexual people who seek psychotherapy do so for the same reasons as heterosexual people (stress, relationship difficulties, difficulty adjusting to social or work situations, etc.); their sexual orientation may be of primary, incidental, or no importance to their issues and treatment. Whatever the issue, there is a high risk for anti-gay bias in psychotherapy with lesbian, gay, and bisexual clients.[158] Psychological research in this area has been relevant to counteracting prejudicial ("homophobic") attitudes and actions, and to the LGBT rights movement generally.[159]
The appropriate application of affirmative psychotherapy is based on the following scientific facts:[152]
Although scientists favor biological models for the cause of sexual orientation,[4] they do not believe that the development of sexual orientation is the result of any one factor. They generally believe that it is determined by a complex interplay of biological and environmental factors, and is shaped at an early age.[2][5][6] There is considerably more evidence supporting nonsocial, biological causes of sexual orientation than social ones, especially for males.[8] There is no substantive evidence which suggests parenting or early childhood experiences play a role with regard to sexual orientation.[11] Scientists do not believe that sexual orientation is a choice.[4][5][7]
The American Academy of Pediatrics stated in Pediatrics in 2004:
There is no scientific evidence that abnormal parenting, sexual abuse, or other adverse life events influence sexual orientation. Current knowledge suggests that sexual orientation is usually established during early childhood.[4][160]
The American Psychological Association, American Psychiatric Association, and National Association of Social Workers stated in 2006:
Currently, there is no scientific consensus about the specific factors that cause an individual to become heterosexual, homosexual, or bisexual—including possible biological, psychological, or social effects of the parents' sexual orientation. However, the available evidence indicates that the vast majority of lesbian and gay adults were raised by heterosexual parents and the vast majority of children raised by lesbian and gay parents eventually grow up to be heterosexual.[2]
Despite numerous attempts, no "gay gene" has been identified. However, there is substantial evidence for a genetic basis of homosexuality, especially in males, based on twin studies and on associations with regions of chromosome 8, the Xq28 locus on the X chromosome, and other sites across many chromosomes.[161]
Starting in the 2010s, potential epigenetic factors have become a topic of increased attention in genetic research on sexual orientation. A study presented at the ASHG 2015 Annual Meeting found that the methylation pattern in nine regions of the genome appeared very closely linked to sexual orientation, with a resulting algorithm using the methylation pattern to predict the sexual orientation of a control group with almost 70% accuracy.[164][165]
Research into the causes of homosexuality plays a role in political and social debates and also raises concerns about genetic profiling and prenatal testing.[166][167]
Since homosexuality tends to lower reproductive success, and since there is considerable evidence that human sexual orientation is genetically influenced, it is unclear how it is maintained in the population at a relatively high frequency.[168] There are many possible explanations, such as genes predisposing to homosexuality also conferring an advantage in heterosexuals, a kin selection effect, social prestige, and more.[169] A 2009 study also suggested a significant increase in fecundity among the female maternal-line relatives of homosexual people (but not among their paternal-line relatives).[170]
There are no studies of adequate scientific rigor that conclude that sexual orientation change efforts work to change a person's sexual orientation. Those efforts have been controversial due to tensions between the values held by some faith-based organizations, on the one hand, and those held by LGBT rights organizations and professional and scientific organizations and other faith-based organizations, on the other.[14] The longstanding consensus of the behavioral and social sciences and the health and mental health professions is that homosexuality per se is a normal and positive variation of human sexual orientation, and therefore not a mental disorder.[14] The American Psychological Association says that "most people experience little or no sense of choice about their sexual orientation".[171]
Some individuals and groups have promoted the idea of homosexuality as symptomatic of developmental defects or spiritual and moral failings and have argued that sexual orientation change efforts, including psychotherapy and religious efforts, could alter homosexual feelings and behaviors. Many of these individuals and groups appeared to be embedded within the larger context of conservative religious political movements that have supported the stigmatization of homosexuality on political or religious grounds.[14]
No major mental health professional organization has sanctioned efforts to change sexual orientation and virtually all of them have adopted policy statements cautioning the profession and the public about treatments that purport to change sexual orientation. These include the American Psychiatric Association, American Psychological Association, American Counseling Association, National Association of Social Workers in the US,[172] the Royal College of Psychiatrists,[173] and the Australian Psychological Society.[174] The American Psychological Association and the Royal College of Psychiatrists expressed concerns that the positions espoused by NARTH are not supported by the science and create an environment in which prejudice and discrimination can flourish.[173][175]
The American Psychological Association states that "sexual orientation is not a choice that can be changed at will, and that sexual orientation is most likely the result of a complex interaction of environmental, cognitive and biological factors...is shaped at an early age...[and evidence suggests] biological, including genetic or inborn hormonal factors, play a significant role in a person's sexuality."[5] They say that "sexual orientation identity—not sexual orientation—appears to change via psychotherapy, support groups, and life events."[14] The American Psychiatric Association says "individuals may become aware at different points in their lives that they are heterosexual, gay, lesbian, or bisexual" and "opposes any psychiatric treatment, such as 'reparative' or 'conversion' therapy, which is based upon the assumption that homosexuality per se is a mental disorder, or based upon a prior assumption that the patient should change his/her homosexual orientation". They do, however, encourage gay affirmative psychotherapy.[176] Similarly, the American Psychological Association[177] is doubtful about the effectiveness and side-effect profile of sexual orientation change efforts, including conversion therapy.
The American Psychological Association "encourages mental health professionals to avoid misrepresenting the efficacy of sexual orientation change efforts by promoting or promising change in sexual orientation when providing assistance to individuals distressed by their own or others' sexual orientation and concludes that the benefits reported by participants in sexual orientation change efforts can be gained through approaches that do not attempt to change sexual orientation".[14]
Scientific research has been generally consistent in showing that lesbian and gay parents are as fit and capable as heterosexual parents, and their children are as psychologically healthy and well-adjusted as children reared by heterosexual parents.[178][179][180] According to scientific literature reviews, there is no evidence to the contrary.[3][181][182][183][184]
A 2001 review suggested that children with lesbian or gay parents appear less traditionally gender-typed and are more likely to be open to homoerotic relationships, partly due to genetic factors (80% of the children raised by same-sex couples in the US are not adopted, and most are the result of previous heterosexual marriages[185]) and family socialization processes (children grow up in relatively more tolerant school, neighborhood, and social contexts, which are less heterosexist), even though the majority of children raised by same-sex couples identify as heterosexual.[186] A 2005 review by Charlotte J. Patterson for the American Psychological Association found that the available data did not suggest higher rates of homosexuality among the children of lesbian or gay parents.[187]
The terms "men who have sex with men" (MSM) and "women who have sex with women" (WSW) refer to people who engage in sexual activity with others of the same sex regardless of how they identify themselves—as many choose not to accept social identities as lesbian, gay and bisexual.[188][189][190][191][192] These terms are often used in medical literature and social research to describe such groups for study, without needing to consider the issues of sexual self-identity. The terms are seen as problematic by some, however, because they "obscure social dimensions of sexuality; undermine the self-labeling of lesbian, gay, and bisexual people; and do not sufficiently describe variations in sexual behavior".[193]
In contrast to its benefits, sexual behavior can be a disease vector. Safe sex is a relevant harm reduction philosophy.[194] Many countries currently prohibit men who have sex with men from donating blood; the policy of the United States Food and Drug Administration states that "they are, as a group, at increased risk for HIV, hepatitis B and certain other infections that can be transmitted by transfusion."[195]
These safer sex recommendations are agreed upon by public health officials for women who have sex with women to avoid sexually transmitted infections (STIs):
These safer sex recommendations are agreed upon by public health officials for men who have sex with men to avoid sexually transmitted infections:
When it was first described in medical literature, homosexuality was often approached from a view that sought to find an inherent psychopathology as its root cause. Much literature on mental health and homosexual patients centered on their depression, substance abuse, and suicide. Although these issues exist among people who are non-heterosexual, discussion about their causes shifted after homosexuality was removed from the Diagnostic and Statistical Manual (DSM) in 1973. Instead, social ostracism, legal discrimination, internalization of negative stereotypes, and limited support structures are among the factors homosexual people face in Western societies that often adversely affect their mental health.[199]
Stigma, prejudice, and discrimination stemming from negative societal attitudes toward homosexuality lead to a higher prevalence of mental health disorders among lesbians, gay men, and bisexuals compared to their heterosexual peers.[200] Evidence indicates that the liberalization of these attitudes over the 1990s through the 2010s is associated with a decrease in such mental health risks among younger LGBT people.[201]
Gay and lesbian youth bear an increased risk of suicide, substance abuse, school problems, and isolation because of a "hostile and condemning environment, verbal and physical abuse, rejection and isolation from family and peers".[202] Further, LGBT youths are more likely to report psychological and physical abuse by parents or caretakers, and more sexual abuse. Suggested reasons for this disparity are that (1) LGBT youths may be specifically targeted on the basis of their perceived sexual orientation or gender non-conforming appearance, and (2) that "risk factors associated with sexual minority status, including discrimination, invisibility, and rejection by family members...may lead to an increase in behaviors that are associated with risk for victimization, such as substance abuse, sex with multiple partners, or running away from home as a teenager."[203] A 2008 study showed a correlation between the degree of rejecting behavior by parents of LGB adolescents and negative health problems in the teenagers studied:
Higher rates of family rejection were significantly associated with poorer health outcomes. On the basis of odds ratios, lesbian, gay, and bisexual young adults who reported higher levels of family rejection during adolescence were 8.4 times more likely to report having attempted suicide, 5.9 times more likely to report high levels of depression, 3.4 times more likely to use illegal drugs, and 3.4 times more likely to report having engaged in unprotected sexual intercourse compared with peers from families that reported no or low levels of family rejection.[204]
Crisis centers in larger cities and information sites on the Internet have arisen to help youth and adults.[205] The Trevor Project, a suicide prevention helpline for gay youth, was established following the 1998 airing on HBO of the Academy Award-winning short film Trevor.[206]
Most nations do not prohibit consensual sex between unrelated persons above the local age of consent. Some jurisdictions further recognize identical rights, protections, and privileges for the family structures of same-sex couples, including marriage. Some countries and jurisdictions mandate that all individuals restrict themselves to heterosexual activity and disallow homosexual activity via sodomy laws. Offenders can face the death penalty in Islamic countries and jurisdictions ruled by sharia. There are, however, often significant differences between official policy and real-world enforcement.
Although homosexual acts were decriminalized in some parts of the Western world, such as Poland in 1932, Denmark in 1933, Sweden in 1944, and England and Wales in 1967, it was not until the mid-1970s that the gay community first began to achieve limited civil rights in some developed countries. A turning point was reached in 1973 when the American Psychiatric Association, which had listed homosexuality in the DSM-I in 1952, removed homosexuality from the DSM-II, in recognition of scientific evidence.[3] In 1977, Quebec became the first state-level jurisdiction in the world to prohibit discrimination on the grounds of sexual orientation. During the 1980s and 1990s, most developed countries enacted laws decriminalizing homosexual behavior and prohibiting discrimination against lesbian and gay people in employment, housing, and services. On the other hand, many countries today in the Middle East and Africa, as well as several countries in Asia, the Caribbean and the South Pacific, outlaw homosexuality. In 2013, the Supreme Court of India upheld Section 377 of the Indian Penal Code,[207] but in 2018 overturned itself and legalized homosexual activity throughout India.[208] Section 377, a British colonial-era law criminalizing homosexual activity, remains in effect in more than 40 former British colonies.[209][210]
Ten countries or jurisdictions, all of which are Islamic and ruled by sharia, impose the death penalty for homosexuality.
In the European Union, discrimination of any type based on sexual orientation or gender identity is illegal under the Charter of Fundamental Rights of the European Union.[220]
Since the 1960s, many LGBT people in the West, particularly those in major metropolitan areas, have developed a so-called gay culture. To many,[who?] gay culture is exemplified by the gay pride movement, with annual parades and displays of rainbow flags. Yet not all LGBT people choose to participate in "queer culture", and many gay men and women specifically decline to do so. To some[who?] it seems to be a frivolous display, perpetuating gay stereotypes.
With the outbreak of AIDS in the early 1980s, many LGBT groups and individuals organized campaigns to promote efforts in AIDS education, prevention, research, patient support, and community outreach, as well as to demand government support for these programs.
The death toll wrought by the AIDS epidemic at first seemed to slow the progress of the gay rights movement, but in time it galvanized some parts of the LGBT community into community service and political action, and challenged the heterosexual community to respond compassionately. Major American motion pictures from this period that dramatized the response of individuals and communities to the AIDS crisis include An Early Frost (1985), Longtime Companion (1990), And the Band Played On (1993), Philadelphia (1993), and Common Threads: Stories from the Quilt (1989).
Publicly gay politicians have attained numerous government posts, even in countries that had sodomy laws in their recent past. Examples include Guido Westerwelle, Germany's Vice-Chancellor; Peter Mandelson, a British Labour Party cabinet minister; and Per-Kristian Foss, formerly Norwegian Minister of Finance.
LGBT movements are opposed by a variety of individuals and organizations. Some social conservatives believe that all sexual relationships with people other than an opposite-sex spouse undermine the traditional family[221] and that children should be reared in homes with both a father and a mother.[222][223] Some argue that gay rights may conflict with individuals' freedom of speech,[224][225] religious freedoms in the workplace,[226][227] the ability to run churches,[228] charitable organizations[229][230] and other religious organizations[231] in accordance with one's religious views, and that the acceptance of homosexual relationships by religious organizations might be forced through threatening to remove the tax-exempt status of churches whose views do not align with those of the government.[232][233][234][235] Some critics charge that political correctness has led to the association of sex between males and HIV being downplayed.[236]
Policies and attitudes toward gay and lesbian military personnel vary widely around the world. Some countries allow gay men, lesbians, and bisexual people to serve openly and have granted them the same rights and privileges as their heterosexual counterparts. Many countries neither ban nor support LGB service members. A few countries continue to ban homosexual personnel outright.
Most Western military forces have removed policies excluding sexual minority members. Of the 26 countries that participate militarily in NATO, more than 20 permit openly gay, lesbian and bisexual people to serve. Of the permanent members of the United Nations Security Council, three (the United Kingdom, France and the United States) do so. The other two generally do not: China bans gay and lesbian people outright, while Russia excludes all gay and lesbian people during peacetime but allows some gay men to serve in wartime. Israel is the only country in the Middle East region that allows openly LGB people to serve in the military.
While the question of homosexuality in the military has been highly politicized in the United States, it is not necessarily so in many countries. Generally speaking, sexuality in these cultures is considered a more personal aspect of one's identity than it is in the United States.
According to the American Psychological Association, empirical evidence fails to show that sexual orientation is germane to any aspect of military effectiveness including unit cohesion, morale, recruitment and retention.[237] Sexual orientation is irrelevant to task cohesion, the only type of cohesion that critically predicts the team's military readiness and success.[238]
Societal acceptance of non-heterosexual orientations such as homosexuality is lowest in Asian, African and Eastern European countries,[239][240] and is highest in Western Europe, Australia, and the Americas. Western society has become increasingly accepting of homosexuality since the 1990s. In 2017, Professor Amy Adamczyk contended that these cross-national differences in acceptance can be largely explained by three factors: the relative strength of democratic institutions, the level of economic development, and the religious context of the places where people live.[241]
In 2006, the American Psychological Association, American Psychiatric Association and National Association of Social Workers stated in an amicus brief presented to the Supreme Court of California: "Gay men and lesbians form stable, committed relationships that are equivalent to heterosexual relationships in essential respects. The institution of marriage offers social, psychological, and health benefits that are denied to same-sex couples. By denying same-sex couples the right to marry, the state reinforces and perpetuates the stigma historically associated with homosexuality. Homosexuality remains stigmatized, and this stigma has negative consequences. California's prohibition on marriage for same-sex couples reflects and reinforces this stigma". They concluded: "There is no scientific basis for distinguishing between same-sex couples and heterosexual couples with respect to the legal rights, obligations, benefits, and burdens conferred by civil marriage."[3]
Though the relationship between homosexuality and religion is complex, current authoritative bodies and doctrines of the world's largest religions view homosexual behaviour negatively.[citation needed] This can range from quietly discouraging homosexual activity, to explicitly forbidding same-sex sexual practices among adherents and actively opposing social acceptance of homosexuality. Some teach that homosexual desire itself is sinful,[242] others state that only the sexual act is a sin,[243] others are completely accepting of gays and lesbians,[244] while some encourage homosexuality.[245] Some claim that homosexuality can be overcome through religious faith and practice. On the other hand, voices exist within many of these religions that view homosexuality more positively, and liberal religious denominations may bless same-sex marriages. Some view same-sex love and sexuality as sacred, and a mythology of same-sex love can be found around the world.[246]
Gay bullying can be the verbal or physical abuse against a person who is perceived by the aggressor to be lesbian, gay, bisexual or transgender, including persons who are actually heterosexual or of non-specific or unknown sexual orientation. In the US, teenage students heard anti-gay slurs such as "homo", "faggot" and "sissy" about 26 times a day on average, or once every 14 minutes, according to a 1998 study by Mental Health America (formerly National Mental Health Association).[247]
In many cultures, homosexual people are frequently subject to prejudice and discrimination. A 2011 Dutch study concluded that 49% of Holland's youth and 58% of foreign youth in the country reject homosexuality.[248] Similar to other minority groups, they can also be subject to stereotyping. These attitudes tend to be due to forms of homophobia and heterosexism (negative attitudes, bias, and discrimination in favor of opposite-sex sexuality and relationships). Heterosexism can include the presumption that everyone is heterosexual or that opposite-sex attractions and relationships are the norm and therefore superior. Homophobia is a fear of, aversion to, or discrimination against homosexual people. It manifests in different forms, and a number of different types have been postulated, among which are internalized homophobia, social homophobia, emotional homophobia, rationalized homophobia, and others.[249] Related terms are lesbophobia (specifically targeting lesbians) and biphobia (targeting bisexual people). When such attitudes manifest as crimes they are often called hate crimes and gay bashing.
Negative stereotypes characterize LGB people as less romantically stable, more promiscuous and more likely to abuse children, but there is no scientific basis to such assertions. Gay men and lesbians form stable, committed relationships that are equivalent to heterosexual relationships in essential respects.[3] Sexual orientation does not affect the likelihood that people will abuse children.[250][251][252] Claims that there is scientific evidence to support an association between being gay and being a pedophile are based on misuses of those terms and misrepresentation of the actual evidence.[251]
In the United States, the FBI reported that 20.4% of hate crimes reported to law enforcement in 2011 were based on sexual orientation bias. 56.7% of these crimes were based on bias against homosexual men. 11.1% were based on bias against homosexual women. 29.6% were based on anti-homosexual bias without regard to gender.[253] The 1998 murder of Matthew Shepard, a gay student, is a notorious such incident in the U.S. LGBT people, especially lesbians, may become the victims of "corrective rape", a violent crime with the supposed aim of making them heterosexual. In certain parts of the world, LGBT people are also at risk of "honor killings" perpetrated by their families or relatives.[254][255][256]
In Morocco, a constitutional monarchy following Islamic laws, homosexual acts are a punishable offence. With a population hostile towards LGBT people, the country has witnessed public demonstrations against homosexuals, public denunciations of presumed homosexual individuals, as well as violent intrusions in private homes. The community in the country is exposed to additional risk of prejudice, social rejection and violence, with a greater impossibility of obtaining protection even from the police.[257]
In 2017, Abderrahim El Habachi became one of the many homosexual Moroccans to flee the country out of fear, seeking shelter in the United Kingdom. In August 2019, he accused the UK Home Office of not taking LGBT migrants seriously, saying that they are put through difficult times because the Home Office does not believe their sexual orientation or the struggles they face in their home countries.[258]
Homosexual and bisexual behaviors occur in a number of other animal species. Such behaviors include sexual activity, courtship, affection, pair bonding, and parenting,[20] and are widespread; a 1999 review by researcher Bruce Bagemihl shows that homosexual behavior has been documented in about 500 species, ranging from primates to gut worms.[20][21] Animal sexual behavior takes many different forms, even within the same species. The motivations for and implications of these behaviors have yet to be fully understood, since most species have yet to be fully studied.[260] According to Bagemihl, "the animal kingdom [does] it with much greater sexual diversity—including homosexual, bisexual and nonreproductive sex—than the scientific community and society at large have previously been willing to accept".[261]
A review paper by N. W. Bailey and Marlene Zuk looking into studies of same-sex sexual behaviour in animals challenges the view that such behaviour lowers reproductive success, citing several hypotheses about how same-sex sexual behavior might be adaptive; these hypotheses vary greatly among different species. Bailey and Zuk also suggest future research needs to look into evolutionary consequences of same-sex sexual behaviour, rather than only looking into origins of such behaviour.[262]
en/2614.html.txt
ADDED
@@ -0,0 +1,243 @@
Homosexuality is romantic attraction, sexual attraction, or sexual behavior between members of the same sex or gender.[1] As a sexual orientation, homosexuality is "an enduring pattern of emotional, romantic, and/or sexual attractions" to people of the same sex. It "also refers to a person's sense of identity based on those attractions, related behaviors, and membership in a community of others who share those attractions."[2][3]
Along with bisexuality and heterosexuality, homosexuality is one of the three main categories of sexual orientation within the heterosexual–homosexual continuum.[2] Scientists do not know the exact cause of sexual orientation, but they theorize that it is caused by a complex interplay of genetic, hormonal, and environmental influences,[4][5][6] and do not view it as a choice.[4][5][7] Although no single theory on the cause of sexual orientation has yet gained widespread support, scientists favor biologically-based theories.[4] There is considerably more evidence supporting nonsocial, biological causes of sexual orientation than social ones, especially for males.[8][9][10] There is no substantive evidence which suggests parenting or early childhood experiences play a role with regard to sexual orientation.[11] While some people believe that homosexual activity is unnatural,[12] scientific research shows that homosexuality is a normal and natural variation in human sexuality and is not in and of itself a source of negative psychological effects.[2][13] There is insufficient evidence to support the use of psychological interventions to change sexual orientation.[14][15]
The most common terms for homosexual people are lesbian for females and gay for males, but gay also commonly refers to both homosexual females and males. The percentage of people who are gay or lesbian and the proportion of people who are in same-sex romantic relationships or have had same-sex sexual experiences are difficult for researchers to estimate reliably for a variety of reasons, including many gay and lesbian people not openly identifying as such due to prejudice or discrimination such as homophobia and heterosexism.[16] Homosexual behavior has also been documented in many non-human animal species.[22]
Many gay and lesbian people are in committed same-sex relationships, though only in the 2010s have census forms and political conditions facilitated their visibility and enumeration.[23] These relationships are equivalent to heterosexual relationships in essential psychological respects.[3] Homosexual relationships and acts have been admired, as well as condemned, throughout recorded history, depending on the form they took and the culture in which they occurred.[24] Since the end of the 19th century, there has been a global movement towards freedom and equality for gay people, including the introduction of anti-bullying legislation to protect gay children at school, legislation ensuring non-discrimination, equal ability to serve in the military, equal access to health care, equal ability to adopt and parent, and the establishment of marriage equality.
The word homosexual is a Greek and Latin hybrid, with the first element derived from Greek ὁμός homos, "same" (not related to the Latin homo, "man", as in Homo sapiens), thus connoting sexual acts and affections between members of the same sex, including lesbianism.[25][26] The first known appearance of homosexual in print is found in an 1869 German pamphlet by the Austrian-born novelist Karl-Maria Kertbeny, published anonymously,[27] arguing against a Prussian anti-sodomy law.[27][28] In 1886, the psychiatrist Richard von Krafft-Ebing used the terms homosexual and heterosexual in his book Psychopathia Sexualis. Krafft-Ebing's book was so popular among both laymen and doctors that the terms heterosexual and homosexual became the most widely accepted terms for sexual orientation.[29][30] As such, the current use of the term has its roots in the broader 19th-century tradition of personality taxonomy.
Many modern style guides in the U.S. recommend against using homosexual as a noun, instead using gay man or lesbian.[31] Similarly, some recommend completely avoiding usage of homosexual as it has a negative, clinical history and because the word only refers to one's sexual behavior (as opposed to romantic feelings) and thus it has a negative connotation.[31] Gay and lesbian are the most common alternatives. The first letters are frequently combined to create the initialism LGBT (sometimes written as GLBT), in which B and T refer to bisexual and transgender people.
Gay especially refers to male homosexuality,[32] but may be used in a broader sense to refer to all LGBT people. In the context of sexuality, lesbian refers only to female homosexuality. The word lesbian is derived from the name of the Greek island Lesbos, where the poet Sappho wrote largely about her emotional relationships with young women.[33][34]
Although early writers also used the adjective homosexual to refer to any single-sex context (such as an all-girls school), today the term is used exclusively in reference to sexual attraction, activity, and orientation. The term homosocial is now used to describe single-sex contexts that are not specifically sexual. There is also a word referring to same-sex love, homophilia.
Some synonyms for same-sex attraction or sexual activity include men who have sex with men or MSM (used in the medical community when specifically discussing sexual activity) and homoerotic (referring to works of art).[35][36] Pejorative terms in English include queer, faggot, fairy, poof, and homo.[37][38][39][40] Beginning in the 1990s, some of these have been reclaimed as positive words by gay men and lesbians, as in the usage of queer studies, queer theory, and even the popular American television program Queer Eye for the Straight Guy.[41] The word homo occurs in many other languages without the pejorative connotations it has in English.[42] As with ethnic slurs and racial slurs, the use of these terms can still be highly offensive. The range of acceptable use for these terms depends on the context and speaker.[43] Conversely, gay, a word originally embraced by homosexual men and women as a positive, affirmative term (as in gay liberation and gay rights),[44] has come into widespread pejorative use among young people.[45]
The American LGBT rights organization GLAAD advises the media to avoid using the term homosexual to describe gay people or same-sex relationships as the term is "frequently used by anti-gay extremists to denigrate gay people, couples and relationships".[46]
Some scholars argue that the term "homosexuality" is problematic when applied to ancient cultures since, for example, neither the Greeks nor the Romans possessed any one word covering the same semantic range as the modern concept of "homosexuality".[47][48] Furthermore, there were diverse sexual practices that varied in acceptance depending on time and place.[47] Other scholars argue that there are significant continuities between ancient and modern homosexuality.[49][50]
In a detailed compilation of historical and ethnographic materials of preindustrial cultures, "strong disapproval of homosexuality was reported for 41% of 42 cultures; it was accepted or ignored by 21%, and 12% reported no such concept. Of 70 ethnographies, 59% reported homosexuality absent or rare in frequency and 41% reported it present or not uncommon."[51]
In cultures influenced by Abrahamic religions, the law and the church established sodomy as a transgression against divine law or a crime against nature. The condemnation of anal sex between males, however, predates Christian belief. It was frequent in ancient Greece; "unnatural" can be traced back to Plato.[52]
Many historical figures, including Socrates, Lord Byron, Edward II, and Hadrian,[53] have had terms such as gay or bisexual applied to them. Some scholars, such as Michel Foucault, have regarded this as risking the anachronistic introduction of a contemporary construction of sexuality foreign to their times,[54] though other scholars challenge this.[55][50][49]
In social science, there has been a dispute between "essentialist" and "constructionist" views of homosexuality. The debate divides those who believe that terms such as "gay" and "straight" refer to objective, culturally invariant properties of persons from those who believe that the experiences they name are artifacts of unique cultural and social processes. "Essentialists" typically believe that sexual preferences are determined by biological forces, while "constructionists" assume that sexual desires are learned.[56] The philosopher of science Michael Ruse has stated that the social constructionist approach, which is influenced by Foucault, is based on a selective reading of the historical record that confuses the existence of homosexual people with the way in which they are labelled or treated.[57]
The first record of a possible homosexual couple in history is commonly regarded as Khnumhotep and Niankhkhnum, an ancient Egyptian male couple, who lived around 2400 BCE. The pair are portrayed in a nose-kissing position, the most intimate pose in Egyptian art, surrounded by what appear to be their heirs. The anthropologists Stephen Murray and Will Roscoe reported that women in Lesotho engaged in socially sanctioned "long term, erotic relationships" called motsoalle.[58] The anthropologist E. E. Evans-Pritchard also recorded that male Azande warriors in the northern Congo routinely took on young male lovers between the ages of twelve and twenty, who helped with household tasks and participated in intercrural sex with their older husbands.[59]
As is true of many other non-Western cultures, it is difficult to determine the extent to which Western notions of sexual orientation and gender identity apply to Pre-Columbian cultures. Evidence of homoerotic sexual acts and transvestism has been found in many pre-conquest civilizations in Latin America, such as the Aztecs, Mayas, Quechuas, Moches, Zapotecs, the Incas, and the Tupinambá of Brazil.[60][61][62]
The Spanish conquerors were horrified to discover sodomy openly practiced among native peoples, and attempted to crush it out by subjecting the berdaches (as the Spanish called them) under their rule to severe penalties, including public execution, burning and being torn to pieces by dogs.[63] The Spanish conquerors talked extensively of sodomy among the natives to depict them as savages and hence justify their conquest and forceful conversion to Christianity. As a result of the growing influence and power of the conquerors, many native cultures started condemning homosexual acts themselves.[citation needed]
Among some of the indigenous peoples of the Americas in North America prior to European colonization, a relatively common form of same-sex sexuality centered around the figure of the Two-Spirit individual (the term itself was coined only in 1990). Typically, this individual was recognized early in life, given a choice by the parents to follow the path and, if the child accepted the role, raised in the appropriate manner, learning the customs of the gender it had chosen. Two-Spirit individuals were commonly shamans and were revered as having powers beyond those of ordinary shamans. Their sexual life was with the ordinary tribe members of the same sex.[citation needed]
During the colonial times following the European invasion, homosexuality was prosecuted by the Inquisition, sometimes leading to death sentences on the charges of sodomy, and the practices became clandestine. Many homosexual individuals went into heterosexual marriages to keep up appearances, and many turned to the clergy to escape public scrutiny of their lack of interest in the opposite sex.[citation needed]
In 1986, the Supreme Court of the United States ruled in Bowers v. Hardwick that a state could criminalize sodomy, but, in 2003, overturned itself in Lawrence v. Texas and thereby legalized homosexual activity throughout the United States of America.
Same-sex marriage in the United States expanded from one state in 2004 to all fifty states in 2015, through various state court rulings, state legislation, direct popular votes (referendums and initiatives), and federal court rulings.
In East Asia, same-sex love has been referred to since the earliest recorded history.
Homosexuality in China, known as the passions of the cut peach and various other euphemisms, has been recorded since approximately 600 BCE. Homosexuality was mentioned in many famous works of Chinese literature. The instances of same-sex affection and sexual interactions described in the classical novel Dream of the Red Chamber seem as familiar to observers in the present as do equivalent stories of romances between heterosexual people during the same period. Confucianism, being primarily a social and political philosophy, focused little on sexuality, whether homosexual or heterosexual. Ming Dynasty literature, such as Bian Er Chai (弁而釵/弁而钗), portray homosexual relationships between men as more enjoyable and more "harmonious" than heterosexual relationships.[64] Writings from the Liu Song Dynasty by Wang Shunu claimed that homosexuality was as common as heterosexuality in the late 3rd century.[65]
Opposition to homosexuality in China originates in the medieval Tang Dynasty (618–907), attributed to the rising influence of Christian and Islamic values,[66] but did not become fully established until the Westernization efforts of the late Qing Dynasty and the Republic of China.[67]
The Laws of Manu mentions a "third sex", members of which may engage in nontraditional gender expression and homosexual activities.[68]
The earliest Western documents (in the form of literary works, art objects, and mythographic materials) concerning same-sex relationships are derived from ancient Greece.
In regard to male homosexuality, such documents depict an at times complex understanding in which relationships with women and relationships with adolescent boys could be a part of a normal man's love life. Same-sex relationships were a social institution variously constructed over time and from one city to another. The formal practice, an erotic yet often restrained relationship between a free adult male and a free adolescent, was valued for its pedagogic benefits and as a means of population control, though occasionally blamed for causing disorder. Plato praised its benefits in his early writings[69] but in his late works proposed its prohibition.[70] Aristotle, in the Politics, dismissed Plato's ideas about abolishing homosexuality (2.4); he explains that barbarians like the Celts accorded it a special honor (2.6.6), while the Cretans used it to regulate the population (2.7.5).[71]
Little is known of female homosexuality in antiquity. Sappho, born on the island of Lesbos, was included by later Greeks in the canonical list of nine lyric poets. The adjectives deriving from her name and place of birth (Sapphic and Lesbian) came to be applied to female homosexuality beginning in the 19th century.[72][73] Sappho's poetry centers on passion and love for various personages and both genders. The narrators of many of her poems speak of infatuations and love (sometimes requited, sometimes not) for various females, but descriptions of physical acts between women are few and subject to debate.[74][75]
In Ancient Rome, the young male body remained a focus of male sexual attention, but relationships were between older free men and slaves or freed youths who took the receptive role in sex. The Hellenophile emperor Hadrian is renowned for his relationship with Antinous, but the Christian emperor Theodosius I decreed a law on 6 August 390, condemning passive males to be burned at the stake. Notwithstanding these regulations, taxes on brothels with boys available for homosexual sex continued to be collected until the end of the reign of Anastasius I in 518. Justinian, towards the end of his reign, expanded the proscription to the active partner as well (in 558), warning that such conduct can lead to the destruction of cities through the "wrath of God".
During the Renaissance, wealthy cities in northern Italy—Florence and Venice in particular—were renowned for their widespread practice of same-sex love, engaged in by a considerable part of the male population and constructed along the classical pattern of Greece and Rome.[76][77] But even as many of the male population were engaging in same-sex relationships, the authorities, under the aegis of the Officers of the Night court, were prosecuting, fining, and imprisoning a good portion of that population.
From the second half of the 13th century, death was the punishment for male homosexuality in most of Europe.[78]
The relationships of socially prominent figures, such as King James I and the Duke of Buckingham, served to highlight the issue, including in anonymously authored street pamphlets: "The world is chang'd I know not how, For men Kiss Men, not Women now;...Of J. the First and Buckingham: He, true it is, his Wives Embraces fled, To slabber his lov'd Ganimede" (Mundus Foppensis, or The Fop Display'd, 1691).
Love Letters Between a Certain Late Nobleman and the Famous Mr. Wilson was published in 1723 in England, and was presumed by some modern scholars to be a novel. The 1749 edition of John Cleland's popular novel Fanny Hill includes a homosexual scene, but this was removed in its 1750 edition. Also in 1749, the earliest extended and serious defense of homosexuality in English, Ancient and Modern Pederasty Investigated and Exemplified, written by Thomas Cannon, was published, but was suppressed almost immediately. It includes the passage, "Unnatural Desire is a Contradiction in Terms; downright Nonsense. Desire is an amatory Impulse of the inmost human Parts."[79] Around 1785 Jeremy Bentham wrote another defense, but this was not published until 1978.[80] Executions for sodomy continued in the Netherlands until 1803, and in England until 1835, James Pratt and John Smith being the last Englishmen to be so hanged.
Between 1864 and 1880 Karl Heinrich Ulrichs published a series of twelve tracts, which he collectively titled Research on the Riddle of Man-Manly Love. In 1867, he became the first self-proclaimed homosexual person to speak out publicly in defense of homosexuality when he pleaded at the Congress of German Jurists in Munich for a resolution urging the repeal of anti-homosexual laws.[16] Sexual Inversion by Havelock Ellis, published in 1896, challenged theories that homosexuality was abnormal, as well as stereotypes, and insisted on the ubiquity of homosexuality and its association with intellectual and artistic achievement.[81]
Although medical texts like these (written partly in Latin to obscure the sexual details) were not widely read by the general public, they did lead to the rise of Magnus Hirschfeld's Scientific-Humanitarian Committee, which campaigned from 1897 to 1933 against anti-sodomy laws in Germany, as well as a much more informal, unpublicized movement among British intellectuals and writers, led by such figures as Edward Carpenter and John Addington Symonds. Beginning in 1894 with Homogenic Love, Socialist activist and poet Edward Carpenter wrote a string of pro-homosexual articles and pamphlets, and "came out" in 1916 in his book My Days and Dreams. In 1900, Elisar von Kupffer published an anthology of homosexual literature from antiquity to his own time, Lieblingminne und Freundesliebe in der Weltliteratur.
There are a handful of accounts by Arab travelers to Europe during the mid-1800s. Two of these travelers, Rifa'ah al-Tahtawi and Muhammad as-Saffar, show their surprise that the French sometimes deliberately mistranslated love poetry about a young boy, instead referring to a young female, to maintain their social norms and morals.[82]
Israel is considered the most tolerant country in the Middle East and Asia to homosexuals,[83] with Tel Aviv being named "the gay capital of the Middle East"[84] and considered one of the most gay friendly cities in the world.[85] The annual Pride Parade in support of homosexuality takes place in Tel Aviv.[86]
On the other hand, many governments in the Middle East often ignore, deny the existence of, or criminalize homosexuality. Homosexuality is illegal in almost all Muslim countries.[87] Same-sex intercourse officially carries the death penalty in several Muslim nations: Saudi Arabia, Iran, Mauritania, northern Nigeria, Sudan, and Yemen.[88] Iranian President Mahmoud Ahmadinejad, during his 2007 speech at Columbia University, asserted that there were no gay people in Iran. However, the probable reason is that they keep their sexuality a secret for fear of government sanction or rejection by their families.[89]
In ancient Sumer, a set of priests known as gala worked in the temples of the goddess Inanna, where they performed elegies and lamentations.[91]:285 Gala took female names, spoke in the eme-sal dialect, which was traditionally reserved for women, and appear to have engaged in homosexual intercourse.[92] The Sumerian sign for gala was a ligature of the signs for "penis" and "anus".[92] One Sumerian proverb reads: "When the gala wiped off his ass [he said], 'I must not arouse that which belongs to my mistress [i.e., Inanna].'"[92] In later Mesopotamian cultures, kurgarrū and assinnu were servants of the goddess Ishtar (Inanna's East Semitic equivalent), who dressed in female clothing and performed war dances in Ishtar's temples.[92] Several Akkadian proverbs seem to suggest that they may have also engaged in homosexual intercourse.[92]
In ancient Assyria, homosexuality was present and common; it was also not prohibited, condemned, or looked upon as immoral or disordered. Some religious texts contain prayers for divine blessings on homosexual relationships.[93][94] The Almanac of Incantations contained prayers favoring, on an equal basis, the love of a man for a woman, of a woman for a man, and of a man for a man.[95]
Some scholars argue that there are examples of homosexual love in ancient literature, such as the Mesopotamian Epic of Gilgamesh and the Biblical story of David and Jonathan. In the Epic of Gilgamesh, the relationship between the main protagonist Gilgamesh and the character Enkidu has been seen by some to be homosexual in nature.[96][97][98][99] Similarly, David's love for Jonathan is described as "greater than the love of women."[100]
In some societies of Melanesia, especially in Papua New Guinea, same-sex relationships were an integral part of the culture until the middle of the 1900s. The Etoro and Marind-anim, for example, viewed heterosexuality as unclean and celebrated homosexuality instead. In some traditional Melanesian cultures a prepubertal boy would be paired with an older adolescent who would become his mentor and who would "inseminate" him (orally, anally, or topically, depending on the tribe) over a number of years in order for the younger boy to reach puberty. Many Melanesian societies, however, have become hostile towards same-sex relationships since the introduction of Christianity by European missionaries.[101]
The American Psychological Association, the American Psychiatric Association, and the National Association of Social Workers identify sexual orientation as "not merely a personal characteristic that can be defined in isolation. Rather, one's sexual orientation defines the universe of persons with whom one is likely to find the satisfying and fulfilling relationships":[3]
Sexual orientation is commonly discussed as a characteristic of the individual, like biological sex, gender identity, or age. This perspective is incomplete because sexual orientation is always defined in relational terms and necessarily involves relationships with other individuals. Sexual acts and romantic attractions are categorized as homosexual or heterosexual according to the biological sex of the individuals involved in them, relative to each other. Indeed, it is by acting—or desiring to act—with another person that individuals express their heterosexuality, homosexuality, or bisexuality. This includes actions as simple as holding hands with or kissing another person. Thus, sexual orientation is integrally linked to the intimate personal relationships that human beings form with others to meet their deeply felt needs for love, attachment, and intimacy. In addition to sexual behavior, these bonds encompass nonsexual physical affection between partners, shared goals and values, mutual support, and ongoing commitment.[3]
The Kinsey scale, also called the Heterosexual-Homosexual Rating Scale,[102] attempts to describe a person's sexual history or episodes of his or her sexual activity at a given time. It uses a scale from 0, meaning exclusively heterosexual, to 6, meaning exclusively homosexual. In both the Male and Female volumes of the Kinsey Reports, an additional grade, listed as "X", has been interpreted by scholars to indicate asexuality.[103]
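As an illustrative sketch only, the grades can be written out as a simple Python mapping. The entries for 0, 6 and "X" follow the description above; the intermediate labels (1 through 5) are the wordings commonly quoted from the Kinsey Reports and are included here for completeness, not taken from this article:

# Hypothetical sketch of the Kinsey scale as a lookup table.
KINSEY_SCALE = {
    0: "Exclusively heterosexual",
    1: "Predominantly heterosexual, only incidentally homosexual",
    2: "Predominantly heterosexual, but more than incidentally homosexual",
    3: "Equally heterosexual and homosexual",
    4: "Predominantly homosexual, but more than incidentally heterosexual",
    5: "Predominantly homosexual, only incidentally heterosexual",
    6: "Exclusively homosexual",
    "X": "No socio-sexual contacts or reactions (interpreted by scholars as asexuality)",
}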
Often, sexual orientation and sexual identity are not distinguished, which can affect the accurate assessment of sexual identity and of whether or not sexual orientation is able to change; sexual orientation identity can change throughout an individual's life, and may or may not align with biological sex, sexual behavior or actual sexual orientation.[104][105][106] Sexual orientation is stable and unlikely to change for the vast majority of people, but some research indicates that some people may experience change in their sexual orientation, and this is more likely for women than for men.[107] The American Psychological Association distinguishes between sexual orientation (an innate attraction) and sexual orientation identity (which may change at any point in a person's life).[108]
People with a homosexual orientation can express their sexuality in a variety of ways, and may or may not express it in their behaviors.[2] Many have sexual relationships predominantly with people of their own sex, though some have sexual relationships with those of the opposite sex, bisexual relationships, or none at all (celibacy).[2] Studies have found same-sex and opposite-sex couples to be equivalent to each other in measures of satisfaction and commitment in relationships, that age and sex are more reliable than sexual orientation as a predictor of satisfaction and commitment to a relationship, and that people who are heterosexual or homosexual share comparable expectations and ideals with regard to romantic relationships.[109][110][111]
Coming out (of the closet) is a phrase referring to a person's disclosure of their sexual orientation or gender identity, and is described and experienced variously as a psychological process or journey.[113] Generally, coming out is described in three phases. The first phase is that of "knowing oneself", in which the realization emerges that one is open to same-sex relations.[114] This is often described as an internal coming out. The second phase involves one's decision to come out to others, e.g. family, friends, or colleagues. The third phase more generally involves living openly as an LGBT person.[115] In the United States today, people often come out during high school or college age. At this age, they may not trust or ask for help from others, especially when their orientation is not accepted in society. Sometimes their own families are not even informed.
According to Rosario, Schrimshaw, Hunter, Braun (2006), "the development of a lesbian, gay, or bisexual (LGB) sexual identity is a complex and often difficult process. Unlike members of other minority groups (e.g., ethnic and racial minorities), most LGB individuals are not raised in a community of similar others from whom they learn about their identity and who reinforce and support that identity. Rather, LGB individuals are often raised in communities that are either ignorant of or openly hostile toward homosexuality."[105]
Outing is the practice of publicly revealing the sexual orientation of a closeted person.[116] Notable politicians, celebrities, military service people, and clergy members have been outed, with motives ranging from malice to political or moral beliefs. Many commentators oppose the practice altogether,[117] while some encourage outing public figures who use their positions of influence to harm other gay people.[118]
Lesbians often experience their sexuality differently from gay men, and have different understandings about etiology from those derived from studies focused mostly on men.[citation needed]
In a U.S.-based 1970s mail survey by Shere Hite, lesbians self-reported their reasons for being lesbian. This is the only major piece of research into female sexuality that has looked at how women understand being homosexual since Kinsey in 1953.[citation needed] The research yielded information about women's general understanding of lesbian relationships and their sexual orientation. Women gave various reasons for preferring sexual relations with women to sexual relations with men, including finding women more sensitive to other people's needs.[119]
Since Hite carried out her study she has acknowledged that some women may have chosen the political identity of a lesbian. As recently as 2009, Julie Bindel, a UK journalist, reaffirmed that "political lesbianism continues to make intrinsic sense because it reinforces the idea that sexuality is a choice, and we are not destined to a particular fate because of our chromosomes."[120]
Reliable data as to the size of the gay and lesbian population are of value in informing public policy.[123] For example, demographics would help in calculating the costs and benefits of domestic partnership benefits, of legalizing gay adoption, and of the U.S. military's Don't Ask Don't Tell policy.[123] Further, knowledge of the size of the "gay and lesbian population holds promise for helping social scientists understand a wide array of important questions—questions about the general nature of labor market choices, accumulation of human capital, specialization within households, discrimination, and decisions about geographic location."[123]
Measuring the prevalence of homosexuality presents difficulties. It is necessary to consider the measuring criteria that are used, the cutoff point and the time span taken to define a sexual orientation.[16] Many people, despite having same-sex attractions, may be reluctant to identify themselves as gay or bisexual. The research must measure some characteristic that may or may not be defining of sexual orientation. The number of people with same-sex desires may be larger than the number of people who act on those desires, which in turn may be larger than the number of people who self-identify as gay, lesbian, or bisexual.[123]
In 1948 and 1953, Alfred Kinsey reported that nearly 46% of the male subjects had "reacted" sexually to persons of both sexes in the course of their adult lives, and 37% had had at least one homosexual experience.[124][125] Kinsey's methodology was criticized by John Tukey for using convenience samples and not random samples.[126][127]
A later study tried to eliminate the sample bias, but still reached similar conclusions.[128] Simon LeVay cites these Kinsey results as an example of the caution needed to interpret demographic studies, as they may give quite differing numbers depending on what criteria are used to conduct them, in spite of using sound scientific methods.[16]
According to major studies, 2% to 11% of people have had some form of same-sex sexual contact within their lifetime;[129][130][131][132][133][134][135][136][137][138] this percentage rises to 16–21% when either or both same-sex attraction and behavior are reported.[138] In a 2006 study, 20% of respondents anonymously reported some homosexual feelings, although only 2–3% identified themselves as homosexual.[139] A 1992 study reported that 6.1% of males in Britain have had a homosexual experience, while in France the number was reported at 4.1%.[140]
In the United States, according to a report by The Williams Institute in April 2011, 3.5% or approximately 9 million of the adult population identify as lesbian, gay, or bisexual.[141][142] According to the 2000 United States Census, there were about 601,209 same-sex unmarried partner households.[143]
A 2013 study by the CDC, in which over 34,000 Americans were interviewed, puts the percentage of self-identifying lesbians and gay men at 1.6%, and of bisexuals at 0.7%.[144]
According to a 2008 poll, 13% of Britons have had some form of same-sex sexual contact while only 6% of Britons identify themselves as either homosexual or bisexual.[145] In contrast, a survey by the UK Office for National Statistics (ONS) in 2010 found that 95% of Britons identified as heterosexual, 1.5% of Britons identified themselves as homosexual or bisexual, and the remaining 3.5% gave more vague answers such as "don't know", "other", or did not respond to the question.[125][146]
In October 2012, Gallup started conducting annual surveys to study the demographics of LGBT people, determining that 3.4% (±1%) of adults identified as LGBT in the United States.[147] It was the nation's largest poll on the issue at the time.[148][149] In 2017, the percentage was estimated to have risen to 4.5% of adults, with the increase largely driven by Millennials. The poll attributes the rise to greater willingness of younger people to reveal their sexual identity.[150]
The American Psychological Association, the American Psychiatric Association, and the National Association of Social Workers state:
In 1952, when the American Psychiatric Association published its first Diagnostic and Statistical Manual of Mental Disorders, homosexuality was included as a disorder. Almost immediately, however, that classification began to be subjected to critical scrutiny in research funded by the National Institute of Mental Health. That study and subsequent research consistently failed to produce any empirical or scientific basis for regarding homosexuality as a disorder or abnormality, rather than a normal and healthy sexual orientation. As results from such research accumulated, professionals in medicine, mental health, and the behavioral and social sciences reached the conclusion that it was inaccurate to classify homosexuality as a mental disorder and that the DSM classification reflected untested assumptions based on once-prevalent social norms and clinical impressions from unrepresentative samples comprising patients seeking therapy and individuals whose conduct brought them into the criminal justice system.
In recognition of the scientific evidence,[151] the American Psychiatric Association removed homosexuality from the DSM in 1973, stating that "homosexuality per se implies no impairment in judgment, stability, reliability, or general social or vocational capabilities." After thoroughly reviewing the scientific data, the American Psychological Association adopted the same position in 1975, and urged all mental health professionals "to take the lead in removing the stigma of mental illness that has long been associated with homosexual orientations." The National Association of Social Workers has adopted a similar policy.
Thus, mental health professionals and researchers have long recognized that being homosexual poses no inherent obstacle to leading a happy, healthy, and productive life, and that the vast majority of gay and lesbian people function well in the full array of social institutions and interpersonal relationships.[3]
The consensus of research and clinical literature demonstrates that same-sex sexual and romantic attractions, feelings, and behaviors are normal and positive variations of human sexuality.[152] There is now a large body of research evidence that indicates that being gay, lesbian or bisexual is compatible with normal mental health and social adjustment.[11] The World Health Organization's ICD-9 (1977) listed homosexuality as a mental illness; it was removed from the ICD-10, endorsed by the Forty-third World Health Assembly on 17 May 1990.[153][154][155] Like the DSM-II, the ICD-10 added ego-dystonic sexual orientation to the list, which refers to people who want to change their gender identities or sexual orientation because of a psychological or behavioral disorder (F66.1). The Chinese Society of Psychiatry removed homosexuality from its Chinese Classification of Mental Disorders in 2001 after five years of study by the association.[156] According to the Royal College of Psychiatrists, "This unfortunate history demonstrates how marginalisation of a group of people who have a particular personality feature (in this case homosexuality) can lead to harmful medical practice and a basis for discrimination in society.[11] There is now a large body of research evidence that indicates that being gay, lesbian or bisexual is compatible with normal mental health and social adjustment. However, the experiences of discrimination in society and possible rejection by friends, families and others, such as employers, means that some LGB people experience a greater than expected prevalence of mental health difficulties and substance misuse problems. Although there have been claims by conservative political groups in the USA that this higher prevalence of mental health difficulties is confirmation that homosexuality is itself a mental disorder, there is no evidence whatever to substantiate such a claim."[157]
Most lesbian, gay, and bisexual people who seek psychotherapy do so for the same reasons as heterosexual people (stress, relationship difficulties, difficulty adjusting to social or work situations, etc.); their sexual orientation may be of primary, incidental, or no importance to their issues and treatment. Whatever the issue, there is a high risk for anti-gay bias in psychotherapy with lesbian, gay, and bisexual clients.[158] Psychological research in this area has been relevant to counteracting prejudicial ("homophobic") attitudes and actions, and to the LGBT rights movement generally.[159]
The appropriate application of affirmative psychotherapy is based on a body of established scientific facts.[152]
Although scientists favor biological models for the cause of sexual orientation,[4] they do not believe that the development of sexual orientation is the result of any one factor. They generally believe that it is determined by a complex interplay of biological and environmental factors, and is shaped at an early age.[2][5][6] There is considerably more evidence supporting nonsocial, biological causes of sexual orientation than social ones, especially for males.[8] There is no substantive evidence which suggests parenting or early childhood experiences play a role with regard to sexual orientation.[11] Scientists do not believe that sexual orientation is a choice.[4][5][7]
The American Academy of Pediatrics stated in Pediatrics in 2004:
There is no scientific evidence that abnormal parenting, sexual abuse, or other adverse life events influence sexual orientation. Current knowledge suggests that sexual orientation is usually established during early childhood.[4][160]
The American Psychological Association, American Psychiatric Association, and National Association of Social Workers stated in 2006:
Currently, there is no scientific consensus about the specific factors that cause an individual to become heterosexual, homosexual, or bisexual—including possible biological, psychological, or social effects of the parents' sexual orientation. However, the available evidence indicates that the vast majority of lesbian and gay adults were raised by heterosexual parents and the vast majority of children raised by lesbian and gay parents eventually grow up to be heterosexual.[2]
Despite numerous attempts, no "gay gene" has been identified. However, there is substantial evidence for a genetic basis of homosexuality, especially in males, based on twin studies, as well as some association with regions of chromosome 8, the Xq28 locus on the X chromosome, and other sites across many chromosomes.[161]
Starting in the 2010s, potential epigenetic factors have become a topic of increased attention in genetic research on sexual orientation. A study presented at the ASHG 2015 Annual Meeting found that the methylation pattern in nine regions of the genome appeared very closely linked to sexual orientation, with a resulting algorithm using the methylation pattern to predict the sexual orientation of a control group with almost 70% accuracy.[164][165]
Research into the causes of homosexuality plays a role in political and social debates and also raises concerns about genetic profiling and prenatal testing.[166][167]
Since homosexuality tends to lower reproductive success, and since there is considerable evidence that human sexual orientation is genetically influenced, it is unclear how it is maintained in the population at a relatively high frequency.[168] There are many possible explanations, such as genes predisposing to homosexuality also conferring advantage in heterosexuals, a kin selection effect, social prestige, and more.[169] A 2009 study also suggested a significant increase in fecundity in the females related to the homosexual people from the maternal line (but not in those related from the paternal one).[170]
There are no studies of adequate scientific rigor that conclude that sexual orientation change efforts work to change a person's sexual orientation. Those efforts have been controversial due to tensions between the values held by some faith-based organizations, on the one hand, and those held by LGBT rights organizations and professional and scientific organizations and other faith-based organizations, on the other.[14] The longstanding consensus of the behavioral and social sciences and the health and mental health professions is that homosexuality per se is a normal and positive variation of human sexual orientation, and therefore not a mental disorder.[14] The American Psychological Association says that "most people experience little or no sense of choice about their sexual orientation".[171]
Some individuals and groups have promoted the idea of homosexuality as symptomatic of developmental defects or spiritual and moral failings and have argued that sexual orientation change efforts, including psychotherapy and religious efforts, could alter homosexual feelings and behaviors. Many of these individuals and groups appeared to be embedded within the larger context of conservative religious political movements that have supported the stigmatization of homosexuality on political or religious grounds.[14]
No major mental health professional organization has sanctioned efforts to change sexual orientation and virtually all of them have adopted policy statements cautioning the profession and the public about treatments that purport to change sexual orientation. These include the American Psychiatric Association, American Psychological Association, American Counseling Association, National Association of Social Workers in the US,[172] the Royal College of Psychiatrists,[173] and the Australian Psychological Society.[174] The American Psychological Association and the Royal College of Psychiatrists expressed concerns that the positions espoused by NARTH are not supported by the science and create an environment in which prejudice and discrimination can flourish.[173][175]
The American Psychological Association states that "sexual orientation is not a choice that can be changed at will, and that sexual orientation is most likely the result of a complex interaction of environmental, cognitive and biological factors...is shaped at an early age...[and evidence suggests] biological, including genetic or inborn hormonal factors, play a significant role in a person's sexuality."[5] They say that "sexual orientation identity—not sexual orientation—appears to change via psychotherapy, support groups, and life events."[14] The American Psychiatric Association says "individuals may become aware at different points in their lives that they are heterosexual, gay, lesbian, or bisexual" and "opposes any psychiatric treatment, such as 'reparative' or 'conversion' therapy, which is based upon the assumption that homosexuality per se is a mental disorder, or based upon a prior assumption that the patient should change his/her homosexual orientation". They do, however, encourage gay affirmative psychotherapy.[176] Similarly, the American Psychological Association is doubtful about the effectiveness and side-effect profile of sexual orientation change efforts, including conversion therapy.[177]
The American Psychological Association "encourages mental health professionals to avoid misrepresenting the efficacy of sexual orientation change efforts by promoting or promising change in sexual orientation when providing assistance to individuals distressed by their own or others' sexual orientation and concludes that the benefits reported by participants in sexual orientation change efforts can be gained through approaches that do not attempt to change sexual orientation".[14]
Scientific research has been generally consistent in showing that lesbian and gay parents are as fit and capable as heterosexual parents, and their children are as psychologically healthy and well-adjusted as children reared by heterosexual parents.[178][179][180] According to scientific literature reviews, there is no evidence to the contrary.[3][181][182][183][184]
A 2001 review suggested that children with lesbian or gay parents appear less traditionally gender-typed and are more likely to be open to homoerotic relationships, partly due to genetic factors (80% of the children being raised by same-sex couples in the US are not adopted, and most are the result of previous heterosexual marriages[185]) and family socialization processes (children grow up in relatively more tolerant school, neighborhood, and social contexts, which are less heterosexist), even though the majority of children raised by same-sex couples identify as heterosexual.[186] A 2005 review by Charlotte J. Patterson for the American Psychological Association found that the available data did not suggest higher rates of homosexuality among the children of lesbian or gay parents.[187]
The terms "men who have sex with men" (MSM) and "women who have sex with women" (WSW) refer to people who engage in sexual activity with others of the same sex regardless of how they identify themselves—as many choose not to accept social identities as lesbian, gay and bisexual.[188][189][190][191][192] These terms are often used in medical literature and social research to describe such groups for study, without needing to consider the issues of sexual self-identity. The terms are seen as problematic by some, however, because they "obscure social dimensions of sexuality; undermine the self-labeling of lesbian, gay, and bisexual people; and do not sufficiently describe variations in sexual behavior".[193]
In contrast to its benefits, sexual behavior can be a disease vector. Safe sex is a relevant harm reduction philosophy.[194] Many countries currently prohibit men who have sex with men from donating blood; the policy of the United States Food and Drug Administration states that "they are, as a group, at increased risk for HIV, hepatitis B and certain other infections that can be transmitted by transfusion."[195]
Public health officials have agreed upon safer sex recommendations for women who have sex with women to avoid sexually transmitted infections (STIs).
Public health officials have likewise agreed upon safer sex recommendations for men who have sex with men to avoid sexually transmitted infections.
When it was first described in medical literature, homosexuality was often approached from a view that sought to find an inherent psychopathology as its root cause. Much literature on mental health and homosexual patients centered on their depression, substance abuse, and suicide. Although these issues exist among people who are non-heterosexual, discussion about their causes shifted after homosexuality was removed from the Diagnostic and Statistical Manual (DSM) in 1973. Instead, social ostracism, legal discrimination, internalization of negative stereotypes, and limited support structures came to be seen as factors homosexual people face in Western societies that often adversely affect their mental health.[199]
Stigma, prejudice, and discrimination stemming from negative societal attitudes toward homosexuality lead to a higher prevalence of mental health disorders among lesbians, gay men, and bisexuals compared to their heterosexual peers.[200] Evidence indicates that the liberalization of these attitudes over the 1990s through the 2010s is associated with a decrease in such mental health risks among younger LGBT people.[201]
Gay and lesbian youth bear an increased risk of suicide, substance abuse, school problems, and isolation because of a "hostile and condemning environment, verbal and physical abuse, rejection and isolation from family and peers".[202] Further, LGBT youths are more likely to report psychological and physical abuse by parents or caretakers, and more sexual abuse. Suggested reasons for this disparity are that (1) LGBT youths may be specifically targeted on the basis of their perceived sexual orientation or gender non-conforming appearance, and (2) that "risk factors associated with sexual minority status, including discrimination, invisibility, and rejection by family members...may lead to an increase in behaviors that are associated with risk for victimization, such as substance abuse, sex with multiple partners, or running away from home as a teenager."[203] A 2008 study showed a correlation between the degree of rejecting behavior by parents of LGB adolescents and negative health problems in the teenagers studied:
Higher rates of family rejection were significantly associated with poorer health outcomes. On the basis of odds ratios, lesbian, gay, and bisexual young adults who reported higher levels of family rejection during adolescence were 8.4 times more likely to report having attempted suicide, 5.9 times more likely to report high levels of depression, 3.4 times more likely to use illegal drugs, and 3.4 times more likely to report having engaged in unprotected sexual intercourse compared with peers from families that reported no or low levels of family rejection.[204]
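The "times more likely" figures in the passage above are odds ratios. As a minimal sketch of how such a ratio is computed from a two-by-two table, using invented counts purely for illustration (these are not data from the study):

# Odds ratio from a 2x2 table: (a/b) / (c/d).
def odds_ratio(a, b, c, d):
    # a: outcome present in the exposed group, b: outcome absent in the exposed group
    # c: outcome present in the unexposed group, d: outcome absent in the unexposed group
    return (a / b) / (c / d)

# Hypothetical counts: 42 of 100 high-rejection respondents report an outcome,
# versus 8 of 100 low-rejection respondents.
print(round(odds_ratio(42, 58, 8, 92), 1))  # 8.3, i.e. roughly eightfold higher odds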
Crisis centers in larger cities and information sites on the Internet have arisen to help youth and adults.[205] The Trevor Project, a suicide prevention helpline for gay youth, was established following the 1998 airing on HBO of the Academy Award-winning short film Trevor.[206]
Most nations do not prohibit consensual sex between unrelated persons above the local age of consent. Some jurisdictions further recognize identical rights, protections, and privileges for the family structures of same-sex couples, including marriage. Some countries and jurisdictions mandate that all individuals restrict themselves to heterosexual activity and disallow homosexual activity via sodomy laws. Offenders can face the death penalty in Islamic countries and jurisdictions ruled by sharia. There are, however, often significant differences between official policy and real-world enforcement.
Although homosexual acts were decriminalized in some parts of the Western world, such as Poland in 1932, Denmark in 1933, Sweden in 1944, and England and Wales in 1967, it was not until the mid-1970s that the gay community first began to achieve limited civil rights in some developed countries. A turning point was reached in 1973 when the American Psychiatric Association, which had listed homosexuality in the DSM-I in 1952, removed it from the DSM-II, in recognition of scientific evidence.[3] In 1977, Quebec became the first state-level jurisdiction in the world to prohibit discrimination on the grounds of sexual orientation. During the 1980s and 1990s, most developed countries enacted laws decriminalizing homosexual behavior and prohibiting discrimination against lesbian and gay people in employment, housing, and services. On the other hand, many countries today in the Middle East and Africa, as well as several countries in Asia, the Caribbean and the South Pacific, outlaw homosexuality. In 2013, the Supreme Court of India upheld Section 377 of the Indian Penal Code,[207] but in 2018 overturned itself and legalized homosexual activity throughout India.[208] Section 377, a British colonial-era provision criminalizing homosexual activity, remains in effect in more than 40 former British colonies.[209][210]
Ten countries or jurisdictions, all of which are Islamic and ruled by sharia, impose the death penalty for homosexuality.
In the European Union, discrimination of any type based on sexual orientation or gender identity is illegal under the Charter of Fundamental Rights of the European Union.[220]
Since the 1960s, many LGBT people in the West, particularly those in major metropolitan areas, have developed a so-called gay culture. To many,[who?] gay culture is exemplified by the gay pride movement, with annual parades and displays of rainbow flags. Yet not all LGBT people choose to participate in "queer culture", and many gay men and women specifically decline to do so. To some[who?] it seems to be a frivolous display, perpetuating gay stereotypes.
With the outbreak of AIDS in the early 1980s, many LGBT groups and individuals organized campaigns to promote efforts in AIDS education, prevention, research, patient support, and community outreach, as well as to demand government support for these programs.
The death toll wrought by the AIDS epidemic at first seemed to slow the progress of the gay rights movement, but in time it galvanized some parts of the LGBT community into community service and political action, and challenged the heterosexual community to respond compassionately. Major American motion pictures from this period that dramatized the response of individuals and communities to the AIDS crisis include An Early Frost (1985), Longtime Companion (1990), And the Band Played On (1993), Philadelphia (1993), and Common Threads: Stories from the Quilt (1989).
Publicly gay politicians have attained numerous government posts, even in countries that had sodomy laws in their recent past. Examples include Guido Westerwelle, Germany's Vice-Chancellor; Peter Mandelson, a British Labour Party cabinet minister; and Per-Kristian Foss, formerly Norwegian Minister of Finance.
LGBT movements are opposed by a variety of individuals and organizations. Some social conservatives believe that all sexual relationships with people other than an opposite-sex spouse undermine the traditional family[221] and that children should be reared in homes with both a father and a mother.[222][223] Some argue that gay rights may conflict with individuals' freedom of speech,[224][225] religious freedoms in the workplace,[226][227] the ability to run churches,[228] charitable organizations[229][230] and other religious organizations[231] in accordance with one's religious views, and that the acceptance of homosexual relationships by religious organizations might be forced through threatening to remove the tax-exempt status of churches whose views do not align with those of the government.[232][233][234][235] Some critics charge that political correctness has led to the association of sex between males and HIV being downplayed.[236]
Policies and attitudes toward gay and lesbian military personnel vary widely around the world. Some countries allow gay men, lesbians, and bisexual people to serve openly and have granted them the same rights and privileges as their heterosexual counterparts. Many countries neither ban nor support LGB service members. A few countries continue to ban homosexual personnel outright.
Most Western military forces have removed policies excluding sexual minority members. Of the 26 countries that participate militarily in NATO, more than 20 permit openly gay, lesbian and bisexual people to serve. Of the permanent members of the United Nations Security Council, three (United Kingdom, France and United States) do so. The other two generally do not: China bans gay and lesbian people outright, while Russia excludes all gay and lesbian people during peacetime but allows some gay men to serve in wartime (see below). Israel is the only country in the Middle East region that allows openly LGB people to serve in the military.
While the question of homosexuality in the military has been highly politicized in the United States, it is not necessarily so in many countries. Generally speaking, sexuality in these cultures is considered a more personal aspect of one's identity than it is in the United States.
According to the American Psychological Association, empirical evidence fails to show that sexual orientation is germane to any aspect of military effectiveness including unit cohesion, morale, recruitment and retention.[237] Sexual orientation is irrelevant to task cohesion, the only type of cohesion that critically predicts the team's military readiness and success.[238]
Societal acceptance of non-heterosexual orientations such as homosexuality is lowest in Asian, African and Eastern European countries,[239][240] and is highest in Western Europe, Australia, and the Americas. Western society has become increasingly accepting of homosexuality since the 1990s. In 2017, Professor Amy Adamczyk contended that these cross-national differences in acceptance can be largely explained by three factors: the relative strength of democratic institutions, the level of economic development, and the religious context of the places where people live.[241]
In 2006, the American Psychological Association, American Psychiatric Association and National Association of Social Workers stated in an amicus brief presented to the Supreme Court of California: "Gay men and lesbians form stable, committed relationships that are equivalent to heterosexual relationships in essential respects. The institution of marriage offers social, psychological, and health benefits that are denied to same-sex couples. By denying same-sex couples the right to marry, the state reinforces and perpetuates the stigma historically associated with homosexuality. Homosexuality remains stigmatized, and this stigma has negative consequences. California's prohibition on marriage for same-sex couples reflects and reinforces this stigma". They concluded: "There is no scientific basis for distinguishing between same-sex couples and heterosexual couples with respect to the legal rights, obligations, benefits, and burdens conferred by civil marriage."[3]
Though the relationship between homosexuality and religion is complex, current authoritative bodies and doctrines of the world's largest religions view homosexual behaviour negatively.[citation needed] This can range from quietly discouraging homosexual activity, to explicitly forbidding same-sex sexual practices among adherents and actively opposing social acceptance of homosexuality. Some teach that homosexual desire itself is sinful,[242] others state that only the sexual act is a sin,[243] others are completely accepting of gays and lesbians,[244] while some encourage homosexuality.[245] Some claim that homosexuality can be overcome through religious faith and practice. On the other hand, voices exist within many of these religions that view homosexuality more positively, and liberal religious denominations may bless same-sex marriages. Some view same-sex love and sexuality as sacred, and a mythology of same-sex love can be found around the world.[246]
Gay bullying can be the verbal or physical abuse against a person who is perceived by the aggressor to be lesbian, gay, bisexual or transgender, including persons who are actually heterosexual or of non-specific or unknown sexual orientation. In the US, teenage students heard anti-gay slurs such as "homo", "faggot" and "sissy" about 26 times a day on average, or once every 14 minutes, according to a 1998 study by Mental Health America (formerly National Mental Health Association).[247]
In many cultures, homosexual people are frequently subject to prejudice and discrimination. A 2011 Dutch study concluded that 49% of Holland's youth and 58% of youth of foreign origin in the country reject homosexuality.[248] Similar to other minority groups, they can also be subject to stereotyping. These attitudes tend to be due to forms of homophobia and heterosexism (negative attitudes, bias, and discrimination in favor of opposite-sex sexuality and relationships). Heterosexism can include the presumption that everyone is heterosexual or that opposite-sex attractions and relationships are the norm and therefore superior. Homophobia is a fear of, aversion to, or discrimination against homosexual people. It manifests in different forms, and a number of different types have been postulated, among which are internalized homophobia, social homophobia, emotional homophobia, rationalized homophobia, and others.[249] Related terms are lesbophobia (prejudice specifically targeting lesbians) and biphobia (prejudice against bisexual people). When such attitudes manifest as crimes they are often called hate crimes and gay bashing.
Negative stereotypes characterize LGB people as less romantically stable, more promiscuous and more likely to abuse children, but there is no scientific basis to such assertions. Gay men and lesbians form stable, committed relationships that are equivalent to heterosexual relationships in essential respects.[3] Sexual orientation does not affect the likelihood that people will abuse children.[250][251][252] Claims that there is scientific evidence to support an association between being gay and being a pedophile are based on misuses of those terms and misrepresentation of the actual evidence.[251]
In the United States, the FBI reported that 20.4% of hate crimes reported to law enforcement in 2011 were based on sexual orientation bias. 56.7% of these crimes were based on bias against homosexual men. 11.1% were based on bias against homosexual women. 29.6% were based on anti-homosexual bias without regard to gender.[253] The 1998 murder of Matthew Shepard, a gay student, is a notorious example of such a crime in the U.S. LGBT people, especially lesbians, may become the victims of "corrective rape", a violent crime with the supposed aim of making them heterosexual. In certain parts of the world, LGBT people are also at risk of "honor killings" perpetrated by their families or relatives.[254][255][256]
In Morocco, a constitutional monarchy following Islamic laws, homosexual acts are a punishable offence. With a population hostile towards LGBT people, the country has witnessed public demonstrations against homosexuals, public denunciations of presumed homosexual individuals, and violent intrusions into private homes. The LGBT community in the country is exposed to additional risks of prejudice, social rejection and violence, and has little possibility of obtaining protection, even from the police.[257]
In 2017, Abderrahim El Habachi became one of the many homosexual Moroccans to flee the country out of fear, seeking shelter in the United Kingdom. In August 2019, he accused the UK Home Office of not taking LGBT migrants seriously, highlighting that they are put through difficult times because the Home Office does not believe their sexual orientation or the struggles they face in their countries.[258]
Homosexual and bisexual behaviors occur in a number of other animal species. Such behaviors include sexual activity, courtship, affection, pair bonding, and parenting,[20] and are widespread; a 1999 review by researcher Bruce Bagemihl shows that homosexual behavior has been documented in about 500 species, ranging from primates to gut worms.[20][21] Animal sexual behavior takes many different forms, even within the same species. The motivations for and implications of these behaviors have yet to be fully understood, since most species have yet to be fully studied.[260] According to Bagemihl, "the animal kingdom [does] it with much greater sexual diversity—including homosexual, bisexual and nonreproductive sex—than the scientific community and society at large have previously been willing to accept".[261]
A review paper by N. W. Bailey and Marlene Zuk looking into studies of same-sex sexual behaviour in animals challenges the view that such behaviour lowers reproductive success, citing several hypotheses about how same-sex sexual behavior might be adaptive; these hypotheses vary greatly among different species. Bailey and Zuk also suggest future research needs to look into evolutionary consequences of same-sex sexual behaviour, rather than only looking into origins of such behaviour.[262]
en/2615.html.txt
ADDED
@@ -0,0 +1,265 @@
Belize (/bəˈliːz/) is a Caribbean country located on the northeastern coast of Central America. Belize is bordered on the northwest by Mexico, on the east by the Caribbean Sea, and on the south and west by Guatemala. It has an area of 22,970 square kilometres (8,867 sq mi) and a population of 408,487 (2019).[5] Its mainland is about 290 km (180 mi) long and 110 km (68 mi) wide. It has the lowest population and population density in Central America.[10] The country's population growth rate of 1.87% per year (2018 estimate) is the second highest in the region and one of the highest in the Western Hemisphere.[2]
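As a rough arithmetic check of the low-density claim, dividing the cited 2019 population by the cited area gives about 18 inhabitants per square kilometre; a short sketch using only the figures above:

# Back-of-the-envelope population density from the figures cited above.
population = 408_487  # 2019 estimate
area_km2 = 22_970     # total area in square kilometres
print(round(population / area_km2, 1))  # 17.8 inhabitants per km^2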
The Maya Civilization spread into the area of Belize between 1500 BC and AD 300 and flourished until about 1200.[11] European contact began in 1502 when Christopher Columbus sailed along the Gulf of Honduras.[12] European exploration was begun by English settlers in 1638. This period was also marked by Spain and Britain both laying claim to the land until Britain defeated the Spanish in the Battle of St. George's Caye (1798).[13] It became a British colony in 1840, known as British Honduras, and a Crown colony in 1862. Independence was achieved from the United Kingdom on 21 September 1981.
Belize has a diverse society that is composed of many cultures and languages that reflect its rich history. English is the official language of Belize, while Belizean Creole is the most widely spoken national language, being the native language of over a third of the population. Over half the population is multilingual, with Spanish being the second most commonly spoken language. It is known for its September Celebrations, its extensive barrier reef and coral reefs, and punta music.[14][15]
Belize's abundance of terrestrial and marine species and its diversity of ecosystems give it a key place in the globally significant Mesoamerican Biological Corridor.[16] It is considered a Central American and Caribbean nation with strong ties to both the American and Caribbean regions.[17] It is a member of the Caribbean Community (CARICOM), the Community of Latin American and Caribbean States (CELAC), and the Central American Integration System (SICA), the only country to hold full membership in all three regional organizations. Belize is a Commonwealth realm, with Queen Elizabeth II as its monarch and head of state.
The earliest known record of the name "Belize" appears in the journal of the Dominican priest Fray José Delgado, dating to 1677.[18] Delgado recorded the names of three major rivers that he crossed while travelling north along the Caribbean coast: Rio Soyte, Rio Xibum and Rio Balis. The names of these waterways, which correspond to the Sittee River, Sibun River and Belize River, were provided to Delgado by his translator.[18] It has been proposed that Delgado's "Balis" was actually the Mayan word belix (or beliz), meaning "muddy-watered".[18] More recently, it has been proposed that the name comes from the Mayan phrase "bel Itza", meaning "the road to Itza[disambiguation needed]".[19]
In the 1820s, the Creole elite of Belize invented the legend that the toponym Belize derived from the Spanish pronunciation of the name of a Scottish buccaneer, Peter Wallace, who established a settlement at the mouth of the Belize River in 1638.[20] There is no proof that buccaneers settled in this area and the very existence of Wallace is considered a myth.[18][19] Writers and historians have suggested several other possible etymologies, including postulated French and African origins.[18]
The Maya Civilization emerged at least three millennia ago in the lowland area of the Yucatán Peninsula and the highlands to the south, in the area of present-day southeastern Mexico, Belize, Guatemala, and western Honduras. Many aspects of this culture persist in the area, despite nearly 500 years of European domination. Prior to about 2500 BC, some hunting and foraging bands settled in small farming villages; they domesticated crops such as corn, beans, squash, and chili peppers.
A profusion of languages and subcultures developed within the Maya core culture. Between about 2500 BC and 250 AD, the basic institutions of Maya civilization emerged.[11]
The Maya civilization spread across the territory of present-day Belize around 1500 BC and flourished there until about AD 900. The recorded history of the middle and southern regions focuses on Caracol, an urban political centre that may have supported over 140,000 people.[21][22] North of the Maya Mountains, the most important political centre was Lamanai.[23] In the late Classic Era of Maya civilization (600–1000 AD), an estimated 400,000 to 1,000,000 people inhabited the area of present-day Belize.[11][24]
When Spanish explorers arrived in the 16th century, the area of present-day Belize included three distinct Maya territories.[25]
Spanish conquistadors explored the land and declared it part of the Spanish empire but failed to settle it because of its lack of resources and the hostile tribes of the Yucatán.
English pirates sporadically visited the coast of what is now Belize, seeking a sheltered region from which they could attack Spanish ships (see English settlement in Belize) and cut logwood (Haematoxylum campechianum) trees. The first permanent British settlement was founded around 1716 in what became the Belize District,[26] and during the 18th century the settlers established a system that used black slaves to cut logwood trees. Logwood yielded a valuable fixing agent for clothing dyes,[27] and was one of the first ways to achieve a fast black before the advent of artificial dyes. The Spanish granted the British settlers the right to occupy the area and cut logwood in exchange for their help suppressing piracy.[11]
The British first appointed a superintendent over the Belize area in 1786. Before then the British government had not recognized the settlement as a colony for fear of provoking a Spanish attack. The delay in government oversight allowed the settlers to establish their own laws and forms of government. During this period, a few successful settlers gained control of the local legislature, known as the Public Meeting, as well as of most of the settlement's land and timber.
Throughout the 18th century, the Spanish attacked Belize every time war broke out with Britain. The Battle of St. George's Caye was the last of such military engagements, in 1798, between a Spanish fleet and a small force of Baymen and their slaves. From 3 to 5 September, the Spaniards tried to force their way through Montego Caye shoal, but were blocked by defenders. Spain's last attempt occurred on 10 September, when the Baymen repelled the Spanish fleet in a short engagement with no known casualties on either side. The anniversary of the battle has been declared a national holiday in Belize and is celebrated to commemorate the "first Belizeans" and the defence of their territory.[28]
In the early 19th century, the British sought to reform the settlers, threatening to suspend the Public Meeting unless it observed the government's instructions to eliminate slavery outright. After a generation of wrangling, slavery was abolished in the British Empire in 1833.[29] As a result of their slaves' abilities in the work of mahogany extraction, owners in British Honduras were compensated at £53.69 per slave on average, the highest amount paid in any British territory.[26]
However, the end of slavery did little to change the former slaves' working conditions if they stayed at their trade. A series of institutions restricted the ability of individuals to buy land, in a debt-peonage system. Former "extra special" mahogany or logwood cutters undergirded the early ascription of the capacities (and consequently the limitations) of people of African descent in the colony. Because a small elite controlled the settlement's land and commerce, former slaves had little choice but to continue to work in timber cutting.[26]
In 1836, after the emancipation of Central America from Spanish rule, the British claimed the right to administer the region. In 1862, Great Britain formally declared it a British Crown Colony, subordinate to Jamaica, and named it British Honduras.[30]
As a colony, Belize began to attract British investors. Among the British firms that dominated the colony in the late 19th century was the Belize Estate and Produce Company, which eventually acquired half of all privately held land and eliminated peonage. Belize Estate's influence accounts in part for the colony's reliance on the mahogany trade throughout the rest of the 19th century and the first half of the 20th century.
The Great Depression of the 1930s caused a near-collapse of the colony's economy as British demand for timber plummeted. The effects of widespread unemployment were worsened by a devastating hurricane that struck the colony in 1931. Perceptions of the government's relief effort as inadequate were aggravated by its refusal to legalize labour unions or introduce a minimum wage. Economic conditions improved during World War II, as many Belizean men entered the armed forces or otherwise contributed to the war effort.
Following the war, the colony's economy stagnated. Britain's decision to devalue the British Honduras dollar in 1949 worsened economic conditions and led to the creation of the People's Committee, which demanded independence. The People's Committee's successor, the People's United Party (PUP), sought constitutional reforms that expanded voting rights to all adults. The first election under universal suffrage was held in 1954 and was decisively won by the PUP, beginning a three-decade period in which the PUP dominated the country's politics. Pro-independence activist George Cadle Price became PUP's leader in 1956 and the effective head of government in 1961, a post he would hold under various titles until 1984.
Under a new constitution, Britain granted British Honduras self-government in 1964. On 1 June 1973, British Honduras was officially renamed Belize.[31] Progress toward independence, however, was hampered by a Guatemalan claim to sovereignty over Belizean territory.
Belize was granted independence on 21 September 1981. Guatemala refused to recognize the new nation because of its longstanding territorial dispute with the British colony, claiming that Belize belonged to Guatemala. About 1,500 British troops remained in Belize to deter any possible incursions.[32]
With Price at the helm, the PUP won all national elections until 1984. In that election, the first national election after independence, the PUP was defeated by the United Democratic Party (UDP). UDP leader Manuel Esquivel replaced Price as prime minister, with Price himself unexpectedly losing his own House seat to a UDP challenger. The PUP under Price returned to power after elections in 1989. The following year the United Kingdom announced that it would end its military involvement in Belize, and the RAF Harrier detachment was withdrawn the same year, having remained stationed in the country continuously since its deployment had become permanent there in 1980. British soldiers were withdrawn in 1994, but the United Kingdom left behind a military training unit to assist with the newly created Belize Defence Force.
The UDP regained power in the 1993 national election, and Esquivel became prime minister for a second time. Soon afterwards, Esquivel announced the suspension of a pact reached with Guatemala during Price's tenure, claiming Price had made too many concessions to gain Guatemalan recognition. The pact may have curtailed the 130-year-old border dispute between the two countries. Border tensions continued into the early 2000s, although the two countries cooperated in other areas.
The PUP won a landslide victory in the 1998 national elections, and PUP leader Said Musa was sworn in as prime minister. In the 2003 elections the PUP maintained its majority, and Musa continued as prime minister. He pledged to improve conditions in the underdeveloped and largely inaccessible southern part of Belize.
In 2005, Belize was the site of unrest caused by discontent with the PUP government, including tax increases in the national budget. On 8 February 2008, Dean Barrow was sworn in as prime minister after his UDP won a landslide victory in general elections. Barrow and the UDP were re-elected in 2012 with a considerably smaller majority. Barrow led the UDP to a third consecutive general election victory in November 2015, increasing the party's number of seats from 17 to 19. However, he stated the election would be his last as party leader, and preparations are under way for the party to elect his successor.
Belize is a parliamentary constitutional monarchy. The structure of government is based on the British parliamentary system, and the legal system is modelled on the common law of England. The head of state is Queen Elizabeth II, who holds the title Queen of Belize. The Queen lives in the United Kingdom, and is represented in Belize by the Governor-General. Executive authority is exercised by the cabinet, which advises the Governor-General and is led by the Prime Minister of Belize, who is head of government. Cabinet ministers are members of the majority political party in parliament and usually hold elected seats within it concurrent with their cabinet positions.
The bicameral National Assembly of Belize comprises a House of Representatives and a Senate. The 31 members of the House are popularly elected to a maximum five-year term and introduce legislation affecting the development of Belize. The Governor-General appoints the 12 members of the Senate, with a Senate president selected by the members. The Senate is responsible for debating and approving bills passed by the House.
Legislative power is vested in both the government and the Parliament of Belize. Constitutional safeguards include freedom of speech, press, worship, movement, and association. The judiciary is independent of the executive and the legislature.[33]
Members of the independent judiciary are appointed. The judicial system includes local magistrates grouped under the Magistrates' Court, which hears less serious cases. The Supreme Court (Chief Justice) hears murder and similarly serious cases, and the Court of Appeal hears appeals from convicted individuals seeking to have their sentences overturned. Defendants may, under certain circumstances, appeal their cases to the Caribbean Court of Justice.
Since 1974, the party system in Belize has been dominated by the centre-left People's United Party and the centre-right United Democratic Party, although other smaller parties have taken part in elections at all levels. Though none of these small parties has ever won a significant number of seats or offices, their challenge has been growing over the years.
Belize is a full participating member of the United Nations; the Commonwealth of Nations; the Organization of American States (OAS); the Central American Integration System (SICA); the Caribbean Community (CARICOM); the CARICOM Single Market and Economy (CSME); the Association of Caribbean States (ACS);[34] and the Caribbean Court of Justice (CCJ), which currently serves as a final court of appeal only for Barbados, Belize, and Guyana. In 2001 the Caribbean Community heads of government voted on a measure declaring that the region should work towards replacing the UK's Judicial Committee of the Privy Council with the Caribbean Court of Justice. Belize is still in the process of acceding to CARICOM treaties, including the trade and single market treaties.
Belize is an original member (1995) of the World Trade Organization (WTO) and participates actively in its work. It also belongs to the Caribbean Forum (CARIFORUM) subgroup of the Group of African, Caribbean, and Pacific states (ACP); CARIFORUM is presently the only part of the wider ACP bloc that has concluded a full regional trade pact with the European Union.
The British Army Garrison in Belize is used primarily for jungle warfare training, with access to over 13,000 square kilometres (5,000 sq mi) of jungle terrain.[34]
The Belize Defence Force (BDF) serves as the country's military and is responsible for protecting the sovereignty of Belize. The BDF, with the Belize National Coast Guard and the Immigration Department, is a department of the Ministry of Defence and Immigration. In 1997 the regular army numbered over 900, the reserve army 381, the air wing 45 and the maritime wing 36, amounting to an overall strength of approximately 1400.[35] In 2005, the maritime wing became part of the Belizean Coast Guard.[36] In 2012, the Belizean government spent about $17 million on the military, constituting 1.08% of the country's gross domestic product (GDP).[37]
After Belize achieved independence in 1981 the United Kingdom maintained a deterrent force (British Forces Belize) in the country to protect it from invasion by Guatemala (see Guatemalan claim to Belizean territory). During the 1980s this included a battalion and No. 1417 Flight RAF of Harriers. The main British force left in 1994, three years after Guatemala recognized Belizean independence, but the United Kingdom maintained a training presence via the British Army Training and Support Unit Belize (BATSUB) and 25 Flight AAC until 2011 when the last British Forces left Ladyville Barracks, with the exception of seconded advisers.[35]
Belize is divided into six districts.
These districts are further divided into 31 constituencies. Local government in Belize comprises four types of local authorities: city councils, town councils, village councils and community councils. The two city councils (Belize City and Belmopan) and seven town councils cover the urban population of the country, while village and community councils cover the rural population.[39]
Throughout Belize's history, Guatemala has claimed sovereignty over all or part of Belizean territory. This claim is occasionally reflected in maps drawn by Guatemala's government, showing Belize as Guatemala's twenty-third department.[40][b]
The Guatemalan territorial claim involves approximately 53% of Belize's mainland, which includes significant portions of four districts: Belize, Cayo, Stann Creek, and Toledo.[42] Roughly 43% of the country's population (≈154,949 Belizeans) reside in this region.[43]
As of 2020, the border dispute with Guatemala remains unresolved and quite contentious.[40][44][45] Guatemala's claim to Belizean territory rests, in part, on Clause VII of the Anglo-Guatemalan Treaty of 1859, which obligated the British to build a road between Belize City and Guatemala. At various times, the issue has required mediation by the United Kingdom, Caribbean Community heads of government, the Organization of American States (OAS), Mexico, and the United States. On 15 April 2018, Guatemala's government held a referendum to determine whether the country should take its territorial claim on Belize to the International Court of Justice (ICJ) to settle the long-standing issue; 95% of Guatemalan voters voted yes.[46][47] A similar referendum was to be held in Belize on 10 April 2019, but a court ruling led to its postponement.[48] The referendum was held on 8 May 2019, and 55.4% of voters opted to send the matter to the ICJ.
Belize backed the United Nations (UN) Declaration on the Rights of Indigenous Peoples in 2007, which established legal land rights for indigenous groups.[49] Court cases have affirmed these rights, such as the Supreme Court of Belize's 2013 decision upholding its 2010 ruling that acknowledged customary land titles as communal land for indigenous peoples.[50] Another such case is the Caribbean Court of Justice's (CCJ) 2015 order against the Belizean government, which stipulated that the country develop a land registry to classify and exercise traditional governance over Mayan lands.[51] Despite these rulings, Belize has made little progress in supporting the land rights of indigenous communities; for instance, in the two years after the CCJ's decision, Belize's government failed to launch the Mayan land registry, prompting Mayan communities to take matters into their own hands.[52][53]
The exact ramifications of these cases remain to be examined. As of 2017, Belize still struggles to recognize indigenous populations and their respective rights. The 50-page voluntary national report Belize created on its progress toward the UN's 2030 Sustainable Development Goals does not factor indigenous groups into the country's indicators at all.[54] In fact, the groups 'Creole' and 'Garinagu' are not included in the document, and 'Maya' and 'Mestizo' each occur only once in the entire report.[55] It remained to be seen whether the Belizean government would highlight the consequences of the territorial claim on indigenous land rights prior to the referendum vote in 2019.[56]
Belize is on the Caribbean coast of northern Central America. It shares a border on the north with the Mexican state of Quintana Roo, on the west with the Guatemalan department of Petén, and on the south with the Guatemalan department of Izabal. To the east in the Caribbean Sea, the second-longest barrier reef in the world flanks much of the 386 kilometres (240 mi) of predominantly marshy coastline.[58] The area of the country totals 22,960 square kilometres (8,865 sq mi), an area slightly larger than El Salvador, Israel, New Jersey or Wales. The many lagoons along the coasts and in the northern interior reduce the actual land area to 21,400 square kilometres (8,263 sq mi). It is the only Central American country with no Pacific coastline.
Belize is shaped like a rectangle that extends about 280 kilometres (174 mi) north-south and about 100 kilometres (62 mi) east-west, with a total land boundary length of 516 kilometres (321 mi). The undulating courses of two rivers, the Hondo and the Sarstoon, define much of the country's northern and southern boundaries. The western border follows no natural features and runs north–south through lowland forest and highland plateau.
The north of Belize consists mostly of flat, swampy coastal plains, in places heavily forested. The flora is highly diverse considering the small geographical area. The south contains the low mountain range of the Maya Mountains. The highest point in Belize is Doyle's Delight at 1,124 m (3,688 ft).[59]
Belize's rugged geography has also made the country's coastline and jungle attractive to drug smugglers, who use the country as a gateway into Mexico.[60] In 2011, the United States added Belize to the list of nations considered major drug producers or transit countries for narcotics.[61]
Belize has a rich variety of wildlife because of its unique position between North and South America and a wide range of climates and habitats for plant and animal life.[62] Belize's low human population and approximately 22,970 square kilometres (8,867 sq mi) of undisturbed land make for an ideal home for the more than 5,000 species of plants and hundreds of species of animals, including armadillos, snakes, and monkeys.[63][64]
The Cockscomb Basin Wildlife Sanctuary is a nature reserve in south-central Belize established to protect the forests, fauna, and watersheds of an approximately 400 km2 (150 sq mi) area of the eastern slopes of the Maya Mountains. The reserve was founded in 1990 as the first wilderness sanctuary for the jaguar and is regarded by one author as the premier site for jaguar preservation in the world.[57]
While over 60% of Belize's land surface is covered by forest,[65] some 20% of the country's land is covered by cultivated land (agriculture) and human settlements.[66] Savanna, scrubland and wetland constitute the remainder of Belize's land cover. Important mangrove ecosystems are also represented across Belize's landscape.[67][68] As a part of the globally significant Mesoamerican Biological Corridor that stretches from southern Mexico to Panama, Belize's biodiversity – both marine and terrestrial – is rich, with abundant flora and fauna.
Belize is also a leader in protecting biodiversity and natural resources. According to the World Database on Protected Areas, 37% of Belize's land territory falls under some form of official protection, giving Belize one of the most extensive systems of terrestrial protected areas in the Americas.[69] By contrast, Costa Rica only has 27% of its land territory protected.[70]
Around 13.6% of Belize's territorial waters, which contain the Belize Barrier Reef, are also protected.[71] The Belize Barrier Reef is a UNESCO-recognized World Heritage Site and is the second-largest barrier reef in the world, behind Australia's Great Barrier Reef.
A remote sensing study conducted by the Water Center for the Humid Tropics of Latin America and the Caribbean (CATHALAC) and NASA, in collaboration with the Forest Department and the Land Information Centre (LIC) of the government of Belize's Ministry of Natural Resources and the Environment (MNRE), and published in August 2010 revealed that Belize's forest cover in early 2010 was approximately 62.7%, down from 75.9% in late 1980.[65] A similar study by Belize Tropical Forest Studies and Conservation International revealed similar trends in terms of Belize's forest cover.[72] Both studies indicate that each year, 0.6% of Belize's forest cover is lost, translating to the clearing of an average of 10,050 hectares (24,835 acres) each year. The USAID-supported SERVIR study by CATHALAC, NASA, and the MNRE also showed that Belize's protected areas have been extremely effective in protecting the country's forests. While only some 6.4% of forests inside legally declared protected areas were cleared between 1980 and 2010, over a quarter of forests outside protected areas were lost over the same period.
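As a rough plausibility check, the two headline figures above (a 0.6% annual loss rate and roughly 10,050 hectares cleared per year) can be approximately reproduced from the reported cover fractions. The sketch below is illustrative only and is not code from the SERVIR study; the 29-year span and the use of total land area are simplifying assumptions.

# Illustrative check of the cited deforestation figures (not from the study itself).
cover_1980 = 0.759            # forest fraction of land area, late 1980
cover_2010 = 0.627            # forest fraction of land area, early 2010
years = 29                    # late 1980 to early 2010, approximate
land_area_ha = 22_960 * 100   # Belize's ~22,960 km2 expressed in hectares

# Compound annual rate of loss, relative to the remaining forest.
annual_rate = 1 - (cover_2010 / cover_1980) ** (1 / years)
print(f"annual loss rate: {annual_rate:.2%}")  # ~0.66%, close to the cited 0.6%

# Average forest area over the period, times the annual rate.
avg_forest_ha = (cover_1980 + cover_2010) / 2 * land_area_ha
print(f"cleared per year: {annual_rate * avg_forest_ha:,.0f} ha")  # ~10,400 ha, near the cited 10,050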
As a country with a relatively high forest cover and a low deforestation rate, Belize has significant potential for participation in initiatives such as REDD. Significantly, the SERVIR study on Belize's deforestation[65] was also recognized by the Group on Earth Observations (GEO), of which Belize is a member nation.[73]
Belize is known to have a number of economically important minerals, but none in quantities large enough to warrant mining. These minerals include dolomite, barite (source of barium), bauxite (source of aluminium), cassiterite (source of tin), and gold. In 1990 limestone, used in road-building, was the only mineral resource being exploited for either domestic or export use.
In 2006, the discovery of crude oil in the town of Spanish Lookout presented new prospects and problems for this developing nation.[74]
Access to biocapacity in Belize is much higher than the world average. In 2016, Belize had 3.8 global hectares[75] of biocapacity per person within its territory, much more than the world average of 1.6 global hectares per person.[76] In the same year, Belize used 5.4 global hectares of biocapacity per person: its ecological footprint of consumption. Belize therefore uses more biocapacity than it contains and is running a biocapacity deficit.[75]
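The deficit described above is simple arithmetic on the two per-person figures quoted in this paragraph; a minimal sketch:

# Biocapacity deficit from the 2016 figures quoted above.
biocapacity_gha = 3.8   # global hectares available per person within Belize
footprint_gha = 5.4     # global hectares consumed per person

deficit = footprint_gha - biocapacity_gha
print(f"deficit: {deficit:.1f} gha per person")  # 1.6 gha/person
print(f"consumption is {footprint_gha / biocapacity_gha:.0%} of biocapacity")  # ~142%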
The Belize Barrier Reef is a series of coral reefs straddling the coast of Belize, roughly 300 metres (980 ft) offshore in the north and 40 kilometres (25 mi) in the south within the country limits. The Belize Barrier Reef is a 300 kilometres (190 mi) long section of the 900 kilometres (560 mi) long Mesoamerican Barrier Reef System, which is continuous from Cancún on the northeast tip of the Yucatán Peninsula through the Riviera Maya up to Honduras, making it one of the largest coral reef systems in the world.
It is the top tourist destination in Belize, popular for scuba diving and snorkelling, attracting almost half of Belize's 260,000 visitors; it is also vital to the country's fishing industry.[77] In 1842 Charles Darwin described it as "the most remarkable reef in the West Indies".
The Belize Barrier Reef was declared a World Heritage Site by UNESCO in 1996 due to its vulnerability and the fact that it contains important natural habitats for in-situ conservation of biodiversity.[78]
The Belize Barrier Reef is home to a large diversity of plants and animals, and is one of the most diverse ecosystems of the world.
With 90% of the reef still to be researched, some estimate that only 10% of all species have been discovered.[79]
Belize became the first country in the world to completely ban bottom trawling in December 2010.[80][81] In December 2015, Belize banned offshore oil drilling within 1 km (0.6 mi) of the Barrier Reef and all of its seven World Heritage Sites.[82]
Despite these protective measures, the reef remains under threat from oceanic pollution as well as uncontrolled tourism, shipping, and fishing. Other threats include hurricanes, along with global warming and the resulting increase in ocean temperatures,[83] which causes coral bleaching. Scientists claim that over 40% of Belize's coral reef has been damaged since 1998.[77]
Belize has a tropical climate with pronounced wet and dry seasons, although there are significant variations in weather patterns by region. Temperatures vary according to elevation, proximity to the coast, and the moderating effects of the northeast trade winds off the Caribbean. Average temperatures in the coastal regions range from 24 °C (75.2 °F) in January to 27 °C (80.6 °F) in July. Temperatures are slightly higher inland, except for the southern highland plateaus, such as the Mountain Pine Ridge, where it is noticeably cooler year round. Overall, the seasons are marked more by differences in humidity and rainfall than in temperature.
Average rainfall varies considerably, from 1,350 millimetres (53 in) in the north and west to over 4,500 millimetres (180 in) in the extreme south. Seasonal differences in rainfall are greatest in the northern and central regions of the country where, between January and April or May, less than 100 millimetres (3.9 in) of rain falls per month. The dry season is shorter in the south, normally only lasting from February to April. A shorter, less rainy period, known locally as the "little dry", usually occurs in late July or August, after the onset of the rainy season.
Hurricanes have played key—and devastating—roles in Belizean history. In 1931, an unnamed hurricane destroyed over two-thirds of the buildings in Belize City and killed more than 1,000 people. In 1955, Hurricane Janet levelled the northern town of Corozal. Only six years later, Hurricane Hattie struck the central coastal area of the country, with winds in excess of 300 km/h (185 mph) and 4 m (13 ft) storm tides. The devastation of Belize City for the second time in thirty years prompted the relocation of the capital some 80 kilometres (50 mi) inland to the planned city of Belmopan.
In 1978, Hurricane Greta caused more than US$25 million in damages along the southern coast. On 9 October 2001, Hurricane Iris made landfall at Monkey River Town as a 235 km/h (145 mph) Category Four storm. The storm demolished most of the homes in the village, and destroyed the banana crop. In 2007, Hurricane Dean made landfall as a Category 5 storm only 40 km (25 mi) north of the Belize–Mexico border. Dean caused extensive damage in northern Belize.
In 2010, Belize was directly affected by the Category 2 Hurricane Richard, which made landfall approximately 32 kilometres (20 mi) south-southeast of Belize City at around 00:45 UTC on 25 October 2010.[84] The storm moved inland towards Belmopan, causing estimated damage of BZ$33.8 million ($17.4 million 2010 USD), primarily from damage to crops and housing.[85]
The most recent hurricane to affect the nation was Hurricane Earl of 2016.
Belize has a small, mostly private enterprise economy based primarily on agriculture, agro-based industry, and merchandising, with tourism and construction recently assuming greater importance.[74] The country is also a producer of industrial minerals,[86] crude oil, and petroleum. As of 2017, oil production was 320 m3/d (2,000 bbl/d).[87] In agriculture, sugar, as in colonial times, remains the chief crop, accounting for nearly half of exports, while the banana industry is the largest employer.[74]
The government of Belize faces important challenges to economic stability. Rapid action to improve tax collection has been promised, but a lack of progress in reining in spending could bring the exchange rate under pressure. The tourist and construction sectors strengthened in early 1999, leading to a preliminary estimate of revived growth at four percent. Infrastructure remains a major economic development challenge;[88] Belize has the region's most expensive electricity. Trade is important and the major trading partners are the United States, Mexico, the United Kingdom, the European Union, and CARICOM.[88]
Belize has four commercial bank groups, of which the largest and oldest is Belize Bank. The other three banks are Heritage Bank, Atlantic Bank, and Scotiabank (Belize). A robust complex of credit unions began in the 1940s under the leadership of Marion M. Ganey, S.J.[89]
Belize's location on the coast of Central America makes it a popular vacation destination, but the same location has also made it attractive to drug trafficking organizations moving narcotics toward North America. The Belize currency is pegged to the U.S. dollar, and banks in Belize offer non-residents the ability to establish accounts; both features entice drug traffickers and money launderers who want to use the banking system. As a result, the United States Department of State has recently named Belize one of the world's "major money laundering countries".[90]
The largest integrated electric utility and the principal distributor in Belize is Belize Electricity Limited. BEL was approximately 70% owned by Fortis Inc., a Canadian investor-owned distribution utility. Fortis took over the management of BEL in 1999, at the invitation of the government of Belize in an attempt to mitigate prior financial problems with the locally managed utility. In addition to its regulated investment in BEL, Fortis owns Belize Electric Company Limited (BECOL), a non-regulated hydroelectric generation business that operates three hydroelectric generating facilities on the Macal River.
On 14 June 2011, the government of Belize nationalized the ownership interest of Fortis Inc. in Belize Electricity Ltd. The utility encountered serious financial problems after the country's Public Utilities Commission (PUC) in 2008 "disallowed the recovery of previously incurred fuel and purchased power costs in customer rates and set customer rates at a level that does not allow BEL to earn a fair and reasonable return", Fortis said in a June 2011 statement.[91] BEL appealed this judgement to the Court of Appeal; however, a hearing was not expected until 2012. In May 2011, the Supreme Court of Belize granted BEL's application to prevent the PUC from taking any enforcement actions pending the appeal. The Belize Chamber of Commerce and Industry issued a statement saying the government had acted in haste and expressed concern over the message it sent to investors.
In August 2009, the government of Belize nationalized Belize Telemedia Limited (BTL), which now competes directly with Speednet. As a result of the nationalization process, the interconnection agreements are again subject to negotiations. Both BTL and Speednet boast a full range of products and services including basic telephone services, national and international calls, prepaid services, cellular services via GSM 1900 megahertz (MHz) and 4G LTE respectively, international cellular roaming, fixed wireless, fibre-to-the-home internet service, and national and international data networks.[92]
A combination of natural factors—climate, the Belize Barrier Reef, over 450 offshore cays (islands), excellent fishing, safe waters for boating, scuba diving, snorkelling and freediving, numerous rivers for rafting and kayaking, and various jungle and wildlife reserves of fauna and flora for hiking, bird watching, and helicopter touring—together with many Maya sites, supports the thriving tourism and ecotourism industry. Belize also has the largest cave system in Central America.
Development costs are high, but the government of Belize has made tourism its second development priority after agriculture. In 2012, tourist arrivals totalled 917,869 (with about 584,683 from the United States) and tourist receipts amounted to over $1.3 billion.[93]
Belize's population was estimated at 360,346 in 2017[94] and at 408,487 in 2019.[5] Belize's total fertility rate in 2009 was 3.6 children per woman. Its birth rate was 22.9 births/1,000 population (2018 estimate), and the death rate was 4.2 deaths/1,000 population (2018 estimate).[2] A substantial ethnic-demographic shift has been occurring since 1980, when the Creole/Mestizo ratio stood at 58/48; it is now about 26/53, driven by Creole emigration to the United States and by the Mestizo birth rate and immigration from El Salvador (Woods, Composition and Distribution of Ethnic Groups in Belize, 1997).
The Maya are thought to have been in Belize and the Yucatán region since the second millennium BC; however, much of Belize's original Maya population was wiped out by conflicts between constantly warring tribes, and many died of disease after contact and invasion by Europeans. Three Maya groups now inhabit the country: the Yucatec (who came from Yucatán, Mexico, to escape the savage Caste War of the 1840s); the Mopan (indigenous to Belize but forced into Guatemala by the British for raiding settlements, who returned to Belize in the 19th century to evade enslavement by the Guatemalans); and the Q'eqchi' (who also fled slavery in Guatemala in the 19th century).[95] The latter groups are chiefly found in the Toledo District. The Maya speak their native languages and Spanish, and are also fluent in English and Belize Kriol.
Creoles, also known as Kriols, make up roughly 21% of the Belizean population and about 75% of the diaspora. They are descendants of the Baymen slave owners and of the slaves brought to Belize to work in the logging industry.[96] These slaves were ultimately of West and Central African descent (many also of Miskito ancestry from Nicaragua), including African-born people who had spent very brief periods in Jamaica and Bermuda.[97] Bay Islanders and ethnic Jamaicans came in the late 19th century, further adding to these already varied peoples and creating this ethnic group.
For all intents and purposes, Creole is an ethnic and linguistic denomination. Some natives, even with blonde hair and blue eyes, may call themselves Creoles.[97]
Belize Creole English or Kriol developed during the time of slavery, and historically was only spoken by former slaves. However, this ethnicity has become an integral part of the Belizean identity, and as a result it is now spoken by about 45% of Belizeans.[6][97] Belizean Creole is derived mainly from English. Its substrate languages are the Native American language Miskito, and the various West African and Bantu languages brought into the country by slaves. Creoles are found all over Belize, but predominantly in urban areas such as Belize City, coastal towns and villages, and in the Belize River Valley.[98]
The Garinagu (singular Garifuna), at around 4.5% of the population, are a mix of West/Central African, Arawak, and Island Carib ancestry. Though they were captives removed from their homelands, these people were never documented as slaves. The two prevailing theories are that, in 1635, they were either the survivors of two recorded shipwrecks or somehow took over the ship they came on.[99]
Throughout history they have been incorrectly labelled as Black Caribs. When the British took over Saint Vincent and the Grenadines after the Treaty of Paris in 1763, they were opposed by French settlers and their Garinagu allies. The Garinagu eventually surrendered to the British in 1796. The British separated the more African-looking Garifunas from the more indigenous-looking ones. 5,000 Garinagu were exiled from the Grenadine island of Baliceaux. However, only about 2,500 of them survived the voyage to Roatán, an island off the coast of Honduras. The Garifuna language belongs to the Arawakan language family, but has a large number of loanwords from Carib languages and from English.
Because Roatán was too small and infertile to support their population, the Garinagu petitioned the Spanish authorities of Honduras to be allowed to settle on the mainland coast. The Spanish employed them as soldiers, and they spread along the Caribbean coast of Central America. The Garinagu settled in Seine Bight, Punta Gorda and Punta Negra, Belize, by way of Honduras as early as 1802. However, in Belize, 19 November 1832 is the date officially recognized as "Garifuna Settlement Day" in Dangriga.[100]
According to one genetic study, their ancestry is on average 76% Sub-Saharan African, 20% Arawak/Island Carib and 4% European.[99]
The Mestizos are people of mixed Spanish and Maya descent. They originally came to Belize in 1847 to escape the Caste War, which occurred when thousands of Maya rose against the state in Yucatán and massacred over one-third of the population; survivors fled across the borders into British territory. The Mestizos are found everywhere in Belize but most make their homes in the northern districts of Corozal and Orange Walk. Other Mestizos came from El Salvador, Guatemala, Honduras, and Nicaragua. The Mestizos are the largest ethnic group in Belize and make up approximately half of the population. Mestizo towns centre on a main square, and social life focuses on the Catholic Church built on one side of it. Spanish is the main language of most Mestizos and Spanish descendants, but many speak English and Belize Kriol fluently.[101] Due to the influences of Kriol and English, many Mestizos speak what is known as "Kitchen Spanish".[102] The mixture of Latin and Maya foods such as tamales, escabeche, chirmole, relleno, and empanadas came from their Mexican side, while corn tortillas were handed down from their Mayan side. Music comes mainly from the marimba, but they also play and sing with the guitar. Dances performed at village fiestas include the Hog-Head, Zapateados, the Mestizada, Paso Doble and many more.
The majority of the Mennonite population comprises so-called Russian Mennonites of German descent who settled in the Russian Empire during the 18th and 19th centuries. Most Russian Mennonites live in Mennonite settlements such as Spanish Lookout, Shipyard, Little Belize, and Blue Creek. These Mennonites speak Plautdietsch (a Low German dialect) in everyday life, but use mostly Standard German for reading (the Bible) and writing. The Plautdietsch-speaking Mennonites came mostly from Mexico in the years after 1958, and they are trilingual, also speaking Spanish. There are also some mainly Pennsylvania German-speaking Old Order Mennonites who came from the United States and Canada in the late 1960s. They live primarily in Upper Barton Creek and associated settlements. These Mennonites attracted people from different Anabaptist backgrounds who formed a new community. They look quite similar to Old Order Amish, but are different from them.[citation needed]
The remaining 5% or so of the population consists of a mix of Indians, Chinese, whites from the United Kingdom, the United States and Canada, and many other foreign groups brought in to assist the country's development. During the 1860s there was a large influx of East Indians, who had spent brief periods in Jamaica, and of American Civil War veterans from Louisiana and other Southern states, who established Confederate settlements in British Honduras and introduced commercial sugar cane production to the colony, founding 11 settlements in the interior. The 20th century saw the arrival of more Asian settlers from mainland China, South Korea, India, Syria, and Lebanon. Said Musa, the son of an immigrant from Palestine, was the Prime Minister of Belize from 1998 to 2008. Central American immigrants from El Salvador, Guatemala, Honduras, and Nicaragua, as well as expatriate Americans and Africans, also began to settle in the country.[100]
Creoles and other ethnic groups are emigrating mostly to the United States, but also to the United Kingdom and other developed nations for better opportunities. Based on the latest US Census, the number of Belizeans in the United States is approximately 160,000 (including 70,000 legal residents and naturalized citizens), consisting mainly of Creoles and Garinagu.[104]
Because of conflicts in neighbouring Central American nations, Mestizo refugees from El Salvador, Guatemala, and Honduras fled to Belize in significant numbers during the 1980s, adding substantially to this group. Together with Creole emigration, this has been changing the demographics of the nation for the last 30 years.[105]
English is the official language of Belize, a legacy of the country's history as a British colony; Belize is the only country in Central America with English as the official language. English is also the primary language of public education, government and most media outlets. About half of Belizeans, regardless of ethnicity, speak a mostly English-based creole called Belize Creole (or Kriol in Belize Creole). Although English is widely used, Kriol is spoken in all situations, whether informal, formal, social or interethnic, even in meetings of the House of Representatives.
When a Creole language exists alongside its lexifier language, as is the case in Belize, a continuum forms between the Creole and the lexifier language. It is therefore difficult to substantiate or differentiate the number of Belize Creole speakers compared to English speakers. Kriol might best be described as the lingua franca of the nation.[106]
Approximately 50% of Belizeans self-identify as Mestizo, Latino, or Hispanic and 30% speak Spanish as a native language.[107] When Belize was a British colony, Spanish was banned in schools but today it is widely taught as a second language. "Kitchen Spanish" is an intermediate form of Spanish mixed with Belize Creole, spoken in the northern towns such as Corozal and San Pedro.[102]
Over half the population is multilingual.[108] As a small, multiethnic state surrounded by Spanish-speaking nations, Belize strongly encourages multilingualism.[109]
Belize is also home to three Maya languages: Q'eqchi', Mopan (an endangered language), and Yucatec Maya.[110][111][112]
Approximately 16,100 people speak the Arawakan-based Garifuna language,[113] and 6,900 Mennonites in Belize speak mainly Plautdietsch while a minority of Mennonites speak Pennsylvania German.[114]
According to the 2010 census,[6] 40.1% of Belizeans are Roman Catholics, 31.8% are Protestants (8.4% Pentecostal; 5.4% Adventist; 4.7% Anglican; 3.7% Mennonite; 3.6% Baptist; 2.9% Methodist; 2.8% Nazarene), 1.7% are Jehovah's Witnesses, 10.3% adhere to other religions (Maya religion, Garifuna religion, Obeah and Myalism, and minorities of The Church of Jesus Christ of Latter-day Saints, Hindus, Buddhists, Muslims, Bahá'ís, Rastafarians and other) and 15.5% profess to be irreligious.
According to PROLADES, Belize was 64.6% Roman Catholic, 27.8% Protestant, 7.6% Other in 1971.[115] Until the late 1990s, Belize was a Roman Catholic majority country. Catholics formed 57% of the population in 1991, and dropped to 49% in 2000. The percentage of Roman Catholics in the population has been decreasing in the past few decades due to the growth of Protestant churches, other religions and non-religious people.[116]
In addition to Catholics, there has always been a large accompanying Protestant minority. It was brought by British, German, and other settlers to the British colony of British Honduras. From the beginning, it was largely Anglican and Mennonite in nature. The Protestant community in Belize experienced a large Pentecostal and Seventh-Day Adventist influx tied to the recent spread of various Evangelical Protestant denominations throughout Latin America. Geographically speaking, German Mennonites live mostly in the rural districts of Cayo and Orange Walk.
The Greek Orthodox Church has a presence in Santa Elena.[117]
The Association of Religion Data Archives estimates there were 7,776 Bahá'ís in Belize in 2005, or 2.5% of the national population. Their estimates suggest this is the highest proportion of Bahá'ís in any country.[118] Their data also state that the Bahá'í Faith is the second most common religion in Belize, followed by Judaism.[119] Hinduism is followed by most Indian immigrants; however, Sikhs were the first Indian immigrants to Belize (not counting indentured workers). The former Chief Justice of Belize George Singh was the son of a Sikh immigrant,[120][121] and there was also a Sikh cabinet minister. Muslims claim that there have been Muslims in Belize since the 16th century, brought over from Africa as slaves, but there are no sources for that claim.[122] Today's Muslim population dates from the 1980s.[123] Muslims numbered 243 in 2000 and 577 in 2010 according to the official statistics,[124] and comprise 0.16 percent of the population. A mosque is at the Islamic Mission of Belize (IMB), also known as the Muslim Community of Belize. Another mosque, Masjid Al-Falah, officially opened in 2008 in Belize City.[125]
Belize has a high prevalence of communicable diseases such as respiratory diseases and intestinal illnesses.[126]
A number of kindergartens and secondary and tertiary schools in Belize provide quality education for students, mostly funded by the government. Belize has about a dozen tertiary level institutions, the most prominent of which is the University of Belize, which evolved out of the University College of Belize, founded in 1986. Before that, St. John's College, founded in 1877, dominated the tertiary education field. The Open Campus of the University of the West Indies has a site in Belize.[127] It also has campuses in Barbados, Trinidad, and Jamaica. The government of Belize contributes financially to the UWI.
Education in Belize is compulsory between the ages of 6 and 14 years. As of 2010, the literacy rate in Belize was estimated at 79.7%,[6] one of the lowest in the Western Hemisphere.
The educational policy currently follows the "Education Sector Strategy 2011–2016", which sets three objectives for the years to come: improving access, quality, and governance of the education system by providing technical and vocational education and training.[128]
Belize has relatively high rates of violent crime.[129] The majority of violence in Belize stems from gang activity, which includes trafficking of drugs and persons, protecting drug smuggling routes, and securing territory for drug dealing.[130]
In 2018, 143 murders were recorded in Belize, giving the country a homicide rate of 36 murders per 100,000 inhabitants, one of the highest in the world, but lower than in the neighbouring countries of Honduras and El Salvador.[131][132] Belize District (containing Belize City) had by far the most murders of any district; in 2018, 66% of the murders occurred there.[132] The violence in Belize City (especially the southern part of the city) is largely due to gang warfare.[129]
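The per-100,000 rate quoted above follows from the standard formula rate = homicides / population × 100,000. A minimal sketch; the population value is an assumption for illustration, chosen to be roughly consistent with the cited rate rather than taken from the source:

# Standard homicide-rate calculation (population figure is an assumed illustration).
murders_2018 = 143
population_2018 = 398_000   # assumed mid-2018 population, not from the source

rate_per_100k = murders_2018 / population_2018 * 100_000
print(f"homicides per 100,000: {rate_per_100k:.0f}")  # ~36, matching the cited figure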
In 2015, there were 40 reported cases of rape, 214 robberies, 742 burglaries, and 1027 cases of theft.[133]
The Belize Police Department has implemented many protective measures in hopes of decreasing the high number of crimes. These measures include adding more patrols to "hot spots" in the city, obtaining more resources to deal with the predicament, creating the "Do the Right Thing for Youths at Risk" program, creating the Crime Information Hotline, creating the Yabra Citizen Development Committee, an organization that helps youth, and many other initiatives. The Belize Police Department began an Anti-Crime Christmas campaign targeting criminals; as a result, the crime rates dropped in that month.[130] In 2011, the government established a truce among many major gangs, lowering the murder rate.[129]
Belize's social structure is marked by enduring differences in the distribution of wealth, power, and prestige. Because of the small size of Belize's population and the intimate scale of social relations, the social distance between the rich and the poor, while significant, is nowhere near as vast as in other Caribbean and Central American societies, such as Jamaica and El Salvador. Belize lacks the violent class and racial conflict that has figured so prominently in the social life of its Central American neighbours.[134]
Political and economic power remain vested in the hands of the local elite. The sizeable middle group is composed of peoples of different ethnic backgrounds. This middle group does not constitute a unified social class, but rather a number of middle-class and working-class groups, loosely oriented around shared dispositions toward education, cultural respectability, and possibilities for upward social mobility. These beliefs, and the social practices they engender, help distinguish the middle group from the grass roots majority of the Belizean people.[134]
In 2013, the World Economic Forum ranked Belize 101st out of 135 countries in its Global Gender Gap Report. Of all the countries in Latin America and the Caribbean, Belize ranked 3rd from last and had the lowest female-to-male ratio for primary school enrolment.[135] In 2013, the UN gave Belize a Gender Inequality Index score of 0.435, ranking it 79th out of 148 countries.[136]
As of 2013, 48.3% of women in Belize participate in the workforce, compared to 81.8% of men.[136] 13.3% of the seats in Belize's National Assembly are filled by women.[136]
In Belizean folklore, there are the legends of Lang Bobi Suzi, La Llorona, La Sucia, Tata Duende, X'tabai, Anansi, Xtabay, Sisimite and the cadejo.
Most of the public holidays in Belize are traditional Commonwealth and Christian holidays, although some are specific to Belizean culture such as Garifuna Settlement Day and Heroes and Benefactors' Day, formerly Baron Bliss Day.[137] In addition, the month of September is considered a special time of national celebration called September Celebrations with a whole month of activities on a special events calendar. Besides Independence Day and St. George's Caye Day, Belizeans also celebrate Carnival during September, which typically includes several events spread across multiple days, with the main event being the Carnival Road March, usually held the Saturday before the 10th of September. In some areas of Belize, however, Carnival is celebrated at the traditional time before Lent (in February).[138]
Belizean cuisine is an amalgamation of all ethnicities in the nation and their respectively wide variety of foods. It might best be described as similar to both Mexican/Central American and Jamaican/Anglo-Caribbean cuisine, yet distinct from both, with Belizean touches and innovations handed down through generations. All immigrant communities, including the Indian and Chinese communities, add to the diversity of Belizean food.
The Belizean diet is both modern and traditional, and there are no strict rules. Breakfast typically consists of bread, flour tortillas, or fry jacks that are often homemade. Fry jacks are eaten with various cheeses, "fry" beans, various forms of eggs or cereal, along with powdered milk, coffee, or tea. Tacos made from corn or flour tortillas and meat pies can also be consumed for a hearty breakfast from a street vendor. Midday meals are the main meals for Belizeans, usually called "dinner". They vary, from foods such as rice and beans with or without coconut milk, tamales (fried maize shells with beans or fish), "panades", meat pies, escabeche (onion soup), chimole (soup), caldo, stewed chicken and garnaches (fried tortillas with beans, cheese, and sauce) to various constituted dinners featuring some type of rice and beans, meat and salad or coleslaw. Fried "fry" chicken is also consumed at this time.
In rural areas, meals are typically simpler than in cities. The Maya use maize, beans, or squash for most meals, and the Garifuna are fond of seafood, cassava (particularly made into cassava bread or Ereba) and vegetables. The nation abounds with restaurants and fast-food establishments selling food fairly cheaply. Local fruits are quite common, but raw vegetables from the markets less so. Mealtime is a communion for families, and schools and some businesses close at midday for lunch, reopening later in the afternoon. Steak is also common.
Punta is a popular genre of Garifuna music and has become one of the most popular kinds of music in Belize. It is distinctly Afro-Caribbean, and is sometimes said to be ready for international popularization like similarly-descended styles (reggae, calypso, merengue).
Brukdown is a modern style of Belizean music related to calypso. It evolved out of the music and dance of loggers, especially a form called buru. Reggae, dance hall, and soca imported from Jamaica and the rest of the West Indies, rap, hip-hop, heavy metal and rock music from the United States, are also popular among the youth of Belize.
The major sports in Belize are football, basketball, volleyball and cycling, with smaller followings of boat racing, athletics, softball, cricket, rugby and netball. Fishing is also popular in coastal areas of Belize.
The Cross Country Cycling Classic, also known as the "cross country" race or the Holy Saturday Cross Country Cycling Classic, is considered one of the most important Belize sports events. This one-day sports event is meant for amateur cyclists but has also gained worldwide popularity. The history of the Cross Country Cycling Classic in Belize dates back to the period when Monrad Metzgen picked up the idea from a small village on the Northern Highway (now Phillip Goldson Highway). The people of this village used to cover long distances on their bicycles to attend the weekly game of cricket. He built on this observation by creating a sporting event on the difficult terrain of the Western Highway, which was then poorly built.
Another major annual sporting event in Belize is the La Ruta Maya Belize River Challenge, a 4-day canoe marathon held each year in March. The race runs from San Ignacio to Belize City, a distance of 290 kilometres (180 mi).[139]
On Easter Day, citizens of Dangriga participate in a yearly fishing tournament. First, second, and third prizes are awarded based on a scoring combination of size, species, and number. The tournament is broadcast over local radio stations, and prize money is awarded to the winners.
The Belize national basketball team is the only national team that has achieved major victories internationally. The team won the 1998 CARICOM Men's Basketball Championship, held at the Civic Centre in Belize City, and subsequently participated in the 1999 Centrobasquet Tournament in Havana. The national team finished seventh of eight teams after winning only one game, despite playing close games all the way. In a return engagement at the 2000 CARICOM championship in Barbados, Belize placed fourth. Shortly thereafter, Belize moved to the Central American region and won the Central American Games championship in 2001.
The team has failed to duplicate this success, most recently finishing with a 2–4 record in the 2006 COCABA championship. The team finished second in the 2009 COCABA tournament in Cancún, Mexico, where it went 3–0 in group play. Belize won its opening match in the 2010 Centrobasquet Tournament, defeating Trinidad and Tobago, but lost badly to Mexico in a rematch of the COCABA final. A tough win over Cuba put Belize in position to advance, but the team fell to Puerto Rico in its final match and failed to qualify.
Simone Biles, the winner of four gold medals at the 2016 Rio Summer Olympics, is a dual citizen of the United States and of Belize,[140] which she considers her second home.[141] Biles is of Belizean-American descent.[142]
The national flower of Belize is the black orchid (Prosthechea cochleata, also known as Encyclia cochleata). The national tree is the mahogany tree (Swietenia macrophylla), which inspired the national motto Sub Umbra Floreo, which means "Under the shade I flourish". The national animal is the Baird's tapir and the national bird is the keel-billed toucan (Ramphastos sulphuratus).[143]
Coordinates: 17°4′N 88°42′W / 17.067°N 88.700°W / 17.067; -88.700
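The decimal values in the coordinate line above follow from the usual degrees-and-minutes conversion (decimal = degrees + minutes/60, negated for south latitudes and west longitudes). A small illustrative sketch:

# Degree/minute coordinates to decimal degrees (south and west take negative signs).
def dms_to_decimal(degrees: int, minutes: float, positive: bool = True) -> float:
    value = degrees + minutes / 60
    return value if positive else -value

lat = dms_to_decimal(17, 4)                   # 17°4'N  -> 17.067
lon = dms_to_decimal(88, 42, positive=False)  # 88°42'W -> -88.700
print(round(lat, 3), round(lon, 3))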
en/2616.html.txt
ADDED
@@ -0,0 +1,282 @@
Coordinates: 15°00′N 86°30′W / 15.000°N 86.500°W / 15.000; -86.500
Honduras (/hɒnˈdjʊərəs, -ˈdʊər-/, /-æs/;[7] Spanish: [onˈduɾas]), officially the Republic of Honduras (Spanish: República de Honduras), is a country in Central America. It is bordered to the west by Guatemala, to the southwest by El Salvador, to the southeast by Nicaragua, to the south by the Pacific Ocean at the Gulf of Fonseca, and to the north by the Gulf of Honduras, a large inlet of the Caribbean Sea.
Honduras was home to several important Mesoamerican cultures, most notably the Maya, before Spanish colonization in the sixteenth century. The Spanish introduced Roman Catholicism and the now predominant Spanish language, along with numerous customs that have blended with the indigenous culture. Honduras became independent in 1821 and has since been a republic, although it has consistently endured much social strife and political instability, and remains one of the poorest countries in the Western Hemisphere. In 1960, the northern part of what was the Mosquito Coast was transferred from Nicaragua to Honduras by the International Court of Justice.[8]
The nation's economy is primarily agricultural, making it especially vulnerable to natural disasters such as Hurricane Mitch in 1998.[9] The lower class is primarily agriculturally based while wealth is concentrated in the country's urban centers.[10] Honduras has a Human Development Index of 0.625, classifying it as a nation with medium development.[11] When the Index is adjusted for income inequality, its Inequality-adjusted Human Development Index is 0.443.[11]
Honduran society is predominantly Mestizo; however, American Indian, black and white individuals also live in Honduras (2017).[12] The nation had relatively high political stability until its 2009 coup and again with the 2017 presidential election.[13]
Honduras spans about 112,492 km2 (43,433 sq mi) and has a population exceeding 9 million.[2][3] Its northern portions are part of the Western Caribbean Zone, as reflected in the area's demographics and culture. Honduras is known for its rich natural resources, including minerals, coffee, tropical fruit, and sugar cane, as well as for its growing textiles industry, which serves the international market.
The literal meaning of the term "Honduras" is "depths" in Spanish. The name could either refer to the bay of Trujillo as an anchorage, fondura in the Leonese dialect of Spanish, or to Columbus's alleged quote that "Gracias a Dios que hemos salido de esas Honduras" ("Thank God we have departed from those depths").[14][15][16]
It was not until the end of the 16th century that Honduras was used for the whole province. Prior to 1580, Honduras referred to only the eastern part of the province, and Higueras referred to the western part.[16] Another early name is Guaymuras, revived as the name for the political dialogue in 2009 that took place in Honduras as opposed to Costa Rica.[17]
Hondurans are often referred to as Catracho or Catracha (fem) in Spanish. The word was coined by Nicaraguans and derives from the last name of the Spanish Honduran General Florencio Xatruch, who in 1857 led Honduran armed forces against an attempted invasion by North American adventurer William Walker. The nickname is considered complimentary, not derogatory.
In pre-Columbian times, almost all of modern Honduras was part of the Mesoamerican cultural area, with the exception of La Mosquitia in the extreme east, which seems to have been more connected to the Isthmo-Colombian area, although also in contact with and influenced by Mesoamerican societies. In the extreme west, Mayan civilization flourished for hundreds of years. The dominant, best-known and best-studied state within Honduras' borders was in Copán, which was located in a mainly non-Maya area, or on the frontier between Maya and non-Maya areas. Copán declined with other lowland centers during the conflagrations of the Terminal Classic in the 9th century. The Maya of this civilization survive in western Honduras as the Ch'orti', isolated from their Choltian linguistic peers to the west.[18]
However, Copán represents only a fraction of Honduran pre-Columbian history. Remnants of other civilizations are found throughout the country. Archaeologists have studied sites such as Naco [es] and La Sierra in the Naco Valley, Los Naranjos on Lake Yojoa, Yarumela in the Comayagua Valley,[19] La Ceiba and Salitron Viejo[20] (both now under the Cajón Dam reservoir), Selin Farm and Cuyamel in the Aguan valley, Cerro Palenque, Travesia, Curruste, Ticamaya, Despoloncal, and Playa de los Muertos in the lower Ulúa River valley, and many others.
In 2012, LiDAR scanning revealed that several previously unknown high-density settlements existed in La Mosquitia, corresponding to the legend of "La Ciudad Blanca". Excavation and study have since improved knowledge of the region's history. It is estimated that these settlements reached their zenith from 500 to 1000 AD.
On his fourth and final voyage to the New World in 1502, Christopher Columbus landed near the modern town of Trujillo, near Guaimoreto Lagoon, becoming the first European to visit the Bay Islands on the coast of Honduras.[21] On 30 July 1502, Columbus sent his brother Bartholomew to explore the islands, and Bartholomew encountered a Mayan trading vessel from Yucatán, carrying well-dressed Maya and a rich cargo.[22][23] Bartholomew's men stole the cargo they wanted and kidnapped the ship's elderly captain to serve as an interpreter[23] in the first recorded encounter between the Spanish and the Maya.[24]
In March 1524, Gil González Dávila became the first Spaniard to enter Honduras as a conquistador,[25][26] followed by Hernán Cortés, who had brought forces down from Mexico. Much of the conquest took place in the following two decades, first by groups loyal to Cristóbal de Olid, then by those loyal to Francisco de Montejo, and most particularly by those following Pedro de Alvarado. In addition to Spanish resources, the conquerors relied heavily on armed forces from Mexico: Tlaxcalan and Mexica armies of thousands who remained garrisoned in the region.
Resistance to conquest was led in particular by Lempira. Many regions in the north of Honduras never fell to the Spanish, notably the Miskito Kingdom. After the Spanish conquest, Honduras became part of Spain's vast empire in the New World within the Kingdom of Guatemala. Trujillo and Gracias were the first city-capitals. The Spanish ruled the region for approximately three centuries.
Honduras was organized as a province of the Kingdom of Guatemala and the capital was fixed, first at Trujillo on the Atlantic coast, and later at Comayagua, and finally at Tegucigalpa in the central part of the country.
Silver mining was a key factor in the Spanish conquest and settlement of Honduras.[27] Initially the mines were worked by local people through the encomienda system, but as disease and resistance made this option less available, slaves from other parts of Central America were brought in. When local slave trading stopped at the end of the sixteenth century, African slaves, mostly from Angola, were imported.[28] After about 1650, very few slaves or other outside workers arrived in Honduras.
Although the Spanish conquered the southern or Pacific portion of Honduras fairly quickly, they were less successful on the northern, or Atlantic, side. They managed to found a few towns along the coast, at Puerto Caballos and Trujillo in particular, but failed to conquer the eastern portion of the region and many pockets of independent indigenous people as well. The Miskito Kingdom in the northeast was particularly effective at resisting conquest. The Miskito Kingdom found support from northern European privateers, pirates and especially the British (formerly English) colony of Jamaica, which placed much of the area under its protection after 1740.
Honduras gained independence from Spain in 1821 and was a part of the First Mexican Empire until 1823, when it became part of the United Provinces of Central America. It has been an independent republic and has held regular elections since 1838. In the 1840s and 1850s Honduras participated in several failed attempts at Central American unity, such as the Confederation of Central America (1842–1845), the covenant of Guatemala (1842), the Diet of Sonsonate (1846), the Diet of Nacaome (1847) and National Representation in Central America (1849–1852). Although Honduras eventually adopted the name Republic of Honduras, the unionist ideal never waned, and Honduras was one of the Central American countries that pushed the hardest for a policy of regional unity.
Policies favoring international trade and investment began in the 1870s, and soon foreign interests became involved, first in shipping from the north coast, especially tropical fruit and most notably bananas, and then in building railroads. In 1888, a projected railroad line from the Caribbean coast to the capital, Tegucigalpa, ran out of money when it reached San Pedro Sula. As a result, San Pedro grew into the nation's primary industrial center and second-largest city. Comayagua was the capital of Honduras until 1880, when the capital moved to Tegucigalpa.
Since independence, nearly 300 small internal rebellions and civil wars have occurred in the country, including some changes of regime.
In the late nineteenth century, Honduras granted land and substantial exemptions to several US-based fruit and infrastructure companies in return for developing the country's northern regions. Thousands of workers came to the north coast as a result to work in banana plantations and other businesses that grew up around the export industry. Banana-exporting companies, dominated until 1930 by the Cuyamel Fruit Company, as well as the United Fruit Company, and Standard Fruit Company, built an enclave economy in northern Honduras, controlling infrastructure and creating self-sufficient, tax-exempt sectors that contributed relatively little to economic growth. American troops landed in Honduras in 1903, 1907, 1911, 1912, 1919, 1924 and 1925.[29]
In 1904, the writer O. Henry coined the term "banana republic" to describe Honduras,[30] publishing a book called Cabbages and Kings, about a fictional country, Anchuria, inspired by his experiences in Honduras, where he had lived for six months.[31] In The Admiral, O. Henry refers to the nation as a "small maritime banana republic"; naturally, the fruit was the entire basis of its economy.[32][33] According to a literary analyst writing for The Economist, "his phrase neatly conjures up the image of a tropical, agrarian country. But its real meaning is sharper: it refers to the fruit companies from the United States that came to exert extraordinary influence over the politics of Honduras and its neighbors."[34][35] In addition to drawing Central American workers north, the fruit companies encouraged immigration of workers from the English-speaking Caribbean, notably Jamaica and Belize, which introduced an African-descended, English-speaking and largely Protestant population into the country, although many of these workers left following changes to immigration law in 1939.[36]
Honduras joined the Allied Nations after Pearl Harbor, on 8 December 1941, and signed the Declaration by United Nations on 1 January 1942, along with twenty-five other governments.
|
47 |
+
|
48 |
+
Constitutional crises in the 1940s led to reforms in the 1950s. One reform gave workers permission to organize, and in 1954 a general strike paralyzed the northern part of the country for more than two months and ultimately led to further reforms. In 1960, the northern part of what was the Mosquito Coast was transferred from Nicaragua to Honduras by the International Court of Justice.[8] In 1963, a military coup unseated democratically elected President Ramón Villeda Morales.
In 1969, Honduras and El Salvador fought what became known as the Football War.[37] Border tensions led to acrimony between the two countries after Oswaldo López Arellano, the president of Honduras, blamed the deteriorating Honduran economy on immigrants from El Salvador. The relationship reached a low when El Salvador met Honduras for a three-round football elimination match preliminary to the World Cup.[38]
Tensions escalated and on 14 July 1969, the Salvadoran army invaded Honduras.[37] The Organization of American States (OAS) negotiated a cease-fire which took effect on 20 July and brought about a withdrawal of Salvadoran troops in early August.[38] Contributing factors to the conflict were a boundary dispute and the presence of thousands of Salvadorans living in Honduras illegally. After the week-long war, as many as 130,000 Salvadoran immigrants were expelled.[10]
Hurricane Fifi caused severe damage when it skimmed the northern coast of Honduras on 18 and 19 September 1974. Melgar Castro (1975–78) and Paz Garcia (1978–82) largely built the current physical infrastructure and telecommunications system of Honduras.[39]
In 1979, the country returned to civilian rule. A constituent assembly was popularly elected in April 1980 to write a new constitution, and general elections were held in November 1981. The constitution was approved in 1982 and the PLH government of Roberto Suazo won the election with a promise to carry out an ambitious program of economic and social development to tackle the recession in which Honduras found itself. He launched ambitious social and economic development projects sponsored by American development aid. Honduras became host to the largest Peace Corps mission in the world, and nongovernmental and international voluntary agencies proliferated. The Peace Corps withdrew its volunteers in 2012, citing safety concerns.[40]
During the early 1980s, the United States established a continuing military presence in Honduras to support El Salvador and the Contra guerrillas fighting the Nicaraguan government, and to develop an airstrip and a modern port in Honduras. Though spared the bloody civil wars wracking its neighbors, the Honduran army quietly waged campaigns against Marxist–Leninist militias such as the Cinchoneros Popular Liberation Movement, notorious for kidnappings and bombings,[41] and against many non-militants as well. The operation included a CIA-backed campaign of extrajudicial killings by government-backed units, most notably Battalion 316.[42]
In 1998, Hurricane Mitch caused massive and widespread destruction. Honduran President Carlos Roberto Flores said that fifty years of progress in the country had been reversed. Mitch destroyed about 70% of the country's crops and an estimated 70–80% of the transportation infrastructure, including nearly all bridges and secondary roads. Across Honduras 33,000 houses were destroyed, and an additional 50,000 damaged. Some 5,000 people were killed, and 12,000 more injured. Total losses were estimated at US$3 billion.[43]
In 2007, President of Honduras Manuel Zelaya and President of the United States George W. Bush began talks on US assistance to Honduras to tackle the latter's growing drug cartels in the Mosquitia region of eastern Honduras, using US Special Forces. This marked the beginning of a new foothold for the US military's continued presence in Central America.[44]
Under Zelaya, Honduras joined ALBA in 2008, but withdrew in 2010 after the 2009 Honduran coup d'état. In 2009, a constitutional crisis resulted when power transferred in a coup from the president to the head of Congress. The OAS suspended Honduras because it did not regard its government as legitimate.[45][46]
Countries around the world, the OAS, and the United Nations[47] formally and unanimously condemned the action as a coup d'état, refusing to recognize the de facto government, even though the lawyers consulted by the Library of Congress submitted to the United States Congress an opinion that declared the coup legal.[47][48][49] The Honduran Supreme Court also ruled that the proceedings had been legal. The government that followed the de facto government established a truth and reconciliation commission, Comisión de la Verdad y Reconciliación, which after more than a year of research and debate concluded that the ousting had been a coup d'état, and illegal in the commission's opinion.[50][51][52]
The north coast of Honduras borders the Caribbean Sea, and the Pacific Ocean lies to the south, reached through the Gulf of Fonseca. Honduras consists mainly of mountains, with narrow plains along the coasts. La Mosquitia, a large undeveloped lowland jungle, lies in the northeast, and the heavily populated lowland Sula Valley lies in the northwest. In La Mosquitia lies the UNESCO World Heritage Site Río Plátano Biosphere Reserve, with the Coco River, which divides Honduras from Nicaragua.
The Islas de la Bahía and the Swan Islands are off the north coast. Misteriosa Bank and Rosario Bank, 130 to 150 kilometres (81 to 93 miles) north of the Swan Islands, fall within the Exclusive Economic Zone (EEZ) of Honduras.
Natural resources include timber, gold, silver, copper, lead, zinc, iron ore, antimony, coal, fish, shrimp, and hydropower.
The climate varies from tropical in the lowlands to temperate in the mountains. The central and southern regions are relatively hotter and less humid than the northern coast.
The region is considered a biodiversity hotspot because of the many plant and animal species found there. Like other countries in the region, it contains vast biological resources. Honduras hosts more than 6,000 species of vascular plants, of which 630 (described so far) are orchids; around 250 reptiles and amphibians, more than 700 bird species, and 110 mammalian species, of which half are bats.[53]
In the northeastern region of La Mosquitia lies the Río Plátano Biosphere Reserve, a lowland rainforest which is home to a great diversity of life. The reserve was added to the UNESCO World Heritage Sites List in 1982.
Honduras has rain forests, cloud forests (which can rise up to nearly 3,000 metres or 9,800 feet above sea level), mangroves, savannas and mountain ranges with pine and oak trees, and the Mesoamerican Barrier Reef System. In the Bay Islands there are bottlenose dolphins, manta rays, parrot fish, schools of blue tang and whale shark.
Deforestation resulting from logging is rampant in Olancho Department. The clearing of land for agriculture is prevalent in the largely undeveloped La Mosquitia region, causing land degradation and soil erosion.
Lake Yojoa, which is Honduras' largest source of fresh water, is polluted by heavy metals produced from mining activities.[54] Some rivers and streams are also polluted by mining.[55]
Honduras is governed within a framework of a presidential representative democratic republic. The President of Honduras is both head of state and head of government. Executive power is exercised by the Honduran government. Legislative power is vested in the National Congress of Honduras. The judiciary is independent of both the executive branch and the legislature.
The National Congress of Honduras (Congreso Nacional) has 128 members (diputados), elected for four-year terms by proportional representation. Congressional seats are assigned to the parties' candidates on a departmental basis in proportion to the number of votes each party receives.[1]
In 1963, a military coup removed the democratically elected president, Ramón Villeda Morales. A string of authoritarian military governments held power uninterrupted until 1981, when Roberto Suazo Córdova was elected president.
The party system was dominated by the conservative National Party of Honduras (Partido Nacional de Honduras: PNH) and the liberal Liberal Party of Honduras (Partido Liberal de Honduras: PLH) until the 2009 Honduran coup d'état removed Manuel Zelaya from office and put Roberto Micheletti in his place.
In late 2012, 1,540 people were interviewed by ERIC in collaboration with the Jesuit university, as reported by the Associated Press. This survey found that 60.3% believed the police were involved in crime, 44.9% had "no confidence" in the Supreme Court, and 72% thought there was electoral fraud in the primary elections of November 2012. Also, 56% expected the presidential, legislative and municipal elections of 2013 to be fraudulent.[56]
Current Honduran president Juan Orlando Hernández took office on 27 January 2014. After he managed to stand for a second term,[57] a very close election in 2017 left uncertainty as to whether Hernández or his main challenger, television personality Salvador Nasralla, had prevailed.[58]
Honduras and Nicaragua had tense relations throughout 2000 and early 2001 due to a boundary dispute off the Atlantic coast. Nicaragua imposed a 35% tariff against Honduran goods due to the dispute.
In June 2009 a coup d'état ousted President Manuel Zelaya; he was taken in a military aircraft to neighboring Costa Rica. The General Assembly of the United Nations voted to denounce the coup and called for the restoration of Zelaya. Several Latin American nations, including Mexico, temporarily severed diplomatic relations with Honduras. In July 2010, full diplomatic relations were re-established with Mexico.[59] The United States sent out mixed messages after the coup; President Barack Obama called the ouster a coup and expressed support for Zelaya's return to power. US Secretary of State Hillary Clinton, advised by John Negroponte, the former Reagan-era Ambassador to Honduras implicated in the Iran–Contra affair, refrained from expressing support.[60] She has since explained that the US would have had to cut aid if it called Zelaya's ouster a military coup, although the US has a record of ignoring these events when it chooses.[61] Zelaya had expressed an interest in Hugo Chávez' Bolivarian Alliance for the Peoples of Our America (ALBA), and had joined in 2008. After the 2009 coup, Honduras withdrew its membership.
This interest in regional agreements may have increased the alarm of establishment politicians. When Zelaya began calling for a "fourth ballot box" to determine whether Hondurans wished to convoke a special constitutional congress, this sounded to some much like the constitutional amendments that had extended the terms of both Hugo Chávez and Evo Morales. "Chávez has served as a role model for like-minded leaders intent on cementing their power. These presidents are barely in office when they typically convene a constitutional convention to guarantee their reelection," said a 2009 Spiegel International analysis,[62] which noted that one reason to join ALBA was discounted Venezuelan oil. In addition to Chávez and Morales, Carlos Menem of Argentina, Fernando Henrique Cardoso of Brazil and Colombian President Álvaro Uribe had all taken this step, and Washington and the EU were both accusing the Sandinista government in Nicaragua of tampering with election results.[62] Politicians of all stripes expressed opposition to Zelaya's referendum proposal, and the Attorney-General accused him of violating the constitution. The Honduran Supreme Court agreed, saying that the constitution had put the Supreme Electoral Tribunal in charge of elections and referenda, not the National Statistics Institute, which Zelaya had proposed to have run the count.[63] Whether or not Zelaya's removal from power had constitutional elements, the Honduran constitution explicitly protects all Hondurans from forced expulsion from Honduras.
The United States maintains a small military presence at one Honduran base. The two countries conduct joint peacekeeping, counter-narcotics, humanitarian, disaster relief, medical and civic action exercises. U.S. troops conduct and provide logistics support for a variety of bilateral and multilateral exercises. The United States is Honduras' chief trading partner.[39]
Honduras' armed forces consist of the Honduran Army, the Honduran Navy and the Honduran Air Force.
In 2017, Honduras signed the UN Treaty on the Prohibition of Nuclear Weapons.[64]
Honduras is divided into 18 departments. The capital city is Tegucigalpa in the Central District within the department of Francisco Morazán.
A new administrative division called ZEDE (Zonas de empleo y desarrollo económico) was created in 2013. ZEDEs have a high level of autonomy with their own political system at a judicial, economic and administrative level, and are based on free market capitalism.
The World Bank categorizes Honduras as a low middle-income nation.[65] The nation's per capita income sits at around 600 US dollars, making it one of the lowest in North America.[66]
In 2010, 50% of the population were living below the poverty line.[67] By 2016 more than 66% were living below the poverty line.[65]
Economic growth in the last few years has averaged 7% a year, one of the highest rates in Latin America (2010).[65] Despite this, Honduras has seen the least development amongst all Central American countries.[68] Honduras is ranked 130 of 188 countries, with a Human Development Index of 0.625 that classifies the nation as having medium development (2015).[11] The three factors that go into Honduras' HDI (an extended and healthy life, accessibility of knowledge and standard of living) have all improved since 1990 but still remain relatively low, with life expectancy at birth being 73.3 years, expected years of schooling being 11.2 (mean of 6.2 years) and GNI per capita being $4,466 (2015).[11] The HDI for Latin America and the Caribbean overall is 0.751, with life expectancy at birth being 68.6, expected years of schooling being 11.5 (mean of 6.6) and GNI per capita being $6,281 (2015).[11]
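The Honduran figure above can be reproduced from the standard UNDP methodology of that era: each dimension is normalized against fixed goalposts, and the HDI is the geometric mean of the three dimension indices. The sketch below is illustrative only; the goalposts (20–85 years for life expectancy, 18 and 15 years for expected and mean schooling, and $100–$75,000 on a log scale for GNI per capita) are taken from the published UNDP technical notes, and small rounding differences are expected.

```python
import math

def hdi(life_exp, exp_school, mean_school, gni_pc):
    """Geometric mean of the three UNDP dimension indices (post-2010 methodology)."""
    health = (life_exp - 20) / (85 - 20)                  # life expectancy index
    education = (exp_school / 18 + mean_school / 15) / 2  # mean of the two schooling indices
    income = (math.log(gni_pc) - math.log(100)) / (math.log(75000) - math.log(100))
    return (health * education * income) ** (1 / 3)

# Honduras, using the 2015 values cited above
print(round(hdi(73.3, 11.2, 6.2, 4466), 3))  # 0.625, matching the cited HDI
```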
The 2009 Honduran coup d'état led to a variety of economic trends in the nation.[69] Overall growth has slowed, averaging 5.7 percent from 2006–2008 but slowing to 3.5 percent annually between 2010 and 2013.[69] Following the coup, trends of decreasing poverty and extreme poverty were reversed. The nation saw poverty increase by 13.2 percent and extreme poverty by 26.3 percent in just three years.[69] Furthermore, unemployment grew between 2008 and 2012 from 6.8 percent to 14.1 percent.[69]
Because much of the Honduran economy is based on small-scale agriculture of only a few exports, natural disasters have a particularly devastating impact. Natural disasters, such as Hurricane Mitch in 1998, have contributed to this inequality as they particularly affect poor rural areas.[70] Additionally, they are a large contributor to food insecurity in the country, as farmers are left unable to provide for their families.[70] A study done by the Honduran NGO World Neighbors determined that the terms "increased workload, decreased basic grains, expensive food, and fear" were most associated with Hurricane Mitch.[71]
The rural and urban poor were hit hardest by Hurricane Mitch.[70] Those in southern and western regions specifically were considered most vulnerable, as they were both subject to environmental destruction and home to many subsistence farmers.[70] Due to disasters such as Hurricane Mitch, the agricultural economic sector has declined by a third in the past twenty years.[70] This is mostly due to a decline in exports, such as bananas and coffee, that were affected by factors such as natural disasters.[70] Indigenous communities along the Patuca River were hit extremely hard as well.[9] The mid-Patuca region was almost completely destroyed.[9] Over 80% of the rice harvest and all of the banana, plantain, and manioc harvests were lost.[9] Relief and reconstruction efforts following the storm were partial and incomplete, reinforcing existing levels of poverty rather than reversing those levels, especially for indigenous communities.[9] The period between the end of food donations and the following harvest led to extreme hunger, causing deaths amongst the Tawahka population.[9] Those that were considered the most "land-rich" lost 36% of their total land on average.[9] Those that were the most "land-poor" lost less total land but a greater share of their overall total.[9] This meant that those hit hardest were single women, as they constitute the majority of this population.[9]
Since the 1970s when Honduras was designated a "food priority country" by the UN, organizations such as The World Food Program (WFP) have worked to decrease malnutrition and food insecurity.[72] A large majority of Honduran farmers live in extreme poverty, or below 180 US dollars per capita.[73] Currently one fourth of children are affected by chronic malnutrition.[72] WFP is currently working with the Honduran government on a School Feeding Program which provides meals for 21,000 Honduran schools, reaching 1.4 million school children.[72] WFP also participates in disaster relief through reparations and emergency response in order to aid in quick recovery that tackles the effects of natural disasters on agricultural production.[72]
Honduras' Poverty Reduction Strategy was implemented in 1999 and aimed to cut extreme poverty in half by 2015.[74] While spending on poverty-reduction aid increased, there was only a 2.5% increase in GDP between 1999 and 2002.[75] This improvement still left Honduras behind countries that had not received Poverty Reduction Strategy aid.[75] The World Bank believes that this inefficiency stems from a lack of focus on infrastructure and rural development.[75] Extreme poverty saw a low of 36.2 percent only two years after the implementation of the strategy but then increased to 66.5 percent by 2012.[69]
Poverty Reduction Strategies were also intended to affect social policy through increased investment in the education and health sectors.[76] This was expected to lift poor communities out of poverty while also increasing the workforce as a means of stimulating the Honduran economy.[76] Conditional cash transfers were used to do this by the Family Assistance Program.[76] This program was restructured in 1998 in an attempt to increase the effectiveness of cash transfers for health and education, specifically for those in extreme poverty.[76] Overall spending within Poverty Reduction Strategies has been focused on the education and health sectors, increasing social spending from 44% of Honduras' GDP in 2000 to 51% in 2004.[76]
Critics of aid from International Finance Institutions believe that the World Bank's Poverty Reduction Strategy results in little substantive change to Honduran policy.[76] According to Jose Cuesta of Cambridge University, Poverty Reduction Strategies also lacked clear priorities, a specific intervention strategy, strong commitment to the strategy, and more effective macro-level economic reforms.[75] Due to this, he believes that the strategy did not provide a pathway for economic development that could lift Honduras out of poverty, resulting in neither lasting economic growth nor poverty reduction.[75]
Prior to its 2009 coup, Honduras widely expanded social spending and sharply increased the minimum wage.[69] Efforts to decrease inequality were swiftly reversed following the coup.[69] When Zelaya was removed from office, social spending as a percent of GDP decreased from 13.3 percent in 2009 to 10.9 percent in 2012.[69] This decrease in social spending exacerbated the effects of the recession, which the nation was previously relatively well equipped to deal with.[69]
The World Bank Group Executive Board approved a plan known as the new Country Partnership Framework (CPF).[70] This plan's objectives are to expand social program coverage, strengthen infrastructure, increase financing accessibility, strengthen regulatory frameworks and institutional capacity, improve the productivity of rural areas, strengthen resilience to natural disasters and climate change, and build up local governments so that violence and crime rates decrease.[65] The overall aim of the initiative is to decrease the inequality and vulnerability of certain populations while increasing economic growth.[70] Additionally, the signing of the U.S.–Central America Free Trade Agreement (CAFTA) was meant to diversify the economy in order to promote growth and expand the range of exports the country relies on.[74]
Levels of income inequality in Honduras are higher than in any other Latin American country.[69] Unlike other Latin American countries, inequality steadily increased in Honduras between 1991 and 2005.[74] Between 2006 and 2010 inequality decreased, but it began rising again in 2010.[69]
When Honduras' Human Development Index is adjusted for inequality (a measure known as the IHDI), its development index is reduced to 0.443.[11] The levels of inequality in each aspect of development can also be assessed.[11] In 2015, inequality of life expectancy at birth was 19.6%, inequality in education was 24.4% and inequality in income was 41.5%.[11] The overall loss in human development due to inequality was 29.2%.[11]
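The relationship between the HDI, the overall loss, and the IHDI is direct: the IHDI is the HDI discounted by the percentage loss due to inequality. A minimal sketch of that relationship (UNDP's underlying per-dimension Atkinson-inequality computation, which produces the loss figure itself, is omitted here):

```python
def ihdi_from_loss(hdi, overall_loss_pct):
    """IHDI = HDI scaled down by the overall percentage loss due to inequality."""
    return hdi * (1 - overall_loss_pct / 100)

print(ihdi_from_loss(0.625, 29.2))  # ~0.4425, i.e. the cited 0.443 after rounding
print(ihdi_from_loss(0.751, 23.4))  # ~0.5753, the regional figure cited in the next paragraph
```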
The IHDI for Latin America and the Caribbean overall is 0.575 with an overall loss of 23.4%.[11] In 2015 for the entire region, inequality of life expectancy at birth was 22.9%, inequality in education was 14.0% and inequality in income was 34.9%.[11] While Honduras has a higher life expectancy than other countries in the region (before and after inequality adjustments), its quality of education and economic standard of living are lower.[11] Income inequality and education inequality have a large impact on the overall development of the nation.[11]
Inequality also exists between rural and urban areas as it relates to the distribution of resources.[77] Poverty is concentrated in southern, eastern, and western regions where rural and indigenous peoples live. North and central Honduras are home to the country's industries and infrastructure, resulting in low levels of poverty.[66] Poverty is concentrated in rural Honduras, a pattern that is reflected throughout Latin America.[12] The effects of poverty on rural communities are vast. Poor communities typically live in adobe homes, lack material resources, have limited access to medical resources, and live off of basics such as rice, maize and beans.[78]
The lower class predominantly consists of rural subsistence farmers and landless peasants.[79] Since 1965 there has been an increase in the number of landless peasants in Honduras which has led to a growing class of urban poor individuals.[79] These individuals often migrate to urban centers in search of work in the service sector, manufacturing, or construction.[79] Demographers believe that without social and economic reform, rural to urban migration will increase, resulting in the expansion of urban centers.[79] Within the lower class, underemployment is a major issue.[79] Individuals that are underemployed often only work as part-time laborers on seasonal farms meaning their annual income remains low.[79] In the 1980s peasant organizations and labor unions such as the National Federation of Honduran Peasants, The National Association of Honduran Peasants and the National Union of Peasants formed.[79]
It is not uncommon for rural individuals to voluntarily enlist in the military; however, this often does not offer stable or promising career opportunities.[80] The majority of high-ranking officials in the Honduran army are recruited from elite military academies.[80] Additionally, the majority of enlistment in the military is forced.[80] Forced recruitment largely relies on an alliance between the Honduran government, military and upper-class Honduran society.[80] In urban areas males are often sought out from secondary schools, while in rural areas roadblocks aid the military in handpicking recruits.[80] Higher socio-economic status enables individuals to more easily evade the draft.[80]
The Honduran middle class is a small group defined by relatively low membership and income levels.[79] Movement from lower to middle class is typically facilitated by higher education.[79] Professionals, students, farmers, merchants, business employees, and civil servants are all considered a part of the Honduran middle class.[79] Opportunities for employment and the industrial and commercial sectors are slow-growing, limiting middle class membership.[79]
The Honduran upper class has much higher income levels than the rest of the Honduran population, reflecting large amounts of income inequality.[79] Much of the upper class attributes its success to the growth of cotton and livestock exports after World War II.[79] The wealthy are not politically unified and differ in political and economic views.[79]
The currency is the Honduran lempira.
The government operates both the electrical grid, Empresa Nacional de Energía Eléctrica (ENEE) and the land-line telephone service, Hondutel. ENEE receives heavy subsidies to counter its chronic financial problems, but Hondutel is no longer a monopoly. The telecommunication sector was opened to private investment on 25 December 2005, as required under CAFTA. The price of petroleum is regulated, and the Congress often ratifies temporary price regulation for basic commodities.
Gold, silver, lead and zinc are mined.[82]
In 2005 Honduras signed CAFTA, a free trade agreement with the United States. In December 2005, Puerto Cortés, the primary seaport of Honduras, was included in the U.S. Container Security Initiative.[83]
In 2006 the U.S. Department of Homeland Security and the Department of Energy announced the first phase of the Secure Freight Initiative (SFI), which built upon existing port security measures. SFI gave the U.S. government enhanced authority, allowing it to scan containers at overseas ports for nuclear and radiological materials in order to improve the risk assessment of individual US-bound containers. The initial phase of Secure Freight involved deploying nuclear detection and other devices to six foreign ports, among them Puerto Cortés.
Containers in these ports have been scanned since 2007 for radiation and other risk factors before they are allowed to depart for the United States.[84]
In 2012, a memorandum of understanding with a group of international investors obtained Honduran government approval to build a zone (city) for economic development with its own laws, tax system, judiciary and police, but opponents brought a suit against it in the Supreme Court, calling it a "state within a state".[85] In 2013, Honduras' Congress ratified Decree 120, which led to the establishment of ZEDEs. The government began construction of the first zones in June 2015.[86]
About half of the electricity sector in Honduras is privately owned. The remaining generation capacity is run by ENEE (Empresa Nacional de Energía Eléctrica).
Key challenges remain in the sector.
Infrastructure for transportation in Honduras consists of 699 kilometres (434 miles) of railways; 13,603 kilometres (8,453 miles) of roadways;[1] seven ports and harbors; and 112 airports altogether (12 paved, 100 unpaved).[1] The Ministry of Public Works, Transport and Housing (SOPTRAVI, by its Spanish acronym) is responsible for transport sector policy.
Water supply and sanitation in Honduras differ greatly from urban centers to rural villages. Larger population centers generally have modernized water treatment and distribution systems, but water quality is often poor because of lack of proper maintenance and treatment. Rural areas generally have basic drinking water systems with limited capacity for water treatment. Many urban areas have sewer systems in place to collect wastewater, but proper treatment of wastewater is rare. In rural areas sanitary facilities are generally limited to latrines and basic septic pits.
Water and sanitation services were historically provided by the Servicio Autónomo Nacional de Acueductos y Alcantarillados (SANAA). In 2003, the government enacted a new "water law" which called for the decentralization of water services. Under the 2003 law, local communities have both the right and the responsibility to own, operate, and control their own drinking water and wastewater systems. Since this law passed, many communities have joined together to address water and sanitation issues on a regional basis.
Many national and international non-government organizations have a history of working on water and sanitation projects in Honduras. International groups include the Red Cross, Water 1st, Rotary Club, Catholic Relief Services, Water for People, EcoLogic Development Fund, CARE, the Canadian Executive Service Organization (CESO-SACO), Engineers Without Borders – USA, Flood The Nations, Students Helping Honduras (SHH), Global Brigades, and Agua para el Pueblo[87] in partnership with AguaClara at Cornell University.
In addition, many government organizations work on projects in Honduras, including the European Union, USAID, the US Army Corps of Engineers, Cooperación Andalucía, the government of Japan, and others.
In recent years Honduras has experienced very high levels of violence and criminality.[88] Homicide violence reached a peak in 2012, with an average of 20 homicides a day.[89] Cities such as San Pedro Sula and Tegucigalpa have registered homicide rates among the highest in the world. The violence is associated with drug trafficking, as Honduras is often a transit point, and with a number of urban gangs, mainly MS-13 and the 18th Street gang. As of 2017, however, organizations such as InSight Crime reported a rate of 42 homicides per 100,000 inhabitants,[90] a 26% drop from 2016 figures.
Violence in Honduras increased after Plan Colombia was implemented and after Mexican President Felipe Calderón declared the war against drug trafficking in Mexico.[91] Along with neighboring El Salvador and Guatemala, Honduras forms part of the Northern Triangle of Central America, which has been characterized as one of the most violent regions in the world.[92] As a result of crime and increasing murder rates, the flow of migrants from Honduras to the U.S. also went up. The rise in violence in the region has received international attention.
Honduras had a population of 9,587,522 in 2018.[2][3] The proportion of the population below the age of 15 in 2010 was 36.8%, 58.9% were between 15 and 65 years old, and 4.3% were 65 years old or older.[93]
Since 1975, emigration from Honduras has accelerated as economic migrants and political refugees sought a better life elsewhere. A majority of expatriate Hondurans live in the United States. A 2012 US State Department estimate suggested that between 800,000 and one million Hondurans lived in the United States at that time, nearly 15% of the Honduran population.[39] The large uncertainty about numbers is because numerous Hondurans live in the United States without a visa. In the 2010 census in the United States, 617,392 residents identified as Hondurans, up from 217,569 in 2000.[94]
The ethnic breakdown of Honduran society was 90% Mestizo, 7% American Indian, 2% Black and 1% White (2017).[12] The 1927 Honduran census provides no racial data, but in 1930 five classifications were created: white, Indian, Negro, yellow, and mestizo.[95] This system was used in the 1935 and 1940 censuses.[95] Mestizo was used to describe individuals that did not fit neatly into the categories of white, Indian, Negro or yellow, or who were of mixed white-Indian descent.[95]
John Gillin considers Honduras to be one of thirteen "Mestizo countries" (along with Mexico, Guatemala, El Salvador, Nicaragua, Panama, Colombia, Venezuela, Cuba, Ecuador, Peru, Bolivia and Paraguay).[96] He claims that in much of Spanish America little attention is paid to race and race mixture, with the result that social status relies little on one's physical features.[96] However, in "Mestizo countries" such as Honduras, this is not the case.[96] Social stratification from Spain was able to develop in these countries through colonization.[96]
During colonization the majority of Honduras' indigenous population died of diseases like smallpox and measles resulting in a more homogenous indigenous population compared to other colonies.[79] Nine indigenous and African American groups are recognized by the government in Honduras.[97] The majority of Amerindians in Honduras are Lenca, followed by the Miskito, Cho'rti', Tolupan, Pech and Sumo.[97] Around 50,000 Lenca individuals live in the west and western interior of Honduras while the other small native groups are located throughout the country.[79]
The majority of blacks in Honduras are culturally ladino, meaning they are culturally Hispanic.[79] Non-ladino groups in Honduras include the Black Carib, Miskito, Arab immigrants and the black population of the Islas de la Bahía.[79] The Black Carib population descended from freed slaves from Saint Vincent.[79] The Miskito population (about 10,000 individuals) are the descendants of African and British immigrants and are extremely racially diverse.[79] While the Black Carib and Miskito populations have similar origins, Black Caribs are considered black while Miskitos are considered indigenous.[79] This is largely a reflection of cultural differences, as Black Caribs have retained much of their original African culture.[79] The majority of Arab Hondurans are of Palestinian and Lebanese descent.[79] They are known as "turcos" in Honduras because of migration during the rule of the Ottoman Empire.[79] They have maintained cultural distinctiveness and prospered economically.[79]
The male to female ratio of the Honduran population is 1.01. This ratio stands at 1.05 at birth, 1.04 from 15–24 years old, 1.02 from 25–54 years old, 0.88 from 55–64 years old, and 0.77 for those 65 years or older.[12]
The Gender Development Index (GDI) was 0.942 in 2015, with an HDI of 0.600 for females and 0.637 for males.[11] Life expectancy at birth is 70.9 for males and 75.9 for females.[11] Expected years of schooling in Honduras is 10.9 years for males (mean of 6.1) and 11.6 for females (mean of 6.2).[11] These measures do not reveal a large disparity between male and female development levels; however, GNI per capita is vastly different by gender.[11] Males have a GNI per capita of $6,254 while that of females is only $2,680.[11] Honduras' overall GDI is higher than that of other medium-HDI nations (0.871) but lower than the overall GDI for Latin America and the Caribbean (0.981).[11]
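The GDI is defined by UNDP as the ratio of the female HDI to the male HDI, so the quoted value can be checked directly from the two figures above (a one-line sketch under that standard definition):

```python
def gdi(female_hdi, male_hdi):
    """Gender Development Index: ratio of female to male HDI."""
    return female_hdi / male_hdi

print(round(gdi(0.600, 0.637), 3))  # 0.942, matching the cited GDI
```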
The United Nations Development Program (UNDP) ranks Honduras 116th for measures including women's political power and female access to resources.[98] The Gender Inequality Index (GII) depicts gender-based inequalities in Honduras according to reproductive health, empowerment, and economic activity.[11] Honduras had a GII of 0.461 and ranked 101 of 159 countries in 2015.[11] 25.8% of Honduras' parliament is female, and 33.4% of adult females have a secondary education or higher, while only 31.1% of adult males do.[11] Despite this, while male participation in the labor market is 84.4%, female participation is 47.2%.[11] Honduras' maternal mortality ratio is 129 and the adolescent birth rate is 65.0 for women ages 15–19.[11]
Familialism and machismo carry a lot of weight within Honduran society.[99] Familialism refers to the idea of individual interests being second to that of the family, most often in relation to dating and marriage, abstinence, and parental approval and supervision of dating.[99] Aggression and proof of masculinity through physical dominance are characteristic of machismo.[99]
Honduras has historically functioned with a patriarchal system, like many other Latin American countries.[100] Honduran men claim responsibility for family decisions, including reproductive health decisions.[100] Recently Honduras has seen an increase in challenges to this notion as feminist movements and access to global media increase.[100] There has been an increase in educational attainment, labor force participation, urban migration, late-age marriage, and contraceptive use amongst Honduran women.[100]
Between 1971 and 2001 the Honduran total fertility rate decreased from 7.4 births to 4.4 births.[100] This is largely attributable to an increase in educational attainment and workforce participation by women, as well as more widespread use of contraceptives.[100] In 1996, 50% of women were using at least one type of contraceptive.[100] By 2001, 62% were, largely through female sterilization, the contraceptive pill, injectable birth control, and IUDs.[100] A study done in 2001 of Honduran men and women reflects the conceptualization of reproductive health and decision making in Honduras.[100] 28% of men and 25% of women surveyed believed men were responsible for decisions regarding family size and family planning uses.[100] 21% of men believed men were responsible for both.[100]
Sexual violence against women has proven to be a large issue in Honduras and has caused many to migrate to the U.S.[101] The prevalence of child sexual abuse was 7.8% in Honduras, with the majority of reports being from children under the age of 11.[102] Women that experienced sexual abuse as children were found to be twice as likely to be in violent relationships.[102] Femicide is widespread in Honduras.[101] In 2014, 40% of unaccompanied refugee minors were female.[101] Gangs are responsible for much of the sexual violence against women.[101] Between 2005 and 2013, according to the UN Special Rapporteur on Violence Against Women, violent deaths of women increased 263.4 percent.[101] Impunity for sexual violence and femicide crimes was 95 percent in 2014.[101] Additionally, many girls are forced into human trafficking and prostitution.[101]
Between 1995 and 1997 Honduras recognized domestic violence as both a public health issue and a punishable offense, due to efforts by the Pan American Health Organization (PAHO).[98] PAHO's subcommittee on Women, Health and Development was used as a guide to develop programs that aid in domestic violence prevention and victim assistance.[98] However, a study done in 2009 showed that while the policy requires health care providers to report cases of sexual violence, emergency contraception, and victim referral to legal institutions and support groups, very few other regulations exist within the realm of registry, examination and follow-up.[103] Unlike other Central American countries such as El Salvador, Guatemala and Nicaragua, Honduras does not have detailed guidelines requiring service providers to be extensively trained and to respect the rights of sexual violence victims.[103] Since the study was done, the UNFPA and the Health Secretariat of Honduras have worked to develop and implement improved guidelines for handling cases of sexual violence.[103]
An educational program in Honduras known as Sistema de Aprendizaje Tutorial (SAT) has attempted to "undo gender" through focusing on gender equality in everyday interactions.[104] Honduras' SAT program is one of the largest in the world, second only to Colombia's with 6,000 students.[104] It is currently sponsored by Asociacion Bayan, a Honduran NGO, and the Honduran Ministry of Education.[104] It functions by integrating gender into curriculum topics, linking gender to the ideas of justice and equality, encouraging reflection, dialogue and debate and emphasizing the need for individual and social change.[104] This program was found to increase gender consciousness and a desire for gender equality amongst Honduran women through encouraging discourse surrounding existing gender inequality in the Honduran communities.[104]
Spanish is the official, national language, spoken by virtually all Hondurans. In addition to Spanish, a number of indigenous languages are spoken in some small communities. Other languages spoken by some include Honduran sign language and Bay Islands Creole English.[105]
The main indigenous languages include Miskito, Garifuna, Sumo, Pech, Tolupan and Ch'orti'.
The Lenca isolate lost all its fluent native speakers in the 20th century but is currently undergoing revival efforts among the members of the ethnic population of about 100,000. The largest immigrant languages are Arabic (42,000), Armenian (1,300), Turkish (900), and Yue Chinese (1,000).[105]
Although most Hondurans are nominally Roman Catholic (which would be considered the main religion), membership in the Roman Catholic Church is declining while membership in Protestant churches is increasing. The International Religious Freedom Report, 2008, notes that a CID Gallup poll reported that 51.4% of the population identified themselves as Catholic, 36.2% as evangelical Protestant, 1.3% as belonging to other religions, including Muslims, Buddhists, Jews and Rastafarians, and 11.1% as belonging to no religion or as unresponsive; 8% reported being either atheist or agnostic. Customary Catholic Church tallies estimate membership at 81% Catholic, based on the pastoral accounts that priests (in more than 185 parishes) are required to file each year.[107][108]
The CIA Factbook lists Honduras as 97% Catholic and 3% Protestant.[1] Commenting on statistical variations everywhere, John Green of Pew Forum on Religion and Public Life notes that: "It isn't that ... numbers are more right than [someone else's] numbers ... but how one conceptualizes the group."[109] Often people attend one church without giving up their "home" church. Many who attend evangelical megachurches in the US, for example, attend more than one church.[110] This shifting and fluidity is common in Brazil where two-fifths of those who were raised evangelical are no longer evangelical and Catholics seem to shift in and out of various churches, often while still remaining Catholic.[111]
Most pollsters suggest an annual poll taken over a number of years would provide the best method of knowing religious demographics and variations in any single country. Still, Anglican, Presbyterian, Methodist, Seventh-day Adventist, Lutheran, Latter-day Saint (Mormon) and Pentecostal churches thrive in Honduras, and there are Protestant seminaries. The Catholic Church, still the only "church" that is recognized, is also thriving in the number of schools, hospitals, and pastoral institutions (including its own medical school) that it operates. Its archbishop, Óscar Andrés Rodríguez Maradiaga, is also very popular, with the government, with other churches, and within his own church. Practitioners of the Buddhist, Jewish, Islamic, Bahá'í, Rastafari and indigenous denominations and religions exist.[112]
See Health in Honduras
About 83.6% of the population are literate and the net primary enrollment rate was 94% in 2004.[113] In 2014, the primary school completion rate was 90.7%.[114] Honduras has bilingual (Spanish and English) and even trilingual (Spanish with English, Arabic, or German) schools and numerous universities.[115]
Higher education is governed by the National Autonomous University of Honduras, which has centers in the most important cities of Honduras.
Crime in Honduras is rampant and criminals operate with a high degree of impunity. Honduras has one of the highest murder rates in the world. Official statistics from the Honduran Observatory on National Violence show Honduras' homicide rate was 60 per 100,000 in 2015 with the majority of homicide cases unprosecuted.[116]
Highway assaults and carjackings at roadblocks or checkpoints set up by criminals with police uniforms and equipment occur frequently. Although reports of kidnappings of foreigners are not common, families of kidnapping victims often pay ransoms without reporting the crime to police out of fear of retribution, so kidnapping figures may be underreported.[116]
Owing to measures taken by government and business in 2014 to improve tourist safety, Roatan and the Bay Islands have lower crime rates than the Honduran mainland.[116]
In the less populated region of Gracias a Dios, narcotics-trafficking is rampant and police presence is scarce. Threats against U.S. citizens by drug traffickers and other criminal organizations have resulted in the U.S. Embassy placing restrictions on the travel of U.S. officials through the region.[116]
The most renowned Honduran painter is José Antonio Velásquez. Other important painters include Carlos Garay and Roque Zelaya. Some of Honduras' most notable writers are Lucila Gamero de Medina, Froylán Turcios, Ramón Amaya Amador, Juan Pablo Suazo Euceda, Marco Antonio Rosa,[117] Roberto Sosa, Eduardo Bähr, Amanda Castro, Javier Abril Espinoza, Teófilo Trejo, and Roberto Quesada.
The José Francisco Saybe theater in San Pedro Sula is home to the Círculo Teatral Sampedrano (Theatrical Circle of San Pedro Sula).
Honduras' film industry has experienced a boom over the past two decades. Since the premiere of the movie "Anita la cazadora de insectos" in 2001, the number of Honduran productions has increased, many made in collaboration with countries such as Mexico, Colombia, and the U.S. The most well-known Honduran films are "El Xendra", "Amor y Frijoles", and "Cafe con aroma a mi tierra".
Honduran cuisine is a fusion of indigenous Lenca cuisine, Spanish cuisine, Caribbean cuisine and African cuisine. There are also dishes from the Garifuna people. Coconut and coconut milk are featured in both sweet and savory dishes. Regional specialties include fried fish, tamales, carne asada and baleadas.
Other popular dishes include: meat roasted with chismol and carne asada, chicken with rice and corn, and fried fish with pickled onions and jalapeños. Some of the ways seafood and some meats are prepared in coastal areas and in the Bay Islands involve coconut milk.
The soups Hondurans enjoy include bean soup, mondongo soup (tripe soup), seafood soups and beef soups. Generally these soups are served mixed with plantains, yuca, and cabbage, and served with corn tortillas.
Other typical dishes are the montucas or corn tamales, stuffed tortillas, and tamales wrapped in plantain leaves. Honduran typical dishes also include an abundant selection of tropical fruits such as papaya, pineapple, plum, sapote, passion fruit and bananas which are prepared in many ways while they are still green.
At least half of Honduran households have at least one television. Public television has a far smaller role than in most other countries. Honduras' main newspapers are La Prensa, El Heraldo, La Tribuna and Diario Tiempo. The official newspaper is La Gaceta (Honduras) [es].
Punta is the main music genre of Honduras, with other sounds such as Caribbean salsa, merengue, reggae, and reggaeton all widely heard, especially in the north, and Mexican rancheras heard in the rural interior of the country. The best-known musicians are Guillermo Anderson and Polache.
Some of Honduras' national holidays include Honduras Independence Day on 15 September and Children's Day or Día del Niño, which is celebrated in homes, schools and churches on 10 September; on this day, children receive presents and have parties similar to Christmas or birthday celebrations. Some neighborhoods have piñatas on the street. Other holidays are Easter, Maundy Thursday, Good Friday, Day of the Soldier (3 October to celebrate the birth of Francisco Morazán), Christmas, El Dia de Lempira on 20 July,[118] and New Year's Eve.
Honduras Independence Day festivities start early in the morning with marching bands. Each band wears different colors and features cheerleaders. Fiesta Catracha takes place this same day: typical Honduran foods such as beans, tamales, baleadas, cassava with chicharrón, and tortillas are offered.
On Christmas Eve people reunite with their families and close friends to have dinner, then give out presents at midnight. In some cities fireworks are seen and heard at midnight. On New Year's Eve there is food, "cohetes" (fireworks), and festivities. Birthdays are also great events, and include piñatas filled with candies and surprises for the children.
La Ceiba Carnival is celebrated in La Ceiba, a city on the north coast, in the second half of May to celebrate the day of the city's patron saint, Saint Isidore. People from all over the world come for one week of festivities. Every night there is a little carnaval (carnavalito) in a neighborhood. On Saturday there is a big parade with floats and displays, featuring people from many countries. The celebration is also accompanied by the Milk Fair, where many Hondurans come to show off their farm products and animals.
The flag of Honduras is composed of three equal horizontal stripes. The blue upper and lower stripes represent the Pacific Ocean and the Caribbean Sea. The central stripe is white. It contains five blue stars representing the five states of the Central American Union. The middle star represents Honduras, located in the center of the Central American Union.
The coat of arms was established in 1945. It is an equilateral triangle; at its base is a volcano between three castles, over which are a rainbow and a shining sun. The triangle is placed on an area symbolising land bathed by both seas. Around all of this is an oval bearing in golden lettering: "Republic of Honduras, Free, Sovereign and Independent".
The "National Anthem of Honduras" is a result of a contest carried out in 1914 during the presidency of Manuel Bonilla. In the end, it was the poet Augusto Coello that ended up writing the anthem, with German-born Honduran composer Carlos Hartling writing the music. The anthem was officially adopted on 15 November 1915, during the presidency of Alberto de Jesús Membreño [es]. The anthem is composed of a choir and seven stroonduran.[clarification needed]
The national flower is the orchid Rhyncholaelia digbyana (formerly known as Brassavola digbyana), which replaced the rose in 1969. The change was carried out during the administration of general Oswaldo López Arellano on the grounds that Brassavola digbyana "is an indigenous plant of Honduras; this flower has exceptional characteristics of beauty, vigor and distinction", as the decree states.
The national tree of Honduras was declared in 1928 to be simply "the Pine that appears symbolically in our Coat of Arms" (el Pino que figura simbólicamente en nuestro Escudo),[119] even though pines comprise a genus rather than a single species and the law does not specify which kind of pine should appear in the coat of arms. Because it is so common in the country, Pinus oocarpa has since become the species most strongly associated with the national tree, although it has no legal status as such. Pinus caribaea is another species associated with the national tree.
The national mammal is the white-tailed deer (Odocoileus virginianus), which was adopted as a measure to discourage excessive hunting. It is one of two species of deer that live in Honduras.
The national bird of Honduras is the scarlet macaw (Ara macao). This bird was much valued by the pre-Columbian civilizations of Honduras.
Legends and fairy tales are paramount in Honduran culture. Lluvia de Peces (Rain of Fish) is an example of this. The legends of El Cadejo and La Llorona are also popular.
The major sports in Honduras are football, basketball, rugby, volleyball and cycling, with smaller followings for athletics, softball and handball. Information about some of the sports organisations in Honduras is listed below:
en/2617.html.txt
ADDED
Hong Kong (/ˌhɒŋˈkɒŋ/ (listen); Chinese: 香港, Cantonese: [hœ́ːŋ.kɔ̌ːŋ] (listen)), officially the Hong Kong Special Administrative Region of the People's Republic of China (HKSAR), is a metropolitan area and special administrative region of the People's Republic of China on the eastern Pearl River Delta of the South China Sea. With over 7.5 million people of various nationalities[d] in a 1,104-square-kilometre (426 sq mi) territory, Hong Kong is one of the most densely populated places in the world.
Hong Kong became a colony of the British Empire after the Qing Empire ceded Hong Kong Island at the end of the First Opium War in 1842.[16] The colony expanded to the Kowloon Peninsula in 1860 after the Second Opium War and was further extended when Britain obtained a 99-year lease of the New Territories in 1898.[17][18] The whole territory was transferred to China in 1997.[19] As a special administrative region, Hong Kong maintains separate governing and economic systems from that of mainland China under a principle of "one country, two systems".[20]
Originally a sparsely populated area of farming and fishing villages,[16] the territory has become one of the world's most significant financial centres and commercial ports.[21] It is the world's tenth-largest exporter and ninth-largest importer.[22][23] Hong Kong has a major capitalist service economy characterised by low taxation and free trade, and its currency, the Hong Kong dollar, is the eighth most traded currency in the world.[24] Hong Kong is home to the second-highest number of billionaires of any city in the world,[25] the highest number of billionaires of any city in Asia, and the largest concentration of ultra high-net-worth individuals of any city in the world.[26][27] Although the city has one of the highest per capita incomes in the world, severe income inequality exists among its residents.[28]
Hong Kong is a highly developed territory and ranks fourth on the UN Human Development Index.[29] The city has the largest number of skyscrapers of any city in the world,[30] and its residents have some of the highest life expectancies in the world.[29] The dense space led to a developed transportation network with public transport rates exceeding 90 percent.[31] Hong Kong is ranked sixth in the Global Financial Centres Index and is ranked fourth in Asia after Tokyo, Shanghai and Singapore.[32]
The name of the territory, first romanised as "He-Ong-Kong" in 1780,[35] originally referred to a small inlet located between Aberdeen Island and the southern coast of Hong Kong Island. Aberdeen was an initial point of contact between British sailors and local fishermen.[36] Although the source of the romanised name is unknown, it is generally believed to be an early phonetic rendering of the Cantonese pronunciation hēung góng. The name translates as "fragrant harbour" or "incense harbour".[33][34][37] "Fragrant" may refer to the sweet taste of the harbour's freshwater influx from the Pearl River or to the odour from incense factories lining the coast of northern Kowloon. The incense was stored near Aberdeen Harbour for export before Victoria Harbour developed.[37] Sir John Davis (the second colonial governor) offered an alternative origin; Davis said that the name derived from "Hoong-keang" ("red torrent"), reflecting the colour of soil over which a waterfall on the island flowed.[38]
The simplified name Hong Kong was frequently used by 1810.[39] The name was also commonly written as the single word Hongkong until 1926, when the government officially adopted the two-word name.[40] Some corporations founded during the early colonial era still keep this name, including Hongkong Land, Hongkong Electric Company, Hongkong and Shanghai Hotels and the Hongkong and Shanghai Banking Corporation (HSBC).[41][42]
The region is first known to have been occupied by humans during the Neolithic period, about 6,000 years ago.[43] However, stone tools excavated at the Wong Tei Tung archaeological site in 2003 were dated by optical luminescence testing to between 35,000 and 39,000 years ago.[44]
Early Hong Kong settlers were a semi-coastal people[43] who migrated from inland and brought knowledge of rice cultivation.[45] The Qin dynasty incorporated the Hong Kong area into China for the first time in 214 BCE, after conquering the indigenous Baiyue.[46] The region was consolidated under the Nanyue kingdom (a predecessor state of Vietnam) after the Qin collapse[47] and recaptured by China after the Han conquest.[48] During the Mongol conquest of China in the 13th century, the Southern Song court was briefly located in modern-day Kowloon City (the Sung Wong Toi site) before its final defeat in the 1279 Battle of Yamen.[49] By the end of the Yuan dynasty, seven large families had settled in the region and owned most of the land. Settlers from nearby provinces migrated to Kowloon throughout the Ming dynasty.[50]
The earliest European visitor was Portuguese explorer Jorge Álvares, who arrived in 1513.[51][52] Portuguese merchants established a trading post called Tamão in Hong Kong waters and began regular trade with southern China. Although the traders were expelled after military clashes in the 1520s,[53] Portuguese-Chinese trade relations were re-established by 1549. Portugal acquired a permanent lease for Macau in 1557.[54]
After the Qing conquest, maritime trade was banned under the Haijin policies. The Kangxi Emperor lifted the prohibition, allowing foreigners to enter Chinese ports in 1684.[55] Qing authorities established the Canton System in 1757 to regulate trade more strictly, restricting non-Russian ships to the port of Canton.[56] Although European demand for Chinese commodities like tea, silk, and porcelain was high, Chinese interest in European manufactured goods was insignificant, so that Chinese goods could only be bought with precious metals. To reduce the trade imbalance, the British sold large amounts of Indian opium to China. Faced with a drug crisis, Qing officials pursued ever more aggressive actions to halt the opium trade.[57]
In 1839, the Daoguang Emperor rejected proposals to legalise and tax opium and ordered imperial commissioner Lin Zexu to eradicate the opium trade. The commissioner destroyed opium stockpiles and halted all foreign trade,[58] triggering a British military response and the First Opium War. The Qing surrendered early in the war and ceded Hong Kong Island in the Convention of Chuenpi. However, both countries were dissatisfied and did not ratify the agreement.[59] After more than a year of further hostilities, Hong Kong Island was formally ceded to the United Kingdom in the 1842 Treaty of Nanking.[60]
Administrative infrastructure was quickly built by early 1842, but piracy, disease, and hostile Qing policies initially prevented the government from attracting commerce. Conditions on the island improved during the Taiping Rebellion in the 1850s, when many Chinese refugees, including wealthy merchants, fled mainland turbulence and settled in the colony.[16] Further tensions between the British and Qing over the opium trade escalated into the Second Opium War. The Qing were again defeated and forced to give up Kowloon Peninsula and Stonecutters Island in the Convention of Peking.[17] By the end of this war, Hong Kong had evolved from a transient colonial outpost into a major entrepôt. Rapid economic improvement during the 1850s attracted foreign investment, as potential stakeholders became more confident in Hong Kong's future.[61]
The colony was further expanded in 1898 when Britain obtained a 99-year lease of the New Territories.[18] The University of Hong Kong was established in 1911 as the territory's first institution of higher education.[62] Kai Tak Airport began operation in 1924, and the colony avoided a prolonged economic downturn after the 1925–26 Canton–Hong Kong strike.[63][64] At the start of the Second Sino-Japanese War in 1937, Governor Geoffry Northcote declared Hong Kong a neutral zone to safeguard its status as a free port.[65] The colonial government prepared for a possible attack, evacuating all British women and children in 1940.[66] The Imperial Japanese Army attacked Hong Kong on 8 December 1941, the same morning as its attack on Pearl Harbor.[67] Hong Kong was occupied by Japan for almost four years before Britain resumed control on 30 August 1945.[68]
Its population rebounded quickly after the war, as skilled Chinese migrants fled from the Chinese Civil War, and more refugees crossed the border when the Communist Party took control of mainland China in 1949.[69] Hong Kong became the first of the Four Asian Tiger economies to industrialise during the 1950s.[70] With a rapidly increasing population, the colonial government began reforms to improve infrastructure and public services. The public-housing estate programme, Independent Commission Against Corruption, and Mass Transit Railway were all established during the post-war decades to provide safer housing, integrity in the civil service, and more-reliable transportation.[71][72] Although the territory's competitiveness in manufacturing gradually declined because of rising labour and property costs, it transitioned to a service-based economy. By the early 1990s, Hong Kong had established itself as a global financial centre and shipping hub.[73]
The colony faced an uncertain future as the end of the New Territories lease approached, and Governor Murray MacLehose raised the question of Hong Kong's status with Deng Xiaoping in 1979.[74] Diplomatic negotiations with China resulted in the 1984 Sino-British Joint Declaration, in which the United Kingdom agreed to transfer the colony in 1997 and China would guarantee Hong Kong's economic and political systems for 50 years after the transfer.[75] The impending transfer triggered a wave of mass emigration as residents feared an erosion of civil rights, the rule of law, and quality of life.[76] Over half a million people left the territory during the peak migration period, from 1987 to 1996.[77] Hong Kong was transferred to China on 1 July 1997, after 156 years of British rule.[19]
Immediately after the transfer, Hong Kong was severely affected by several crises. The government was forced to use substantial foreign exchange reserves to maintain the Hong Kong dollar's currency peg during the 1997 Asian financial crisis,[69] and the recovery from this was muted by an H5N1 avian-flu outbreak[78] and a housing surplus.[79] This was followed by the 2003 SARS epidemic, during which the territory experienced its most serious economic downturn.[80]
Political debates after the transfer of sovereignty have centred around the region's democratic development and the central government's adherence to the "one country, two systems" principle. After reversal of the last colonial era Legislative Council democratic reforms following the handover,[81] the regional government unsuccessfully attempted to enact national security legislation pursuant to Article 23 of the Basic Law.[82] The central government decision to implement nominee pre-screening before allowing Chief Executive elections triggered a series of protests in 2014 which became known as the Umbrella Revolution.[83] Discrepancies in the electoral registry and disqualification of elected legislators after the 2016 Legislative Council elections[84][85][86] and enforcement of national law in the West Kowloon high-speed railway station raised further concerns about the region's autonomy.[87] In June 2019, large protests again erupted in response to a proposed extradition amendment bill permitting extradition of fugitives to mainland China. The protests continued into December, possibly becoming the largest-scale political protest movement in Hong Kong history,[88] with organisers claiming to have attracted more than one million Hong Kong residents.[89]
Hong Kong is a special administrative region of China, with executive, legislative, and judicial powers devolved from the national government.[90] The Sino-British Joint Declaration provided for economic and administrative continuity through the transfer of sovereignty,[75] resulting in an executive-led governing system largely inherited from the territory's history as a British colony.[91] Under these terms and the "one country, two systems" principle, the Basic Law of Hong Kong is the regional constitution.[92]
The regional government is composed of three branches:
The Chief Executive is the head of government and serves for a maximum of two five-year terms. The State Council (led by the Premier of China) appoints the Chief Executive after nomination by the Election Committee, which is composed of 1,200 business, community, and government leaders.[100][101][102]
The Legislative Council has 70 members, each serving a four-year term.[103] 35 are directly elected from geographical constituencies and 35 represent functional constituencies (FC). Thirty FC councillors are selected from limited electorates representing sectors of the economy or special interest groups,[104] and the remaining five members are nominated from sitting District Council members and selected in region-wide double direct elections.[105] All popularly elected members are chosen by proportional representation. The 30 limited electorate functional constituencies fill their seats using first-past-the-post or instant-runoff voting.[104]
Twenty-two political parties had representatives elected to the Legislative Council in the 2016 election.[106] These parties have aligned themselves into three ideological groups: the pro-Beijing camp (the current government), the pro-democracy camp, and localist groups.[107] The Communist Party does not have an official political presence in Hong Kong, and its members do not run in local elections.[108] Hong Kong is represented in the National People's Congress by 36 deputies chosen through an electoral college, and 203 delegates in the Chinese People's Political Consultative Conference appointed by the central government.[7]
Chinese national law does not generally apply in the region and Hong Kong is treated as a separate jurisdiction.[98] Its judicial system is based on common law, continuing the legal tradition established during British rule.[109] Local courts may refer to precedents set in English law and overseas jurisprudence.[110] However, mainland criminal procedure law applies to cases investigated by the Office for Safeguarding National Security of the CPG in the HKSAR.[111] Interpretative and amending power over the Basic Law and jurisdiction over acts of state lie with the central authority, making regional courts ultimately subordinate to the mainland's socialist civil law system.[112] Decisions made by the Standing Committee of the National People's Congress override any territorial judicial process.[113] Furthermore, in circumstances where the Standing Committee declares a state of emergency in Hong Kong, the State Council may enforce national law in the region.[114]
The territory's jurisdictional independence is most apparent in its immigration and taxation policies. The Immigration Department issues passports for permanent residents which differ from those of the mainland or Macau,[115] and the region maintains a regulated border with the rest of the country. All travellers between Hong Kong and China and Macau must pass through border controls, regardless of nationality.[116] Mainland Chinese citizens do not have right of abode in Hong Kong and are subject to immigration controls.[117] Public finances are handled separately from the national government; taxes levied in Hong Kong do not fund the central authority.[118][119]
The Hong Kong Garrison of the People's Liberation Army is responsible for the region's defence.[120] Although the Chairman of the Central Military Commission is supreme commander of the armed forces,[121] the regional government may request assistance from the garrison.[122] Hong Kong residents are not required to perform military service and current law has no provision for local enlistment, so its defence is composed entirely of non-Hongkongers.[123]
The central government and Ministry of Foreign Affairs handle diplomatic matters, but Hong Kong retains the ability to maintain separate economic and cultural relations with foreign nations.[124] The territory actively participates in the World Trade Organization, the Asia-Pacific Economic Cooperation forum, the International Olympic Committee, and many United Nations agencies.[125][126][127] The regional government maintains trade offices in Greater China and other nations.[128]
The territory is divided into 18 districts, each represented by a district council. These advise the government on local issues such as public facility provisioning, community programme maintenance, cultural promotion, and environmental policy. There are a total of 479 district council seats, 452 of which are directly elected.[129] Rural committee chairmen, representing outlying villages and towns, fill the 27 non-elected seats.[130]
Hong Kong is governed by a hybrid regime that is not fully representative of the population. Legislative Council members elected by functional constituencies composed of professional and special interest groups are accountable to those narrow corporate electorates and not the general public. This electoral arrangement has guaranteed a pro-establishment majority in the legislature since the transfer of sovereignty. Similarly, the Chief Executive is selected by establishment politicians and corporate members of the Election Committee rather than directly elected.[131] Although universal suffrage for Chief Executive and all Legislative Council elections is a defined goal of Basic Law Articles 45 and 68,[132] the legislature is only partially directly elected and the executive continues to be nominated by an unrepresentative body.[131] The government has been repeatedly petitioned to introduce direct elections for these positions.[133][134]
Ethnic minorities (except those of European ancestry) have marginal representation in government and often experience discrimination in housing, education, and employment.[135][136] Employment vacancies and public service appointments frequently have language requirements which minority job seekers do not meet, and language education resources remain inadequate for Chinese learners.[137][138] Foreign domestic helpers, predominantly women from the Philippines and Indonesia, have little protection under regional law. Although they live and work in Hong Kong, these workers are not treated as ordinary residents and are ineligible for right of abode in the territory.[139] Sex trafficking is also an issue: local and foreign women and girls are forced into prostitution in brothels, homes, and businesses in the city.[140][141][142][143]
The Joint Declaration guarantees the Basic Law for 50 years after the transfer of sovereignty.[75] It does not specify how Hong Kong will be governed after 2047, and the central government's role in determining the territory's future system of government is the subject of political debate and speculation. Hong Kong's political and judicial systems may be integrated with China's at that time, or the territory may continue to be administered separately.[144][145]
In 2020, in a period of large-scale protests, the Standing Committee of the National People's Congress passed the controversial Hong Kong national security law.[146] The law criminalises acts that were previously considered protected speech under Hong Kong law and establishes the Office for Safeguarding National Security of the CPG in the HKSAR, an investigative office under Central People's Government authority immune from HKSAR jurisdiction.[111][147] The United Kingdom considers the law to be a serious violation of the Joint Declaration.[148]
Hong Kong is on China's southern coast, 60 km (37 mi) east of Macau, on the east side of the mouth of the Pearl River estuary. It is surrounded by the South China Sea on all sides except the north, which neighbours the Guangdong city of Shenzhen along the Sham Chun River. The territory's 2,755 km2 (1,064 sq mi) area consists of Hong Kong Island, the Kowloon Peninsula, the New Territories, Lantau Island, and over 200 other islands. Of the total area, 1,073 km2 (414 sq mi) is land and 35 km2 (14 sq mi) is water.[29] The territory's highest point is Tai Mo Shan, 957 metres (3,140 ft) above sea level.[149] Urban development is concentrated on the Kowloon Peninsula, Hong Kong Island, and in new towns throughout the New Territories.[150] Much of this is built on reclaimed land; 70 km2 (27 sq mi) (six per cent of the total land or about 25 per cent of developed space in the territory) is reclaimed from the sea.[151]
Undeveloped terrain is hilly to mountainous, with very little flat land, and consists mostly of grassland, woodland, shrubland, or farmland.[152][153] About 40 per cent of the remaining land area is country parks and nature reserves.[154] The territory has a diverse ecosystem: over 3,000 species of vascular plants occur in the region (300 of which are native to Hong Kong), along with thousands of insect, avian, and marine species.[155][156]
Hong Kong has a humid subtropical climate (Köppen Cwa), characteristic of southern China. Summer is hot and humid, with occasional showers and thunderstorms and warm air from the southwest. Typhoons occur most often then, sometimes resulting in floods or landslides. Winters are mild and usually sunny at the beginning, becoming cloudy towards February; an occasional cold front brings strong, cooling winds from the north. The most temperate seasons are spring and autumn, which are generally sunny and dry.[157] When there is snowfall, which is extremely rare, it is usually at high elevations. Hong Kong averages 1,709 hours of sunshine per year;[158] the highest and lowest recorded temperatures at the Hong Kong Observatory are 36.6 °C (97.9 °F) on 22 August 2017 and 0.0 °C (32.0 °F) on 18 January 1893.[159] The highest and lowest recorded temperatures in all of Hong Kong are 39.0 °C (102 °F) at Wetland Park on 22 August 2017,[160] and −6.0 °C (21.2 °F) at Tai Mo Shan on 24 January 2016.
Hong Kong has the world's largest number of skyscrapers, with 317 towers taller than 150 metres (490 ft),[30] and the third-largest number of high-rise buildings in the world.[163] The lack of available space restricted development to high-density residential tenements and commercial complexes packed closely together on buildable land.[164] Single-family detached homes are extremely rare and generally only found in outlying areas.[165]
The International Commerce Centre and Two International Finance Centre are the tallest buildings in Hong Kong and among the tallest in the Asia-Pacific region.[166] Other distinctive buildings lining the Hong Kong Island skyline include the HSBC Main Building, the anemometer-topped triangular Central Plaza, the circular Hopewell Centre, and the sharp-edged Bank of China Tower.[167][168]
Demand for new construction has contributed to frequent demolition of older buildings, freeing space for modern high-rises.[169] However, many examples of European and Lingnan architecture are still found throughout the territory. Older government buildings are examples of colonial architecture. The 1846 Flagstaff House, the former residence of the commanding British military officer, is the oldest Western-style building in Hong Kong.[170] Some (including the Court of Final Appeal Building and the Hong Kong Observatory) retain their original function, and others have been adapted and reused; the Former Marine Police Headquarters was redeveloped into a commercial and retail complex,[171] and Béthanie (built in 1875 as a sanatorium) houses the Hong Kong Academy for Performing Arts.[172] The Tin Hau Temple, dedicated to the sea goddess Mazu (originally built in 1012 and rebuilt in 1266), is the territory's oldest existing structure.[173] The Ping Shan Heritage Trail has architectural examples of several imperial Chinese dynasties, including the Tsui Sing Lau Pagoda (Hong Kong's only remaining pagoda).[174]
Tong lau, mixed-use tenement buildings constructed during the colonial era, blended southern Chinese architectural styles with European influences. These were especially prolific during the immediate post-war period, when many were rapidly built to house large numbers of Chinese migrants.[175] Examples include Lui Seng Chun, the Blue House in Wan Chai, and the Shanghai Street shophouses in Mong Kok. Mass-produced public-housing estates, built since the 1960s, are mainly constructed in modernist style.[176]
The Census and Statistics Department estimated Hong Kong's population at 7,482,500 in mid-2019.[177] The overwhelming majority (92 per cent) is Han Chinese,[6] most of whom are Taishanese, Teochew, Hakka, and a number of other Cantonese peoples.[178][179][180] The remaining eight per cent are non-ethnic-Chinese minorities, primarily Filipinos, Indonesians, and South Asians.[6][181] About half the population have some form of British nationality, a legacy of colonial rule; 3.4 million residents have British National (Overseas) status, and 260,000 British citizens live in the territory.[182] The vast majority also hold Chinese nationality, automatically granted to all ethnic Chinese residents at the transfer of sovereignty.[183] The headline population density of about 6,800 people per square kilometre does not reflect the true density: only 6.9 per cent of the land is residential, so the average density within residential areas is closer to a highly cramped 100,000 people per square kilometre.
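As a rough arithmetic check on that last figure (the division is implied rather than stated in the text), dividing the headline density by the residential share of land gives:

\[
\frac{6{,}800 \ \text{people/km}^2}{0.069} \approx 98{,}600 \ \text{people/km}^2 \approx 100{,}000 \ \text{people/km}^2.
\]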
The predominant language is Cantonese, a variety of Chinese originating in Guangdong. It is spoken by 94.6 per cent of the population, 88.9 per cent as a first language and 5.7 per cent as a second language.[3] Slightly over half the population (53.2 per cent) speaks English, the other official language;[2] 4.3 per cent are native speakers, and 48.9 per cent speak English as a second language.[3] Code-switching, mixing English and Cantonese in informal conversation, is common among the bilingual population.[184] Post-handover governments have promoted Mandarin, which is currently about as prevalent as English; 48.6 per cent of the population speaks Mandarin, with 1.9 per cent native speakers and 46.7 per cent speaking it as a second language.[3] Traditional Chinese characters are used in writing, rather than the simplified characters used on the mainland.[185]
Among the religious population, the traditional "three teachings" of China, Buddhism, Confucianism, and Taoism, have the most adherents (20 per cent), and are followed by Christianity (12 per cent) and Islam (4 per cent).[186] Followers of other religions, including Sikhism, Hinduism, Judaism, and the Bahá'í Faith, generally originate from regions where their religion predominates.[186]
Life expectancy in Hong Kong was 82.2 years for males and 87.6 years for females in 2018,[177] the sixth-highest in the world.[187] Cancer, pneumonia, heart disease, cerebrovascular disease, and accidents are the territory's five leading causes of death.[188] The universal public healthcare system is funded by general-tax revenue, and treatment is highly subsidised; on average, 95 per cent of healthcare costs are covered by the government.[189]
Income inequality has risen since the transfer of sovereignty, as the region's ageing population has gradually added to the number of nonworking people.[190] Although median household income steadily increased during the decade to 2016, the wage gap remained high;[191] the 90th percentile of earners receive 41 per cent of all income.[191] The city has the most billionaires per capita, with one billionaire per 109,657 people.[192] Despite government efforts to reduce the growing disparity,[193] median income for the top 10 per cent of earners is 44 times that of the bottom 10 per cent.[194][195]
Hong Kong has a capitalist mixed service economy, characterised by low taxation, minimal government market intervention, and an established international financial market.[196] It is the world's 35th-largest economy, with a nominal GDP of approximately US$373 billion.[11] Although Hong Kong's economy has ranked at the top of the Heritage Foundation's economic freedom index since 1995,[197][198] the territory has a relatively high level of income disparity.[12] The Hong Kong Stock Exchange is the seventh-largest in the world, with a market capitalisation of HK$30.4 trillion (US$3.87 trillion) as of December 2018[update].[199]
Hong Kong is the tenth-largest trading entity in exports and imports (2017), trading more goods in value than its gross domestic product.[22][23] Over half of its cargo throughput consists of transshipments (goods travelling through Hong Kong). Products from mainland China account for about 40 per cent of that traffic.[200] The city's location allowed it to establish a transportation and logistics infrastructure which includes the world's seventh-busiest container port[201] and the busiest airport for international cargo.[202] The territory's largest export markets are mainland China and the United States.[29]
It has little arable land and few natural resources, importing most of its food and raw materials. More than 90 per cent of Hong Kong's food is imported, including nearly all its meat and rice.[203] Agriculture accounts for 0.1 per cent of GDP and consists of growing premium food and flower varieties.[204]
Although the territory had one of Asia's largest manufacturing economies during the latter half of the colonial era, Hong Kong's economy is now dominated by the service sector. The sector generates 92.7 per cent of economic output, with the public sector accounting for about 10 per cent.[205] Between 1961 and 1997 Hong Kong's gross domestic product increased by a factor of 180, and per capita GDP increased by a factor of 87.[206][207] The territory's GDP relative to mainland China's peaked at 27 per cent in 1993; it fell to less than three per cent in 2017, as the mainland developed and liberalised its economy.[208]
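Those two growth factors jointly imply how much the population grew over the same period (a derivation from the quoted figures, not a number stated in the text). Since GDP equals per-capita GDP times population:

\[
\text{population factor} = \frac{\text{GDP factor}}{\text{per-capita factor}} = \frac{180}{87} \approx 2.1,
\]

i.e. Hong Kong's population roughly doubled between 1961 and 1997.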
Economic and infrastructure integration with China has increased significantly since the 1978 start of market liberalisation on the mainland. Since resumption of cross-boundary train service in 1979, many rail and road links have been improved and constructed (facilitating trade between regions).[209][210] The Closer Economic Partnership Arrangement formalised a policy of free trade between the two areas, with each jurisdiction pledging to remove remaining obstacles to trade and cross-boundary investment.[211] A similar economic partnership with Macau details the liberalisation of trade between the special administrative regions.[212] Chinese companies have expanded their economic presence in the territory since the transfer of sovereignty. Mainland firms represent over half of the Hang Seng Index value, up from five per cent in 1997.[213][214]
As the mainland liberalised its economy, Hong Kong's shipping industry faced intense competition from other Chinese ports. Fifty per cent of China's trade goods were routed through Hong Kong in 1997, dropping to about 13 per cent by 2015.[215] The territory's minimal taxation, common law system, and civil service attract overseas corporations wishing to establish a presence in Asia.[215] The city has the second-highest number of corporate headquarters in the Asia-Pacific region.[216] Hong Kong is a gateway for foreign direct investment in China, giving investors open access to mainland Chinese markets through direct links with the Shanghai and Shenzhen stock exchanges. The territory was the first market outside mainland China for renminbi-denominated bonds, and is one of the largest hubs for offshore renminbi trading.[217]
The government has had a passive role in the economy. Colonial governments had little industrial policy, and implemented almost no trade controls. Under the doctrine of "positive non-interventionism", post-war administrations deliberately avoided the direct allocation of resources; active intervention was considered detrimental to economic growth.[218] While the economy transitioned to a service basis during the 1980s,[218] late colonial governments introduced interventionist policies. Post-handover administrations continued and expanded these programmes, including export-credit guarantees, a compulsory pension scheme, a minimum wage, anti-discrimination laws, and a state mortgage backer.[219]
Tourism is a major part of the economy, accounting for five per cent of GDP.[171] In 2016, 26.6 million visitors contributed HK$258 billion (US$32.9 billion) to the territory, making Hong Kong the 14th most popular destination for international tourists. It is the most popular Chinese city for tourists, receiving over 70 per cent more visitors than its closest competitor (Macau).[220] The city is ranked as one of the most expensive cities for expatriates.[221][222]
Hong Kong has a highly developed, sophisticated transport network. Over 90 per cent of daily trips are made on public transport, the highest percentage in the world.[31] The Octopus card, a contactless smart payment card, is widely accepted on railways, buses and ferries, and can be used for payment in most retail stores.[223]
The Mass Transit Railway (MTR) is an extensive passenger rail network, connecting 93 metro stations throughout the territory.[224] With a daily ridership of over five million, the system serves 41 per cent of all public transit passengers in the city[225] and has an on-time rate of 99.9 per cent.[226] Cross-boundary train service to Shenzhen is offered by the East Rail line, and longer-distance inter-city trains to Guangzhou, Shanghai, and Beijing are operated from Hung Hom Station.[227] Connecting service to the national high-speed rail system is provided at West Kowloon railway station.[228]
Although public transport systems handle most passenger traffic, there are over 500,000 private vehicles registered in Hong Kong.[229] Automobiles drive on the left (unlike in mainland China), due to historical influence of the British Empire.[230] Vehicle traffic is extremely congested in urban areas, exacerbated by limited space to expand roads and an increasing number of vehicles.[231] More than 18,000 taxicabs, easily identifiable by their bright colour, are licensed to carry riders in the territory.[232] Bus services operate more than 700 routes across the territory,[225] with smaller public light buses (also known as minibuses) serving areas standard buses do not reach as frequently or directly.[233] Highways, organised with the Hong Kong Strategic Route and Exit Number System, connect all major areas of the territory.[234] The Hong Kong–Zhuhai–Macau Bridge provides a direct route to the western side of the Pearl River estuary.[210]
Hong Kong International Airport is the territory's primary airport. Over 100 airlines operate flights from the airport, including locally based Cathay Pacific (flag carrier), Hong Kong Airlines, regional carrier Cathay Dragon, low-cost airline HK Express and cargo airline Air Hong Kong.[235] It is the eighth-busiest airport by passenger traffic,[236] and handles the most air-cargo traffic in the world.[237] Most private recreational aviation traffic flies through Shek Kong Airfield, under the supervision of the Hong Kong Aviation Club.[238]
The Star Ferry operates two lines across Victoria Harbour for its 53,000 daily passengers.[239] Ferries also serve outlying islands inaccessible by other means. Smaller kai-to boats serve the most remote coastal settlements.[240] Ferry travel to Macau and mainland China is also available.[241] Junks, once common in Hong Kong waters, are no longer widely available and are used privately and for tourism.[242]
The Peak Tram, Hong Kong's first public transport system, has provided funicular rail transport between Central and Victoria Peak since 1888.[243] The Central and Western District has an extensive system of escalators and moving pavements, including the Mid-Levels escalator (the world's longest outdoor covered escalator system).[244] Hong Kong Tramways covers a portion of Hong Kong Island. The MTR operates its Light Rail system, serving the northwestern New Territories.[224]
Hong Kong generates most of its electricity locally.[245] The vast majority of this energy comes from fossil fuels, with 46 per cent from coal and 47 per cent from petroleum.[246] The rest is from other imports, including nuclear energy generated on the mainland.[247] Renewable sources account for a negligible amount of energy generated for the territory.[248] Small-scale wind-power sources have been developed,[245] and a small number of private homes have installed solar panels.[249]
With few natural lakes and rivers, high population density, inaccessible groundwater sources, and extremely seasonal rainfall, the territory does not have a reliable source of freshwater. The Dongjiang River in Guangdong supplies 70 per cent of the city's water,[250] and the remaining demand is filled by harvesting rainwater.[251] Toilets flush with seawater, greatly reducing freshwater use.[250]
Broadband Internet access is widely available, with 92.6 per cent of households connected. Connections over fibre-optic infrastructure are increasingly prevalent,[252] contributing to the high regional average connection speed of 21.9 Mbit/s (the world's fourth-fastest).[253] Mobile-phone use is ubiquitous;[254] there are more than 18 million mobile-phone accounts,[255] more than double the territory's population.[177]
Hong Kong is characterised as a hybrid of East and West. Traditional Chinese values emphasising family and education blend with Western ideals, including economic liberty and the rule of law.[256] Although the vast majority of the population is ethnically Chinese, Hong Kong has developed a distinct identity. The territory diverged from the mainland due to its long period of colonial administration and a different pace of economic, social, and cultural development. Mainstream culture is derived from immigrants originating from various parts of China. This was influenced by British-style education, a separate political system, and the territory's rapid development during the late 20th century.[257][258] Most migrants of that era fled poverty and war, which is reflected in the prevailing attitude toward wealth; Hongkongers tend to link self-image and decision-making to material benefits.[259][260] Residents' sense of local identity has markedly increased post-handover: 53 per cent of the population identify as "Hongkongers", while 11 per cent describe themselves as "Chinese". The rest report mixed identities, 23 per cent as "Hongkonger in China" and 12 per cent as "Chinese in Hong Kong".[261]
Traditional Chinese family values, including family honour, filial piety, and a preference for sons, are prevalent.[262] Nuclear families are the most common households, although multi-generational and extended families are not unusual.[263] Spiritual concepts such as feng shui are observed; large-scale construction projects often hire consultants to ensure proper building positioning and layout. The degree of its adherence to feng shui is believed to determine the success of a business.[167] Bagua mirrors are regularly used to deflect evil spirits,[264] and buildings often lack floor numbers with a 4;[265] the number has a similar sound to the word for "die" in Cantonese.[266]
Food in Hong Kong is primarily based on Cantonese cuisine, despite the territory's exposure to foreign influences and its residents' varied origins. Rice is the staple food, and is usually served plain with other dishes.[267] Freshness of ingredients is emphasised. Poultry and seafood are commonly sold live at wet markets, and ingredients are used as quickly as possible.[268] There are five daily meals: breakfast, lunch, afternoon tea, dinner, and siu yeh.[269] Dim sum, as part of yum cha (brunch), is a dining-out tradition with family and friends. Dishes include congee, cha siu bao, siu yuk, egg tarts, and mango pudding. Local versions of Western food are served at cha chaan teng (fast, casual restaurants). Common cha chaan teng menu items include macaroni in soup, deep-fried French toast, and Hong Kong-style milk tea.[267]
Hong Kong developed into a filmmaking hub during the late 1940s as a wave of Shanghai filmmakers migrated to the territory, and these movie veterans helped rebuild the colony's entertainment industry over the next decade.[270] By the 1960s, the city was well known to overseas audiences through films such as The World of Suzie Wong.[271] When Bruce Lee's Way of the Dragon was released in 1972, local productions became popular outside Hong Kong. During the 1980s, films such as A Better Tomorrow, As Tears Go By, and Zu Warriors from the Magic Mountain expanded global interest beyond martial arts films; locally made gangster films, romantic dramas, and supernatural fantasies became popular.[272] Hong Kong cinema continued to be internationally successful over the following decade with critically acclaimed dramas such as Farewell My Concubine, To Live, and Chungking Express. The city's martial arts film roots are evident in the roles of the most prolific Hong Kong actors. Jackie Chan, Donnie Yen, Jet Li, Chow Yun-fat, and Michelle Yeoh frequently play action-oriented roles in foreign films. At the height of the local movie industry in the early 1990s, over 400 films were produced each year; since then, industry momentum shifted to mainland China. The number of films produced annually has declined to about 60 in 2017.[273]
Cantopop is a genre of Cantonese popular music which emerged in Hong Kong during the 1970s. Evolving from Shanghai-style shidaiqu, it is also influenced by Cantonese opera and Western pop.[274] Local media featured songs by artists such as Sam Hui, Anita Mui, Leslie Cheung, and Alan Tam; during the 1980s, exported films and shows exposed Cantopop to a global audience.[275] The genre's popularity peaked in the 1990s, when the Four Heavenly Kings dominated Asian record charts.[276] Despite a general decline since late in the decade,[277] Cantopop remains dominant in Hong Kong; contemporary artists such as Eason Chan, Joey Yung, and Twins are popular in and beyond the territory.[278]
Western classical music has historically had a strong presence in Hong Kong, and remains a large part of local musical education.[279] The publicly funded Hong Kong Philharmonic Orchestra, the territory's oldest professional symphony orchestra, frequently hosts musicians and conductors from overseas. The Hong Kong Chinese Orchestra, composed of classical Chinese instruments, is the leading Chinese ensemble and plays a significant role in promoting traditional music in the community.[280]
Despite its small area, the territory is home to a variety of sports and recreational facilities. The city has hosted a number of major sporting events, including the 2009 East Asian Games, the 2008 Summer Olympics equestrian events, and the 2007 Premier League Asia Trophy.[281] The territory regularly hosts the Hong Kong Sevens, Hong Kong Marathon, Hong Kong Tennis Classic and Lunar New Year Cup, and hosted the inaugural AFC Asian Cup and the 1995 Dynasty Cup.[282][283]
Hong Kong represents itself separately from mainland China, with its own sports teams in international competitions.[281] The territory has participated in almost every Summer Olympics since 1952, and has earned three medals. Lee Lai-shan won the territory's first and only Olympic gold medal at the 1996 Atlanta Olympics.[284] Hong Kong athletes have won 126 medals at the Paralympic Games and 17 at the Commonwealth Games. No longer part of the Commonwealth of Nations, the city's last appearance in the latter was in 1994.[285]
Dragon boat races originated as a religious ceremony conducted during the annual Tuen Ng Festival. The race was revived as a modern sport as part of the Tourism Board's efforts to promote Hong Kong's image abroad. The first modern competition was organised in 1976, and overseas teams began competing in the first international race in 1993.[286]
The Hong Kong Jockey Club, the territory's largest taxpayer,[287] has a monopoly on gambling and provides over seven per cent of government revenue.[288] Three forms of gambling are legal in Hong Kong: lotteries and betting on horse racing and football.[287]
Education in Hong Kong is largely modelled after that of the United Kingdom, particularly the English system.[289] Children are required to attend school from the age of six until completion of secondary education, generally at age 18.[290][291] At the end of secondary schooling, all students take a public examination and are awarded the Hong Kong Diploma of Secondary Education on successful completion.[292] Of residents aged 15 and older, 81.3 per cent completed lower-secondary education, 66.4 per cent graduated from an upper secondary school, 31.6 per cent attended a non-degree tertiary program, and 24 per cent earned a bachelor's degree or higher.[293] Mandatory education has contributed to an adult literacy rate of 95.7 per cent.[294] The rate is lower than that of other developed economies because of the influx of refugees from mainland China during the post-war colonial era; much of the elderly population was not formally educated due to war and poverty.[295][296]
Comprehensive schools fall under three categories: public schools, which are government-run; subsidised schools, including government aid-and-grant schools; and private schools, often run by religious organisations, which base admissions on academic merit. These schools are subject to the curriculum guidelines provided by the Education Bureau. Private schools subsidised under the Direct Subsidy Scheme and international schools fall outside of this system and may elect to use differing curricula and teach using other languages.[291]
The government maintains a policy of "mother tongue instruction"; most schools use Cantonese as the medium of instruction, with written education in both Chinese and English. Secondary schools emphasise "bi-literacy and tri-lingualism", which has encouraged the proliferation of spoken Mandarin language education.[297]
Hong Kong has eleven universities. The University of Hong Kong was founded as the city's first institute of higher education during the early colonial period in 1911.[298] The Chinese University of Hong Kong was established in 1963 to fill the need for a university that taught using Chinese as its primary language of instruction.[299] Along with the Hong Kong University of Science and Technology and City University of Hong Kong, these universities are ranked among the best in Asia.[300] The Hong Kong Polytechnic University,[301] Hong Kong Baptist University,[302] Lingnan University,[303] Education University of Hong Kong,[304] Open University of Hong Kong,[305] Hong Kong Shue Yan University and The Hang Seng University of Hong Kong were all established in subsequent years.[306]
Hong Kong's major English-language newspaper is the South China Morning Post, with The Standard serving as a business-oriented alternative. A variety of Chinese-language newspapers are published daily; the most prominent are Ming Pao, Oriental Daily News, and Apple Daily. Local publications are often politically affiliated, with pro-Beijing or pro-democracy sympathies. The central government has a print-media presence in the territory through the state-owned Ta Kung Pao and Wen Wei Po.[307] Several international publications have regional operations in Hong Kong, including The Wall Street Journal, The Financial Times, The New York Times International Edition, USA Today, Yomiuri Shimbun, and The Nikkei.[308]
Three free-to-air television broadcasters operate in the territory; TVB, HKTVE, and Hong Kong Open TV air three analogue and eight digital channels.[309] TVB, Hong Kong's dominant television network, has an 80 per cent viewer share.[310] Pay TV services operated by Cable TV Hong Kong and PCCW offer hundreds of additional channels and cater to a variety of audiences.[309] RTHK is the public broadcaster, providing seven radio channels and three television channels.[311] Ten non-domestic broadcasters air programming for the territory's foreign population.[309] Access to media and information over the Internet is not subject to mainland Chinese regulations, including the Great Firewall.[312]
Coordinates: 22°18′N 114°12′E / 22.3°N 114.2°E / 22.3; 114.2
|
en/2618.html.txt
ADDED
@@ -0,0 +1,155 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
|
2 |
+
|
3 |
+
|
4 |
+
|
5 |
+
Hong Kong (/ˌhɒŋˈkɒŋ/ (listen); Chinese: 香港, Cantonese: [hœ́ːŋ.kɔ̌ːŋ] (listen)), officially the Hong Kong Special Administrative Region of the People's Republic of China (HKSAR), is a metropolitan area and special administrative region of the People's Republic of China on the eastern Pearl River Delta of the South China Sea. With over 7.5 million people of various nationalities[d] in a 1,104-square-kilometre (426 sq mi) territory, Hong Kong is one of the most densely populated places in the world.
|
6 |
+
|
7 |
+
Hong Kong became a colony of the British Empire after the Qing Empire ceded Hong Kong Island at the end of the First Opium War in 1842.[16] The colony expanded to the Kowloon Peninsula in 1860 after the Second Opium War and was further extended when Britain obtained a 99-year lease of the New Territories in 1898.[17][18] The whole territory was transferred to China in 1997.[19] As a special administrative region, Hong Kong maintains separate governing and economic systems from that of mainland China under a principle of "one country, two systems".[20]
|
8 |
+
|
9 |
+
Originally a sparsely populated area of farming and fishing villages,[16] the territory has become one of the world's most significant financial centres and commercial ports.[21] It is the world's tenth-largest exporter and ninth-largest importer.[22][23] Hong Kong has a major capitalist service economy characterised by low taxation and free trade, and its currency, the Hong Kong dollar, is the eighth most traded currency in the world.[24] Hong Kong is home to the second-highest number of billionaires of any city in the world,[25] the highest number of billionaires of any city in Asia, and the largest concentration of ultra high-net-worth individuals of any city in the world.[26][27] Although the city has one of the highest per capita incomes in the world, severe income inequality exists among its residents.[28]
|
10 |
+
|
11 |
+
Hong Kong is a highly developed territory and ranks fourth on the UN Human Development Index.[29] It has the largest number of skyscrapers of any city in the world,[30] and its residents have some of the highest life expectancies in the world.[29] Its dense development has fostered an extensive transport network, with public transport rates exceeding 90 per cent.[31] Hong Kong ranks sixth in the Global Financial Centres Index and fourth in Asia, after Tokyo, Shanghai and Singapore.[32]
|
12 |
+
|
13 |
+
The name of the territory, first romanised as "He-Ong-Kong" in 1780,[35] originally referred to a small inlet located between Aberdeen Island and the southern coast of Hong Kong Island. Aberdeen was an initial point of contact between British sailors and local fishermen.[36] Although the source of the romanised name is unknown, it is generally believed to be an early phonetic rendering of the Cantonese pronunciation hēung góng. The name translates as "fragrant harbour" or "incense harbour".[33][34][37] "Fragrant" may refer to the sweet taste of the harbour's freshwater influx from the Pearl River or to the odour from incense factories lining the coast of northern Kowloon. The incense was stored near Aberdeen Harbour for export before Victoria Harbour developed.[37] Sir John Davis (the second colonial governor) offered an alternative origin; Davis said that the name derived from "Hoong-keang" ("red torrent"), reflecting the colour of soil over which a waterfall on the island flowed.[38]
|
14 |
+
|
15 |
+
The simplified name Hong Kong was frequently used by 1810.[39] The name was also commonly written as the single word Hongkong until 1926, when the government officially adopted the two-word name.[40] Some corporations founded during the early colonial era still keep this name, including Hongkong Land, Hongkong Electric Company, Hongkong and Shanghai Hotels and the Hongkong and Shanghai Banking Corporation (HSBC).[41][42]
|
16 |
+
|
17 |
+
The region is first known to have been occupied by humans during the Neolithic period, about 6,000 years ago.[43] However, stone tools excavated in 2003 at the Wong Tei Tung archaeological site have been dated by optical luminescence testing to between 35,000 and 39,000 years ago.[44]
|
18 |
+
|
19 |
+
Early Hong Kong settlers were a semi-coastal people[43] who migrated from inland and brought knowledge of rice cultivation.[45] The Qin dynasty incorporated the Hong Kong area into China for the first time in 214 BCE, after conquering the indigenous Baiyue.[46] The region was consolidated under the Nanyue kingdom (a predecessor state of Vietnam) after the Qin collapse[47] and recaptured by China after the Han conquest.[48] During the Mongol conquest of China in the 13th century, the Southern Song court was briefly located in modern-day Kowloon City (the Sung Wong Toi site) before its final defeat in the 1279 Battle of Yamen.[49] By the end of the Yuan dynasty, seven large families had settled in the region and owned most of the land. Settlers from nearby provinces migrated to Kowloon throughout the Ming dynasty.[50]
|
20 |
+
|
21 |
+
The earliest European visitor was Portuguese explorer Jorge Álvares, who arrived in 1513.[51][52] Portuguese merchants established a trading post called Tamão in Hong Kong waters and began regular trade with southern China. Although the traders were expelled after military clashes in the 1520s,[53] Portuguese-Chinese trade relations were re-established by 1549. Portugal acquired a permanent lease for Macau in 1557.[54]
|
22 |
+
|
23 |
+
After the Qing conquest, maritime trade was banned under the Haijin policies. The Kangxi Emperor lifted the prohibition, allowing foreigners to enter Chinese ports in 1684.[55] Qing authorities established the Canton System in 1757 to regulate trade more strictly, restricting non-Russian ships to the port of Canton.[56] Although European demand for Chinese commodities like tea, silk, and porcelain was high, Chinese interest in European manufactured goods was insignificant, so that Chinese goods could only be bought with precious metals. To reduce the trade imbalance, the British sold large amounts of Indian opium to China. Faced with a drug crisis, Qing officials pursued ever more aggressive actions to halt the opium trade.[57]
|
24 |
+
|
25 |
+
In 1839, the Daoguang Emperor rejected proposals to legalise and tax opium and ordered imperial commissioner Lin Zexu to eradicate the opium trade. The commissioner destroyed opium stockpiles and halted all foreign trade,[58] triggering a British military response and the First Opium War. The Qing surrendered early in the war and ceded Hong Kong Island in the Convention of Chuenpi. However, both countries were dissatisfied and did not ratify the agreement.[59] After more than a year of further hostilities, Hong Kong Island was formally ceded to the United Kingdom in the 1842 Treaty of Nanking.[60]
|
26 |
+
|
27 |
+
Administrative infrastructure was quickly built by early 1842, but piracy, disease, and hostile Qing policies initially prevented the government from attracting commerce. Conditions on the island improved during the Taiping Rebellion in the 1850s, when many Chinese refugees, including wealthy merchants, fled mainland turbulence and settled in the colony.[16] Further tensions between the British and Qing over the opium trade escalated into the Second Opium War. The Qing were again defeated and forced to give up Kowloon Peninsula and Stonecutters Island in the Convention of Peking.[17] By the end of this war, Hong Kong had evolved from a transient colonial outpost into a major entrepôt. Rapid economic improvement during the 1850s attracted foreign investment, as potential stakeholders became more confident in Hong Kong's future.[61]
|
28 |
+
|
29 |
+
The colony was further expanded in 1898 when Britain obtained a 99-year lease of the New Territories.[18] The University of Hong Kong was established in 1911 as the territory's first institution of higher education.[62] Kai Tak Airport began operation in 1924, and the colony avoided a prolonged economic downturn after the 1925–26 Canton–Hong Kong strike.[63][64] At the start of the Second Sino-Japanese War in 1937, Governor Geoffry Northcote declared Hong Kong a neutral zone to safeguard its status as a free port.[65] The colonial government prepared for a possible attack, evacuating all British women and children in 1940.[66] The Imperial Japanese Army attacked Hong Kong on 8 December 1941, the same morning as its attack on Pearl Harbor.[67] Hong Kong was occupied by Japan for almost four years before Britain resumed control on 30 August 1945.[68]
|
30 |
+
|
31 |
+
Its population rebounded quickly after the war, as skilled Chinese migrants fled from the Chinese Civil War, and more refugees crossed the border when the Communist Party took control of mainland China in 1949.[69] Hong Kong became the first of the Four Asian Tiger economies to industrialise during the 1950s.[70] With a rapidly increasing population, the colonial government began reforms to improve infrastructure and public services. The public-housing estate programme, Independent Commission Against Corruption, and Mass Transit Railway were all established during the post-war decades to provide safer housing, integrity in the civil service, and more-reliable transportation.[71][72] Although the territory's competitiveness in manufacturing gradually declined because of rising labour and property costs, it transitioned to a service-based economy. By the early 1990s, Hong Kong had established itself as a global financial centre and shipping hub.[73]
|
32 |
+
The colony faced an uncertain future as the end of the New Territories lease approached, and Governor Murray MacLehose raised the question of Hong Kong's status with Deng Xiaoping in 1979.[74] Diplomatic negotiations with China resulted in the 1984 Sino-British Joint Declaration, under which the United Kingdom agreed to transfer the colony in 1997 and China agreed to guarantee Hong Kong's economic and political systems for 50 years after the transfer.[75] The impending transfer triggered a wave of mass emigration as residents feared an erosion of civil rights, the rule of law, and quality of life.[76] Over half a million people left the territory during the peak migration period, from 1987 to 1996.[77] Hong Kong was transferred to China on 1 July 1997, after 156 years of British rule.[19]
|
33 |
+
|
34 |
+
Immediately after the transfer, Hong Kong was severely affected by several crises. The government was forced to use substantial foreign exchange reserves to maintain the Hong Kong dollar's currency peg during the 1997 Asian financial crisis,[69] and the recovery from this was muted by an H5N1 avian-flu outbreak[78] and a housing surplus.[79] This was followed by the 2003 SARS epidemic, during which the territory experienced its most serious economic downturn.[80]
|
35 |
+
|
36 |
+
Political debates after the transfer of sovereignty have centred around the region's democratic development and the central government's adherence to the "one country, two systems" principle. After the reversal of the last colonial-era Legislative Council democratic reforms following the handover,[81] the regional government unsuccessfully attempted to enact national security legislation pursuant to Article 23 of the Basic Law.[82] The central government's decision to implement nominee pre-screening before allowing Chief Executive elections triggered a series of protests in 2014 which became known as the Umbrella Revolution.[83] Discrepancies in the electoral registry and the disqualification of elected legislators after the 2016 Legislative Council elections,[84][85][86] along with the enforcement of national law at the West Kowloon high-speed railway station, raised further concerns about the region's autonomy.[87] In June 2019, large protests again erupted in response to a proposed extradition amendment bill permitting the extradition of fugitives to mainland China. The protests continued into December, possibly becoming the largest-scale political protest movement in Hong Kong history,[88] with organisers claiming to have attracted more than one million Hong Kong residents.[89]
|
37 |
+
|
38 |
+
Hong Kong is a special administrative region of China, with executive, legislative, and judicial powers devolved from the national government.[90] The Sino-British Joint Declaration provided for economic and administrative continuity through the transfer of sovereignty,[75] resulting in an executive-led governing system largely inherited from the territory's history as a British colony.[91] Under these terms and the "one country, two systems" principle, the Basic Law of Hong Kong is the regional constitution.[92]
|
39 |
+
|
40 |
+
The regional government is composed of three branches: executive, legislative, and judicial.
|
41 |
+
|
42 |
+
The Chief Executive is the head of government and serves for a maximum of two five-year terms. The State Council (led by the Premier of China) appoints the Chief Executive after nomination by the Election Committee, which is composed of 1,200 business, community, and government leaders.[100][101][102]
|
43 |
+
|
44 |
+
The Legislative Council has 70 members, each serving a four-year term.[103] Thirty-five are directly elected from geographical constituencies and 35 represent functional constituencies (FC). Thirty FC councillors are selected from limited electorates representing sectors of the economy or special interest groups,[104] and the remaining five members are nominated from sitting District Council members and selected in region-wide double direct elections.[105] All popularly elected members are chosen by proportional representation. The 30 limited electorate functional constituencies fill their seats using first-past-the-post or instant-runoff voting.[104]
|
45 |
+
|
46 |
+
Twenty-two political parties had representatives elected to the Legislative Council in the 2016 election.[106] These parties have aligned themselves into three ideological groups: the pro-Beijing camp (the current government), the pro-democracy camp, and localist groups.[107] The Communist Party does not have an official political presence in Hong Kong, and its members do not run in local elections.[108] Hong Kong is represented in the National People's Congress by 36 deputies chosen through an electoral college, and 203 delegates in the Chinese People's Political Consultative Conference appointed by the central government.[7]
|
47 |
+
|
48 |
+
Chinese national law does not generally apply in the region and Hong Kong is treated as a separate jurisdiction.[98] Its judicial system is based on common law, continuing the legal tradition established during British rule.[109] Local courts may refer to precedents set in English law and overseas jurisprudence.[110] However, mainland criminal procedure law applies to cases investigated by the Office for Safeguarding National Security of the CPG in the HKSAR.[111] Interpretative and amending power over the Basic Law and jurisdiction over acts of state lie with the central authority, making regional courts ultimately subordinate to the mainland's socialist civil law system.[112] Decisions made by the Standing Committee of the National People's Congress override any territorial judicial process.[113] Furthermore, in circumstances where the Standing Committee declares a state of emergency in Hong Kong, the State Council may enforce national law in the region.[114]
|
49 |
+
|
50 |
+
The territory's jurisdictional independence is most apparent in its immigration and taxation policies. The Immigration Department issues passports for permanent residents which differ from those of the mainland or Macau,[115] and the region maintains a regulated border with the rest of the country. All travellers between Hong Kong and either the mainland or Macau must pass through border controls, regardless of nationality.[116] Mainland Chinese citizens do not have right of abode in Hong Kong and are subject to immigration controls.[117] Public finances are handled separately from the national government; taxes levied in Hong Kong do not fund the central authority.[118][119]
|
51 |
+
|
52 |
+
The Hong Kong Garrison of the People's Liberation Army is responsible for the region's defence.[120] Although the Chairman of the Central Military Commission is supreme commander of the armed forces,[121] the regional government may request assistance from the garrison.[122] Hong Kong residents are not required to perform military service, and current law has no provision for local enlistment, so the territory's defence is composed entirely of non-Hongkongers.[123]
|
53 |
+
|
54 |
+
The central government and Ministry of Foreign Affairs handle diplomatic matters, but Hong Kong retains the ability to maintain separate economic and cultural relations with foreign nations.[124] The territory actively participates in the World Trade Organization, the Asia-Pacific Economic Cooperation forum, the International Olympic Committee, and many United Nations agencies.[125][126][127] The regional government maintains trade offices in Greater China and other nations.[128]
|
55 |
+
|
56 |
+
The territory is divided into 18 districts, each represented by a district council. These advise the government on local issues such as public facility provisioning, community programme maintenance, cultural promotion, and environmental policy. There are a total of 479 district council seats, 452 of which are directly elected.[129] Rural committee chairmen, representing outlying villages and towns, fill the 27 non-elected seats.[130]
|
57 |
+
|
58 |
+
Hong Kong is governed by a hybrid regime that is not fully representative of the population. Legislative Council members elected by functional constituencies composed of professional and special interest groups are accountable to those narrow corporate electorates and not the general public. This electoral arrangement has guaranteed a pro-establishment majority in the legislature since the transfer of sovereignty. Similarly, the Chief Executive is selected by establishment politicians and corporate members of the Election Committee rather than directly elected.[131] Although universal suffrage for the Chief Executive and all Legislative Council elections is a defined goal of Basic Law Articles 45 and 68,[132] the legislature is only partially directly elected and the executive continues to be nominated by an unrepresentative body.[131] The government has been repeatedly petitioned to introduce direct elections for these positions.[133][134]
|
59 |
+
|
60 |
+
Ethnic minorities (except those of European ancestry) have marginal representation in government and often experience discrimination in housing, education, and employment.[135][136] Employment vacancies and public service appointments frequently have language requirements which minority job seekers do not meet, and language education resources remain inadequate for Chinese learners.[137][138] Foreign domestic helpers, predominantly women from the Philippines and Indonesia, have little protection under regional law. Although they live and work in Hong Kong, these workers are not treated as ordinary residents and are ineligible for right of abode in the territory.[139] Sex trafficking is also an issue: Hongkonger and foreign women and girls have been forced into prostitution in brothels, homes, and businesses in the city.[140][141][142][143]
|
61 |
+
|
62 |
+
The Joint Declaration guarantees the Basic Law for 50 years after the transfer of sovereignty.[75] It does not specify how Hong Kong will be governed after 2047, and the central government's role in determining the territory's future system of government is the subject of political debate and speculation. Hong Kong's political and judicial systems may be integrated with China's at that time, or the territory may continue to be administered separately.[144][145]
|
63 |
+
|
64 |
+
In 2020, in a period of large-scale protests, the Standing Committee of the National People's Congress passed the controversial Hong Kong national security law.[146] The law criminalises acts that were previously considered protected speech under Hong Kong law and establishes the Office for Safeguarding National Security of the CPG in the HKSAR, an investigative office under Central People's Government authority immune from HKSAR jurisdiction.[111][147] The United Kingdom considers the law to be a serious violation of the Joint Declaration.[148]
|
65 |
+
|
66 |
+
Hong Kong is on China's southern coast, 60 km (37 mi) east of Macau, on the east side of the mouth of the Pearl River estuary. It is surrounded by the South China Sea on all sides except the north, which neighbours the Guangdong city of Shenzhen along the Sham Chun River. The territory's 2,755 km2 (1,064 sq mi) area consists of Hong Kong Island, the Kowloon Peninsula, the New Territories, Lantau Island, and over 200 other islands. Of the total area, 1,073 km2 (414 sq mi) is land and 35 km2 (14 sq mi) is water.[29] The territory's highest point is Tai Mo Shan, 957 metres (3,140 ft) above sea level.[149] Urban development is concentrated on the Kowloon Peninsula, Hong Kong Island, and in new towns throughout the New Territories.[150] Much of this is built on reclaimed land; 70 km2 (27 sq mi) (six per cent of the total land or about 25 per cent of developed space in the territory) is reclaimed from the sea.[151]
|
67 |
+
|
68 |
+
Undeveloped terrain is hilly to mountainous, with very little flat land, and consists mostly of grassland, woodland, shrubland, or farmland.[152][153] About 40 per cent of the remaining land area is country parks and nature reserves.[154] The territory has a diverse ecosystem; over 3,000 species of vascular plants occur in the region (300 of which are native to Hong Kong), along with thousands of insect, avian, and marine species.[155][156]
|
69 |
+
|
70 |
+
Hong Kong has a humid subtropical climate (Köppen Cwa), characteristic of southern China. Summer is hot and humid, with occasional showers and thunderstorms and warm air from the southwest. Typhoons occur most often then, sometimes resulting in floods or landslides. Winters are mild and usually sunny at the beginning, becoming cloudy towards February; an occasional cold front brings strong, cooling winds from the north. The most temperate seasons are spring and autumn, which are generally sunny and dry.[157] When there is snowfall, which is extremely rare, it is usually at high elevations. Hong Kong averages 1,709 hours of sunshine per year;[158] the highest and lowest recorded temperatures at the Hong Kong Observatory are 36.6 °C (97.9 °F) on 22 August 2017 and 0.0 °C (32.0 °F) on 18 January 1893.[159] The highest and lowest recorded temperatures in all of Hong Kong are 39.0 °C (102 °F) at Wetland Park on 22 August 2017,[160] and −6.0 °C (21.2 °F) at Tai Mo Shan on 24 January 2016.
|
71 |
+
|
72 |
+
Hong Kong has the world's largest number of skyscrapers, with 317 towers taller than 150 metres (490 ft),[30] and the third-largest number of high-rise buildings.[163] The lack of available space restricted development to high-density residential tenements and commercial complexes packed closely together on buildable land.[164] Single-family detached homes are extremely rare and generally only found in outlying areas.[165]
|
73 |
+
|
74 |
+
The International Commerce Centre and Two International Finance Centre are the tallest buildings in Hong Kong and among the tallest in the Asia-Pacific region.[166] Other distinctive buildings lining the Hong Kong Island skyline include the HSBC Main Building, the anemometer-topped triangular Central Plaza, the circular Hopewell Centre, and the sharp-edged Bank of China Tower.[167][168]
|
75 |
+
|
76 |
+
Demand for new construction has contributed to frequent demolition of older buildings, freeing space for modern high-rises.[169] However, many examples of European and Lingnan architecture are still found throughout the territory. Older government buildings are examples of colonial architecture. The 1846 Flagstaff House, the former residence of the commanding British military officer, is the oldest Western-style building in Hong Kong.[170] Some (including the Court of Final Appeal Building and the Hong Kong Observatory) retain their original function, and others have been adapted and reused; the Former Marine Police Headquarters was redeveloped into a commercial and retail complex,[171] and Béthanie (built in 1875 as a sanatorium) houses the Hong Kong Academy for Performing Arts.[172] The Tin Hau Temple, dedicated to the sea goddess Mazu (originally built in 1012 and rebuilt in 1266), is the territory's oldest existing structure.[173] The Ping Shan Heritage Trail has architectural examples of several imperial Chinese dynasties, including the Tsui Sing Lau Pagoda (Hong Kong's only remaining pagoda).[174]
|
77 |
+
|
78 |
+
Tong lau, mixed-use tenement buildings constructed during the colonial era, blended southern Chinese architectural styles with European influences. These were especially prolific during the immediate post-war period, when many were rapidly built to house large numbers of Chinese migrants.[175] Examples include Lui Seng Chun, the Blue House in Wan Chai, and the Shanghai Street shophouses in Mong Kok. Mass-produced public-housing estates, built since the 1960s, are mainly constructed in modernist style.[176]
|
79 |
+
|
80 |
+
The Census and Statistics Department estimated Hong Kong's population at 7,482,500 in mid-2019.[177] The overwhelming majority (92 per cent) is Han Chinese,[6] most of whom are Taishanese, Teochew, Hakka, and a number of other Cantonese peoples.[178][179][180] The remaining eight per cent are non-ethnic Chinese minorities, primarily Filipinos, Indonesians, and South Asians.[6][181] About half the population have some form of British nationality, a legacy of colonial rule; 3.4 million residents have British National (Overseas) status, and 260,000 British citizens live in the territory.[182] The vast majority also hold Chinese nationality, automatically granted to all ethnic Chinese residents at the transfer of sovereignty.[183] The headline population density of about 6,800 people per square kilometre understates how densely people actually live: only 6.9 per cent of the land is residential, so the average density over residential land is closer to a highly cramped 100,000 people per square kilometre.
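A rough arithmetic sketch of those two density figures (a back-of-the-envelope check, not from the cited sources, using the mid-2019 population and the 1,104-square-kilometre territory figure quoted in the lead; a 6.9 per cent residential share implies roughly 76 km² of residential land):

\[
\frac{7{,}482{,}500}{1{,}104\ \mathrm{km^2}} \approx 6{,}800\ \mathrm{people/km^2},
\qquad
\frac{7{,}482{,}500}{0.069 \times 1{,}104\ \mathrm{km^2}} \approx 98{,}000\ \mathrm{people/km^2}.
\]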
|
81 |
+
|
82 |
+
The predominant language is Cantonese, a variety of Chinese originating in Guangdong. It is spoken by 94.6 per cent of the population, 88.9 per cent as a first language and 5.7 per cent as a second language.[3] Slightly over half the population (53.2 per cent) speaks English, the other official language;[2] 4.3 per cent are native speakers, and 48.9 per cent speak English as a second language.[3] Code-switching, mixing English and Cantonese in informal conversation, is common among the bilingual population.[184] Post-handover governments have promoted Mandarin, which is currently about as prevalent as English; 48.6 per cent of the population speaks Mandarin, with 1.9 per cent native speakers and 46.7 per cent speaking it as a second language.[3] Traditional Chinese characters are used in writing, rather than the simplified characters used on the mainland.[185]
|
83 |
+
|
84 |
+
Among the religious population, the traditional "three teachings" of China, Buddhism, Confucianism, and Taoism, have the most adherents (20 per cent), followed by Christianity (12 per cent) and Islam (4 per cent).[186] Followers of other religions, including Sikhism, Hinduism, Judaism, and the Bahá'í Faith, generally originate from regions where their religion predominates.[186]
|
85 |
+
|
86 |
+
Life expectancy in Hong Kong was 82.2 years for males and 87.6 years for females in 2018,[177] the sixth-highest in the world.[187] Cancer, pneumonia, heart disease, cerebrovascular disease, and accidents are the territory's five leading causes of death.[188] The universal public healthcare system is funded by general-tax revenue, and treatment is highly subsidised; on average, 95 per cent of healthcare costs are covered by the government.[189]
|
87 |
+
|
88 |
+
Income inequality has risen since the transfer of sovereignty, as the region's ageing population has gradually added to the number of nonworking people.[190] Although median household income steadily increased during the decade to 2016, the wage gap remained high;[191] the 90th percentile of earners receive 41 per cent of all income.[191] The city has the most billionaires per capita, with one billionaire per 109,657 people.[192] Despite government efforts to reduce the growing disparity,[193] median income for the top 10 per cent of earners is 44 times that of the bottom 10 per cent.[194][195]
|
89 |
+
|
90 |
+
Hong Kong has a capitalist mixed service economy, characterised by low taxation, minimal government market intervention, and an established international financial market.[196] It is the world's 35th-largest economy, with a nominal GDP of approximately US$373 billion.[11] Although Hong Kong's economy has ranked at the top of the Heritage Foundation's economic freedom index since 1995,[197][198] the territory has a relatively high level of income disparity.[12] The Hong Kong Stock Exchange is the seventh-largest in the world, with a market capitalisation of HK$30.4 trillion (US$3.87 trillion) as of December 2018[update].[199]
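As a back-of-the-envelope check on the two market-capitalisation figures, the conversion is consistent with the Hong Kong dollar's linked exchange rate, which is held within an official band of HK$7.75–7.85 per US dollar:

\[
\frac{\mathrm{HK\$}\ 30.4\ \text{trillion}}{7.85\ \mathrm{HK\$/US\$}} \approx \mathrm{US\$}\ 3.87\ \text{trillion}.
\]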
|
91 |
+
|
92 |
+
Hong Kong is the tenth-largest trading entity in exports and imports (2017), trading more goods in value than its gross domestic product.[22][23] Over half of its cargo throughput consists of transshipments (goods travelling through Hong Kong). Products from mainland China account for about 40 per cent of that traffic.[200] The city's location allowed it to establish a transportation and logistics infrastructure which includes the world's seventh-busiest container port[201] and the busiest airport for international cargo.[202] The territory's largest export markets are mainland China and the United States.[29]
|
93 |
+
|
94 |
+
It has little arable land and few natural resources, importing most of its food and raw materials. More than 90 per cent of Hong Kong's food is imported, including nearly all its meat and rice.[203] Agricultural activity accounts for 0.1 per cent of GDP and consists of growing premium food and flower varieties.[204]
|
95 |
+
|
96 |
+
Although the territory had one of Asia's largest manufacturing economies during the latter half of the colonial era, Hong Kong's economy is now dominated by the service sector. The sector generates 92.7 per cent of economic output, with the public sector accounting for about 10 per cent.[205] Between 1961 and 1997 Hong Kong's gross domestic product increased by a factor of 180, and per capita GDP increased by a factor of 87.[206][207] The territory's GDP relative to mainland China's peaked at 27 per cent in 1993; it fell to less than three per cent in 2017, as the mainland developed and liberalised its economy.[208]
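Those multiples imply very high sustained growth; expressed as compound annual rates (a back-of-the-envelope derivation, not a figure given in the cited sources), the increase over the 36 years from 1961 to 1997 works out to:

\[
180^{1/36} - 1 \approx 15.5\%\ \text{per year (total GDP)},
\qquad
87^{1/36} - 1 \approx 13.2\%\ \text{per year (per capita)}.
\]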
|
97 |
+
Economic and infrastructure integration with China has increased significantly since the 1978 start of market liberalisation on the mainland. Since the resumption of cross-boundary train service in 1979, many rail and road links have been improved and constructed (facilitating trade between regions).[209][210] The Closer Economic Partnership Arrangement formalised a policy of free trade between the two areas, with each jurisdiction pledging to remove remaining obstacles to trade and cross-boundary investment.[211] A similar economic partnership with Macau details the liberalisation of trade between the special administrative regions.[212] Chinese companies have expanded their economic presence in the territory since the transfer of sovereignty. Mainland firms represent over half of the Hang Seng Index value, up from five per cent in 1997.[213][214]
|
98 |
+
|
99 |
+
As the mainland liberalised its economy, Hong Kong's shipping industry faced intense competition from other Chinese ports. Fifty per cent of China's trade goods were routed through Hong Kong in 1997, dropping to about 13 per cent by 2015.[215] The territory's minimal taxation, common law system, and civil service attract overseas corporations wishing to establish a presence in Asia.[215] The city has the second-highest number of corporate headquarters in the Asia-Pacific region.[216] Hong Kong is a gateway for foreign direct investment in China, giving investors open access to mainland Chinese markets through direct links with the Shanghai and Shenzhen stock exchanges. The territory was the first market outside mainland China for renminbi-denominated bonds, and is one of the largest hubs for offshore renminbi trading.[217]
|
100 |
+
|
101 |
+
The government has had a passive role in the economy. Colonial governments had little industrial policy, and implemented almost no trade controls. Under the doctrine of "positive non-interventionism", post-war administrations deliberately avoided the direct allocation of resources; active intervention was considered detrimental to economic growth.[218] While the economy transitioned to a service basis during the 1980s,[218] late colonial governments introduced interventionist policies. Post-handover administrations continued and expanded these programmes, including export-credit guarantees, a compulsory pension scheme, a minimum wage, anti-discrimination laws, and a state mortgage backer.[219]
|
102 |
+
|
103 |
+
Tourism is a major part of the economy, accounting for five per cent of GDP.[171] In 2016, 26.6 million visitors contributed HK$258 billion (US$32.9 billion) to the territory, making Hong Kong the 14th most popular destination for international tourists. It is the most popular Chinese city for tourists, receiving over 70 per cent more visitors than its closest competitor (Macau).[220] The city is ranked as one of the most expensive cities for expatriates.[221][222]
|
104 |
+
|
105 |
+
Hong Kong has a highly developed, sophisticated transport network. Over 90 per cent of daily trips are made on public transport, the highest percentage in the world.[31] The Octopus card, a contactless smart payment card, is widely accepted on railways, buses and ferries, and can be used for payment in most retail stores.[223]
|
106 |
+
|
107 |
+
The Mass Transit Railway (MTR) is an extensive passenger rail network, connecting 93 metro stations throughout the territory.[224] With a daily ridership of over five million, the system serves 41 per cent of all public transit passengers in the city[225] and has an on-time rate of 99.9 per cent.[226] Cross-boundary train service to Shenzhen is offered by the East Rail line, and longer-distance inter-city trains to Guangzhou, Shanghai, and Beijing are operated from Hung Hom Station.[227] Connecting service to the national high-speed rail system is provided at West Kowloon railway station.[228]
|
108 |
+
|
109 |
+
Although public transport systems handle most passenger traffic, there are over 500,000 private vehicles registered in Hong Kong.[229] Automobiles drive on the left (unlike in mainland China), due to historical influence of the British Empire.[230] Vehicle traffic is extremely congested in urban areas, exacerbated by limited space to expand roads and an increasing number of vehicles.[231] More than 18,000 taxicabs, easily identifiable by their bright colour, are licensed to carry riders in the territory.[232] Bus services operate more than 700 routes across the territory,[225] with smaller public light buses (also known as minibuses) serving areas standard buses do not reach as frequently or directly.[233] Highways, organised with the Hong Kong Strategic Route and Exit Number System, connect all major areas of the territory.[234] The Hong Kong–Zhuhai–Macau Bridge provides a direct route to the western side of the Pearl River estuary.[210]
|
110 |
+
|
111 |
+
Hong Kong International Airport is the territory's primary airport. Over 100 airlines operate flights from the airport, including locally based Cathay Pacific (flag carrier), Hong Kong Airlines, regional carrier Cathay Dragon, low-cost airline HK Express and cargo airline Air Hong Kong.[235] It is the eighth-busiest airport by passenger traffic,[236] and handles the most air-cargo traffic in the world.[237] Most private recreational aviation traffic flies through Shek Kong Airfield, under the supervision of the Hong Kong Aviation Club.[238]
|
112 |
+
|
113 |
+
The Star Ferry operates two lines across Victoria Harbour for its 53,000 daily passengers.[239] Ferries also serve outlying islands inaccessible by other means. Smaller kai-to boats serve the most remote coastal settlements.[240] Ferry travel to Macau and mainland China is also available.[241] Junks, once common in Hong Kong waters, are no longer widely available and are used privately and for tourism.[242]
|
114 |
+
|
115 |
+
The Peak Tram, Hong Kong's first public transport system, has provided funicular rail transport between Central and Victoria Peak since 1888.[243] The Central and Western District has an extensive system of escalators and moving pavements, including the Mid-Levels escalator (the world's longest outdoor covered escalator system).[244] Hong Kong Tramways covers a portion of Hong Kong Island. The MTR operates its Light Rail system, serving the northwestern New Territories.[224]
|
116 |
+
|
117 |
+
Hong Kong generates most of its electricity locally.[245] The vast majority of this energy comes from fossil fuels, with 46 per cent from coal and 47 per cent from petroleum.[246] The rest is from other imports, including nuclear energy generated on the mainland.[247] Renewable sources account for a negligible amount of energy generated for the territory.[248] Small-scale wind-power sources have been developed,[245] and a small number of private homes have installed solar panels.[249]
|
118 |
+
|
119 |
+
With few natural lakes and rivers, high population density, inaccessible groundwater sources, and extremely seasonal rainfall, the territory does not have a reliable source of freshwater. The Dongjiang River in Guangdong supplies 70 per cent of the city's water,[250] and the remaining demand is filled by harvesting rainwater.[251] Toilets flush with seawater, greatly reducing freshwater use.[250]
|
120 |
+
|
121 |
+
Broadband Internet access is widely available, with 92.6 per cent of households connected. Connections over fibre-optic infrastructure are increasingly prevalent,[252] contributing to the high regional average connection speed of 21.9 Mbit/s (the world's fourth-fastest).[253] Mobile-phone use is ubiquitous;[254] there are more than 18 million mobile-phone accounts,[255] more than double the territory's population.[177]
|
122 |
+
|
123 |
+
Hong Kong is characterised as a hybrid of East and West. Traditional Chinese values emphasising family and education blend with Western ideals, including economic liberty and the rule of law.[256] Although the vast majority of the population is ethnically Chinese, Hong Kong has developed a distinct identity. The territory diverged from the mainland due to its long period of colonial administration and a different pace of economic, social, and cultural development. Mainstream culture is derived from immigrants originating from various parts of China. This was influenced by British-style education, a separate political system, and the territory's rapid development during the late 20th century.[257][258] Most migrants of that era fled poverty and war, reflected in the prevailing attitude toward wealth; Hongkongers tend to link self-image and decision-making to material benefits.[259][260] Residents' sense of local identity has markedly increased post-handover: 53 per cent of the population identify as "Hongkongers", while 11 per cent describe themselves as "Chinese". The remaining population report mixed identities, 23 per cent as "Hongkonger in China" and 12 per cent as "Chinese in Hong Kong".[261]
|
124 |
+
|
125 |
+
Traditional Chinese family values, including family honour, filial piety, and a preference for sons, are prevalent.[262] Nuclear families are the most common households, although multi-generational and extended families are not unusual.[263] Spiritual concepts such as feng shui are observed; large-scale construction projects often hire consultants to ensure proper building positioning and layout, and a business's success is believed to depend on its degree of adherence to feng shui.[167] Bagua mirrors are regularly used to deflect evil spirits,[264] and buildings often lack floor numbers with a 4;[265] the number has a similar sound to the word for "die" in Cantonese.[266]
|
126 |
+
|
127 |
+
Food in Hong Kong is primarily based on Cantonese cuisine, despite the territory's exposure to foreign influences and its residents' varied origins. Rice is the staple food, and is usually served plain with other dishes.[267] Freshness of ingredients is emphasised. Poultry and seafood are commonly sold live at wet markets, and ingredients are used as quickly as possible.[268] There are five daily meals: breakfast, lunch, afternoon tea, dinner, and siu yeh.[269] Dim sum, as part of yum cha (brunch), is a dining-out tradition with family and friends. Dishes include congee, cha siu bao, siu yuk, egg tarts, and mango pudding. Local versions of Western food are served at cha chaan teng (fast, casual restaurants). Common cha chaan teng menu items include macaroni in soup, deep-fried French toast, and Hong Kong-style milk tea.[267]
|
128 |
+
|
129 |
+
Hong Kong developed into a filmmaking hub during the late 1940s as a wave of Shanghai filmmakers migrated to the territory, and these movie veterans helped rebuild the colony's entertainment industry over the next decade.[270] By the 1960s, the city was well known to overseas audiences through films such as The World of Suzie Wong.[271] When Bruce Lee's Way of the Dragon was released in 1972, local productions became popular outside Hong Kong. During the 1980s, films such as A Better Tomorrow, As Tears Go By, and Zu Warriors from the Magic Mountain expanded global interest beyond martial arts films; locally made gangster films, romantic dramas, and supernatural fantasies became popular.[272] Hong Kong cinema continued to be internationally successful over the following decade with critically acclaimed dramas such as Farewell My Concubine, To Live, and Chungking Express. The city's martial arts film roots are evident in the roles of the most prolific Hong Kong actors. Jackie Chan, Donnie Yen, Jet Li, Chow Yun-fat, and Michelle Yeoh frequently play action-oriented roles in foreign films. At the height of the local movie industry in the early 1990s, over 400 films were produced each year; since then, industry momentum shifted to mainland China. The number of films produced annually has declined to about 60 in 2017.[273]
|
130 |
+
|
131 |
+
Cantopop is a genre of Cantonese popular music which emerged in Hong Kong during the 1970s. Evolving from Shanghai-style shidaiqu, it is also influenced by Cantonese opera and Western pop.[274] Local media featured songs by artists such as Sam Hui, Anita Mui, Leslie Cheung, and Alan Tam; during the 1980s, exported films and shows exposed Cantopop to a global audience.[275] The genre's popularity peaked in the 1990s, when the Four Heavenly Kings dominated Asian record charts.[276] Despite a general decline since late in the decade,[277] Cantopop remains dominant in Hong Kong; contemporary artists such as Eason Chan, Joey Yung, and Twins are popular in and beyond the territory.[278]
|
132 |
+
|
133 |
+
Western classical music has historically had a strong presence in Hong Kong, and remains a large part of local musical education.[279] The publicly funded Hong Kong Philharmonic Orchestra, the territory's oldest professional symphony orchestra, frequently hosts musicians and conductors from overseas. The Hong Kong Chinese Orchestra, composed of classical Chinese instruments, is the leading Chinese ensemble and plays a significant role in promoting traditional music in the community.[280]
|
134 |
+
|
135 |
+
Despite its small area, the territory is home to a variety of sports and recreational facilities. The city has hosted a number of major sporting events, including the 2009 East Asian Games, the 2008 Summer Olympics equestrian events, and the 2007 Premier League Asia Trophy.[281] The territory regularly hosts the Hong Kong Sevens, Hong Kong Marathon, Hong Kong Tennis Classic and Lunar New Year Cup, and hosted the inaugural AFC Asian Cup and the 1995 Dynasty Cup.[282][283]
|
136 |
+
|
137 |
+
Hong Kong represents itself separately from mainland China, with its own sports teams in international competitions.[281] The territory has participated in almost every Summer Olympics since 1952, and has earned three medals. Lee Lai-shan won the territory's first and only Olympic gold medal at the 1996 Atlanta Olympics.[284] Hong Kong athletes have won 126 medals at the Paralympic Games and 17 at the Commonwealth Games. The city, no longer part of the Commonwealth of Nations, made its last appearance in the Commonwealth Games in 1994.[285]
|
138 |
+
|
139 |
+
Dragon boat races originated as a religious ceremony conducted during the annual Tuen Ng Festival. The race was revived as a modern sport as part of the Tourism Board's efforts to promote Hong Kong's image abroad. The first modern competition was organised in 1976, and overseas teams began competing in the first international race in 1993.[286]
|
140 |
+
|
141 |
+
The Hong Kong Jockey Club, the territory's largest taxpayer,[287] has a monopoly on gambling and provides over seven per cent of government revenue.[288] Three forms of gambling are legal in Hong Kong: lotteries and betting on horse racing and football.[287]
|
142 |
+
|
143 |
+
Education in Hong Kong is largely modelled after that of the United Kingdom, particularly the English system.[289] Children are required to attend school from the age of six until completion of secondary education, generally at age 18.[290][291] At the end of secondary schooling, all students take a public examination and are awarded the Hong Kong Diploma of Secondary Education on successful completion.[292] Of residents aged 15 and older, 81.3 per cent completed lower-secondary education, 66.4 per cent graduated from an upper secondary school, 31.6 per cent attended a non-degree tertiary program, and 24 per cent earned a bachelor's degree or higher.[293] Mandatory education has contributed to an adult literacy rate of 95.7 per cent.[294] The rate is lower than that of other developed economies because of the influx of refugees from mainland China during the post-war colonial era; much of the elderly population was not formally educated due to war and poverty.[295][296]
|
144 |
+
|
145 |
+
Comprehensive schools fall under three categories: public schools, which are government-run; subsidised schools, including government aid-and-grant schools; and private schools, often run by religious organisations, which base admissions on academic merit. These schools are subject to the curriculum guidelines provided by the Education Bureau. Private schools subsidised under the Direct Subsidy Scheme and international schools fall outside of this system and may elect to use differing curricula and teach using other languages.[291]
|
146 |
+
|
147 |
+
The government maintains a policy of "mother tongue instruction"; most schools use Cantonese as the medium of instruction, with written education in both Chinese and English. Secondary schools emphasise "bi-literacy and tri-lingualism", which has encouraged the growth of instruction in spoken Mandarin.[297]
|
148 |
+
|
149 |
+
Hong Kong has eleven universities. The University of Hong Kong was founded in 1911, during the early colonial period, as the city's first institution of higher education.[298] The Chinese University of Hong Kong was established in 1963 to fill the need for a university that taught using Chinese as its primary language of instruction.[299] Along with the Hong Kong University of Science and Technology and City University of Hong Kong, these universities are ranked among the best in Asia.[300] The Hong Kong Polytechnic University,[301] Hong Kong Baptist University,[302] Lingnan University,[303] Education University of Hong Kong,[304] Open University of Hong Kong,[305] Hong Kong Shue Yan University and The Hang Seng University of Hong Kong were all established in subsequent years.[306]
|
150 |
+
|
151 |
+
Hong Kong's major English-language newspaper is the South China Morning Post, with The Standard serving as a business-oriented alternative. A variety of Chinese-language newspapers are published daily; the most prominent are Ming Pao, Oriental Daily News, and Apple Daily. Local publications are often politically affiliated, with pro-Beijing or pro-democracy sympathies. The central government has a print-media presence in the territory through the state-owned Ta Kung Pao and Wen Wei Po.[307] Several international publications have regional operations in Hong Kong, including The Wall Street Journal, The Financial Times, The New York Times International Edition, USA Today, Yomiuri Shimbun, and The Nikkei.[308]
|
152 |
+
|
153 |
+
Three free-to-air television broadcasters operate in the territory; TVB, HKTVE, and Hong Kong Open TV air three analogue and eight digital channels.[309] TVB, Hong Kong's dominant television network, has an 80 per cent viewer share.[310] Pay TV services operated by Cable TV Hong Kong and PCCW offer hundreds of additional channels and cater to a variety of audiences.[309] RTHK is the public broadcaster, providing seven radio channels and three television channels.[311] Ten non-domestic broadcasters air programming for the territory's foreign population.[309] Access to media and information over the Internet is not subject to mainland Chinese regulations, including the Great Firewall.[312]
|
154 |
+
|
155 |
+
Coordinates: 22°18′N 114°12′E / 22.3°N 114.2°E / 22.3; 114.2
|