de-francophones committed on
Commit
484e82b
1 Parent(s): e5c1e69

19c004395af989bd7914aabe12f0e30562fa2531f6ceba23888002fdfc635ebc

en/586.html.txt ADDED
@@ -0,0 +1,138 @@
+
+
+ The bassoon is a woodwind instrument in the double reed family that plays music written in the bass and tenor clefs, and occasionally the treble. Appearing in its modern form in the 19th century, the bassoon figures prominently in orchestral, concert band, and chamber music literature. It is known for its distinctive tone colour, wide range, variety of character, and agility. The modern bassoon exists in two forms: the Buffet (or French) and Heckel (or German) systems. One who plays a bassoon of either system is called a bassoonist.
+
+ The word bassoon comes from French basson and from Italian bassone (basso with the augmentative suffix -one).[1] However, the Italian name for the same instrument is fagotto, in Spanish and Romanian it is fagot,[2] and in German Fagott. Fagot is an Old French word meaning a bundle of sticks.[3]
+ The dulcian came to be known as fagotto in Italy. The usual etymology that equates fagotto with "bundle of sticks" is somewhat misleading, however, as the latter term did not come into general use until later. An early English variant, "faget", was used as early as 1450 to refer to firewood, some 100 years before the earliest recorded use of the dulcian (1550); further citation is needed to demonstrate that the meaning "bundle of sticks" is unrelated to fagotto and its variants. Some think the instrument may resemble the Roman fasces, a standard of bound sticks with an axe. A further discrepancy lies in the fact that the dulcian was carved out of a single block of wood—in other words, a single "stick" and not a bundle.
+
+ The range of the bassoon begins at B♭1 (the first one below the bass staff) and extends upward over three octaves, roughly to the G above the treble staff (G5).[4]
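+
+ To put those limits in frequency terms, here is a back-of-the-envelope sketch (my own illustration, assuming A4 = 440 Hz and twelve-tone equal temperament; the figures are not from the article). A pitch n semitones above A4 sounds at
+
+ $$f(n) = 440 \cdot 2^{n/12}\ \text{Hz},$$
+
+ so B♭1 (n = −35) is about 58.3 Hz and G5 (n = +10) is about 784 Hz.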
+
+ Most orchestral and concert band parts rarely go higher than C5 or D5; even Stravinsky's famously difficult opening solo in The Rite of Spring only ascends to D5. Notes higher than this are possible but seldom written, as they are usually strenuous and difficult to produce, depending on the construction and behaviour of the reed, and at any rate are quite close in timbre to the same pitches on the cor anglais, which can produce them with relative ease. The French bassoon (see below) has greater facility in the extreme high register, so repertoire written for it is somewhat likelier to include very high notes, but repertoire for the French system can be executed on the German system without alterations, and vice versa.
+
+ As with the other woodwinds, the lowest note is fixed, but A1 is possible with a special extension to the instrument—see "Extended techniques" below.
+
+ Although the primary tone hole pitches are a perfect fifth lower than those of standard woodwinds, effectively an octave beneath the English horn, the bassoon is non-transposing, meaning that notes sounded match the written pitch.
+
+ The bassoon disassembles into six main pieces, including the reed: the bell (6), extending upward; the bass joint (or long joint) (5), connecting the bell and the boot; the boot (or butt) (4), at the bottom of the instrument and folding over on itself; the wing joint (or tenor joint) (3), which extends from boot to bocal; and the bocal (or crook) (2), a crooked metal tube that attaches the wing joint to a reed (1).
+
+ The bore of the bassoon is conical, like that of the oboe and the saxophone, and the two adjoining bores of the boot joint are connected at the bottom of the instrument with a U-shaped metal connector. Both bore and tone holes are precision-machined, and each instrument is finished by hand for proper tuning. The walls of the bassoon are thicker at various points along the bore; here, the tone holes are drilled at an angle to the axis of the bore, which reduces the distance between the holes on the exterior. This ensures coverage by the fingers of the average adult hand. Playing is facilitated by closing the distance between the widely spaced holes with a complex system of key work, which extends throughout nearly the entire length of the instrument. The overall height of the bassoon is 1.34 m (4 ft 5 in), but the total sounding length is 2.54 m (8 ft 4 in), as the tube is doubled back on itself. There are also short-reach bassoons made for the benefit of young or petite players.
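+
+ The quoted sounding length loosely sets the lowest note. As a rough acoustic sketch (my own illustration, not from the article): an idealized conical bore behaves like an open pipe of the same length, with fundamental
+
+ $$f_1 \approx \frac{v}{2L} = \frac{343\ \text{m/s}}{2 \times 2.54\ \text{m}} \approx 67.5\ \text{Hz},$$
+
+ which lands in the neighbourhood of the instrument's actual B♭1 (about 58.3 Hz); the reed, bocal, bell and end corrections account for the difference on a real instrument.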
+
+ A modern beginner's bassoon is generally made of maple, with medium-hardness types such as sycamore maple and sugar maple preferred. Less-expensive models are also made of materials such as polypropylene and ebonite, primarily for student and outdoor use. Metal bassoons were made in the past but have not been produced by any major manufacturer since 1889.
+
+ The art of reed-making has been practised for several hundred years; some of the earliest known reeds were made for the dulcian, a predecessor of the bassoon.[5] Current reed-making follows a set of basic steps; however, individual playing styles vary greatly, so reeds must be customized to suit each bassoonist. Advanced players often go as far as making their own reeds to match their individual playing style. As for commercially made reeds, many companies and individuals offer pre-made reeds for sale, but players often find that such reeds still require adjustment to suit their particular playing style.
+
+ Modern bassoon reeds, made of Arundo donax cane,[6] are often made by the players themselves, although beginner bassoonists tend to buy their reeds from professional reed makers or use reeds made by their teachers. Reeds begin with a length of tube cane that is split into three or four pieces using a tool called a cane splitter. The cane is then trimmed and gouged to the desired thickness, leaving the bark attached. After soaking, the gouged cane is cut to the proper shape and milled to the desired thickness, or profiled, by removing material from the bark side. This can be done by hand with a file; more frequently it is done with a machine or tool designed for the purpose. After the profiled cane has soaked once again, it is folded over in the middle. Prior to soaking, the reed maker will have lightly scored the bark with parallel lines with a knife; this ensures that the cane will assume a cylindrical shape during the forming stage.
+
+ On the bark portion, the reed maker binds on one, two, or three coils or loops of brass wire to aid in the final forming process. The exact placement of these loops can vary somewhat depending on the reed maker. The bound reed blank is then wrapped with thick cotton or linen thread to protect it, and a conical steel mandrel (which sometimes has been heated in a flame) is quickly inserted between the blades. Using a special pair of pliers, the reed maker presses down the cane, making it conform to the shape of the mandrel. (The steam generated by the heated mandrel causes the cane to permanently assume its shape.) The upper portion of the cavity thus created is called the "throat", and its shape has an influence on the final playing characteristics of the reed. The lower, mostly cylindrical portion will be reamed out with a special tool called a reamer, allowing the reed to fit on the bocal.
+
+ After the reed has dried, the wires are tightened around the reed, which has shrunk in drying, or replaced completely. The lower part is sealed (a nitrocellulose-based cement such as Duco may be used) and then wrapped with thread to ensure both that no air leaks out through the bottom of the reed and that the reed maintains its shape. The wrapping itself is often sealed with Duco or clear nail varnish (polish); electrical tape can also be used as a wrapping by amateur reed makers. The bulge in the wrapping is sometimes referred to as the "Turk's head"; it serves as a convenient handle when inserting the reed on the bocal. More recently, players have been choosing modern heat-shrink tubing instead of the time-consuming and fiddly thread, though the thread wrapping (commonly known as a "turban" due to the criss-crossing fabric) is still more common in commercially sold reeds.
+
+ To finish the reed, the end of the reed blank, originally at the center of the unfolded piece of cane, is cut off, creating an opening. The blades above the first wire are now roughly 27–30 mm (1.1–1.2 in) long. For the reed to play, a slight bevel must be created at the tip with a knife, although there is also a machine that can perform this function. Other adjustments with the reed knife may be necessary, depending on the hardness and profile of the cane and the requirements of the player. The reed opening may also need to be adjusted by squeezing either the first or second wire with the pliers. Additional material may be removed from the sides (the "channels") or tip to balance the reed. Additionally, if the E in the bass clef staff is sagging in pitch, it may be necessary to "clip" the reed by removing 1–2 mm (0.039–0.079 in) from its length using a pair of very sharp scissors or the equivalent.[7][8]
+
+ Music historians generally consider the dulcian to be the forerunner of the modern bassoon,[9] as the two instruments share many characteristics: a double reed fitted to a metal crook, obliquely drilled tone holes and a conical bore that doubles back on itself. The origins of the dulcian are obscure, but by the mid-16th century it was available in as many as eight different sizes, from soprano to great bass. A full consort of dulcians was a rarity; its primary function seems to have been to provide the bass in the typical wind band of the time, either loud (shawms) or soft (recorders), indicating a remarkable ability to vary dynamics to suit the need. Otherwise, dulcian technique was rather primitive, with eight finger holes and two keys, indicating that it could play in only a limited number of key signatures.
+
+ Circumstantial evidence indicates that the baroque bassoon was a newly invented instrument, rather than a simple modification of the old dulcian. The dulcian was not immediately supplanted, but continued to be used well into the 18th century by Bach and others; and, presumably for reasons of interchangeability, repertoire from this time is very unlikely to go beyond the smaller compass of the dulcian. The man most likely responsible for developing the true bassoon was Martin Hotteterre (d.1712), who may also have invented the three-piece flûte traversière (transverse flute) and the hautbois (baroque oboe). Some historians believe that sometime in the 1650s, Hotteterre conceived the bassoon in four sections (bell, bass joint, boot and wing joint), an arrangement that allowed greater accuracy in machining the bore compared to the one-piece dulcian. He also extended the compass down to B♭ by adding two keys.[10] An alternate view maintains that Hotteterre was one of several craftsmen responsible for the development of the early bassoon. These may have included additional members of the Hotteterre family, as well as other French makers active around the same time.[11] No original French bassoon from this period survives, but if it did, it would most likely resemble the earliest extant bassoons of Johann Christoph Denner and Richard Haka from the 1680s. Sometime around 1700, a fourth key (G♯) was added, and it was for this type of instrument that composers such as Antonio Vivaldi, Bach, and Georg Philipp Telemann wrote their demanding music. A fifth key, for the low E♭, was added during the first half of the 18th century. Notable makers of the 4-key and 5-key baroque bassoon include J.H. Eichentopf (c. 1678–1769), J. Poerschmann (1680–1757), Thomas Stanesby, Jr. (1668–1734), G.H. Scherer (1703–1778), and Prudent Thieriot (1732–1786).
+
+ Increasing demands on the capabilities of instruments and players in the 19th century—particularly larger concert halls requiring greater volume and the rise of virtuoso composer-performers—spurred further refinement. Increased sophistication, both in manufacturing techniques and acoustical knowledge, made possible great improvements in the instrument's playability.
+
+ The modern bassoon exists in two distinct primary forms, the Buffet (or "French") system and the Heckel ("German") system. Most of the world plays the Heckel system, while the Buffet system is primarily played in France, Belgium, and parts of Latin America. A number of other types of bassoons have been constructed by various instrument makers, such as the rare Galandronome. Owing to the ubiquity of the Heckel system in English-speaking countries, references in English to the contemporary bassoon always mean the Heckel system, with the Buffet system being explicitly qualified where it appears.
+
+ The design of the modern bassoon owes a great deal to the performer, teacher, and composer Carl Almenräder. Assisted by the German acoustic researcher Gottfried Weber, he developed the 17-key bassoon with a range spanning four octaves. Almenräder's improvements to the bassoon began with an 1823 treatise describing ways of improving intonation, response, and technical ease of playing by augmenting and rearranging the keywork. Subsequent articles further developed his ideas. His employment at Schott gave him the freedom to construct and test instruments according to these new designs, and he published the results in Caecilia, Schott's house journal. Almenräder continued publishing and building instruments until his death in 1846, and Ludwig van Beethoven himself requested one of the newly made instruments after hearing of the papers. In 1831, Almenräder left Schott to start his own factory with a partner, Johann Adam Heckel.
+
+ Heckel and two generations of descendants continued to refine the bassoon, and their instruments became the standard, with other makers following. Because of their superior singing tone quality (an improvement upon one of the main drawbacks of the Almenräder instruments), the Heckel instruments competed for prominence with the reformed Wiener system, a Boehm-style bassoon, and a completely keyed instrument devised by Charles-Joseph Sax, father of Adolphe Sax. F.W. Kruspe implemented a latecomer attempt in 1893 to reform the fingering system, but it failed to catch on. Other attempts to improve the instrument included a 24-keyed model and a single-reed mouthpiece, but both had adverse effects on tone and were abandoned.
+
+ By the turn of the 20th century, the Heckel-style German model of bassoon dominated the field. Heckel himself had made over 1,100 instruments by then (serial numbers begin at 3,000), and the British makers' instruments were no longer desirable for the changing pitch requirements of the symphony orchestra, remaining primarily in military band use.
+
+ Except for a brief 1940s wartime conversion to ball-bearing manufacture, the Heckel concern has produced instruments continuously to the present day. Heckel bassoons are considered by many to be the best, although a range of Heckel-style instruments is available from several other manufacturers, all with slightly different playing characteristics.
+
+ Because its mechanism is primitive compared to most modern woodwinds, makers have occasionally attempted to "reinvent" the bassoon. In the 1960s, Giles Brindley began to develop what he called the "logical bassoon", which aimed to improve intonation and evenness of tone through an electrically activated mechanism, making possible key combinations too complex for the human hand to manage. Brindley's logical bassoon was never marketed.
+
+ The Buffet system bassoon achieved its basic acoustical properties somewhat earlier than the Heckel, and thereafter developed in a more conservative manner. While the early history of the Heckel bassoon included a complete overhaul of the instrument in both acoustics and key work, the development of the Buffet system consisted primarily of incremental improvements to the key work. This minimalist approach deprived the Buffet of the improved consistency of intonation, ease of operation, and increased power found in Heckel bassoons, but the Buffet is considered by some to have a more vocal and expressive quality. The conductor John Foulds lamented in 1934 the dominance of the Heckel-style bassoon, considering it too homogeneous in sound with the horn. The modern Buffet system has 22 keys, and its range is the same as the Heckel's, although Buffet instruments have greater facility in the upper registers, reaching E5 and F5 with far greater ease and less air resistance.
+
+ Compared to the Heckel bassoon, Buffet system bassoons have a narrower bore and a simpler mechanism, requiring different, and often more complex, fingerings for many notes. Switching from one system to the other requires extensive retraining. The tone of French woodwind instruments in general exhibits a certain amount of "edge", with more of a vocal quality than is usual elsewhere, and the Buffet bassoon is no exception. This sound has been used effectively in writing for the Buffet bassoon, but it is less inclined to blend than the tone of the Heckel bassoon. As with all bassoons, the tone varies considerably depending on the individual instrument, reed, and performer. In the hands of a lesser player, the Heckel bassoon can sound flat and woody, but good players succeed in producing a vibrant, singing tone. Conversely, a poorly played Buffet can sound buzzy and nasal, but good players succeed in producing a warm, expressive sound.
+
+ Though the United Kingdom once favoured the French system,[12] Buffet-system instruments are no longer made there, and the last prominent British player of the French system retired in the 1980s. However, with continued use in some regions and its distinctive tone, the Buffet continues to have a place in modern bassoon playing, particularly in France, where it originated. Buffet-model bassoons are currently made in Paris by Buffet Crampon and the atelier Ducasse (Romainville, France). The Selmer Company stopped fabrication of French system bassoons around 2012.[13] Some players, for example the late Gerald Corey in Canada, learned to play both types and would alternate between them depending on the repertoire.
+
+ Orchestras first used the bassoon to reinforce the bass line, and as the bass of the double reed choir (oboes and taille). Baroque composer Jean-Baptiste Lully and his Les Petits Violons included oboes and bassoons along with the strings in the 16-piece (later 21-piece) ensemble, one of the first orchestras to include the newly invented double reeds. Antonio Cesti included a bassoon in his 1668 opera Il pomo d'oro (The Golden Apple). However, use of bassoons in concert orchestras was sporadic until the late 17th century, when double reeds began to make their way into standard instrumentation. This was largely due to the spread of the hautbois to countries outside France. Increasing use of the bassoon as a basso continuo instrument meant that it began to be included in opera orchestras, first in France and later in Italy, Germany and England. Meanwhile, composers such as Joseph Bodin de Boismortier, Michel Corrette, Johann Ernst Galliard, Jan Dismas Zelenka, Johann Friedrich Fasch and Telemann wrote demanding solo and ensemble music for the instrument. Antonio Vivaldi brought the bassoon to prominence by featuring it in 37 concerti.
+
+ By the mid-18th century, the bassoon's function in the orchestra was still mostly limited to that of a continuo instrument: since scores often made no specific mention of the bassoon, its use was implied, particularly if there were parts for oboes or other winds. Beginning in the early Rococo era, composers such as Joseph Haydn, Michael Haydn, Johann Christian Bach, Giovanni Battista Sammartini and Johann Stamitz included parts that exploited the bassoon for its unique colour, rather than for its perfunctory ability to double the bass line. Orchestral works with fully independent parts for the bassoon would not become commonplace until the Classical era. Wolfgang Amadeus Mozart's Jupiter symphony is a prime example, with its famous bassoon solos in the first movement. The bassoons were generally paired, as in current practice, though the famed Mannheim orchestra boasted four.
+
+ Another important use of the bassoon during the Classical era was in the Harmonie, a chamber ensemble consisting of pairs of oboes, horns and bassoons; later, two clarinets would be added to form an octet. The Harmonie was an ensemble maintained by German and Austrian noblemen for private music-making, and was a cost-effective alternative to a full orchestra. Haydn, Mozart, Ludwig van Beethoven and Franz Krommer all wrote considerable amounts of music for the Harmonie.
+
+ The formation of the modern wind section in the late Classical period, particularly the dominance of smaller clarinets over the basset horn, created a preponderance of high-pitched woodwind instruments in the section, with lower auxiliaries such as the bass clarinet not yet included. Scoring for the wind section therefore meant that the bassoons would often serve as both bass and tenor, as in the chorales of Beethoven symphonies. Thus, over the Classical period and into the Romantic, although the bassoon retained its function as bass, it also came to be used as a lyrical tenor, particularly in solos (somewhat parallel to the treatment of the cello in the strings). The introduction of the contrabassoon around this time, along with lower horn writing and expanded lower brass, also relieved the bassoons (particularly the principal) of the need to serve as a bass. The increasingly sophisticated mechanism of the instrument throughout this time also meant that it could produce higher pitches with greater facility and more expression, which also factored into the increasing frequency of bassoon solos in orchestral writing.
+
+ The modern symphony orchestra, fully established in the Romantic era, typically calls for two bassoons, often with a third playing or doubling on the contrabassoon. Some works call for four or more players, typically for greater power and diversity of character. The first player is frequently called upon to perform solo passages. In the Romantic and later styles, the versatility of the bassoon's range of character meant that it would be scored in diverse styles, often particular to a composer or national culture and their notion of how to use it. It has been used for lyrical roles, as in Maurice Ravel's Boléro; vocal (and often plaintive or melancholy) ones, as in the symphonies of Tchaikovsky; anguished wailing, as in Shostakovich's 9th; more comical characters, like the grandfather's theme in Peter and the Wolf; or sinister and dark ones, as in the later movements of Symphonie Fantastique.
+
+ Its agility suits it for passages such as the famous running line (doubled in the violas and cellos) in the overture to The Marriage of Figaro. The bassoon's role in the orchestra has changed little since the Romantic era: bass and tenor roles remain common, and, with the expanded tessitura of the 20th century, occasionally alto (or countertenor) ones too. The bassoons often double the celli and double basses, and provide harmonic support along with the French horns.
+
+ A wind ensemble will usually also include two bassoons and sometimes a contrabassoon, each with independent parts. Other types of concert wind ensembles will often have larger sections, with many players on each of the first or second parts; in simpler arrangements there will be only one bassoon part (sometimes played in unison by multiple bassoonists) and no contrabassoon part. The bassoon's role in the concert band is similar to its role in the orchestra, though when scoring is thick it often cannot be heard above the brass instruments also in its range. La Fiesta Mexicana, by H. Owen Reed, features the instrument prominently, as does the transcription of Malcolm Arnold's Four Scottish Dances, which has become a staple of the concert band repertoire.
+
+ The bassoon is part of the standard wind quintet instrumentation, along with the flute, oboe, clarinet, and horn; it is also frequently combined in various ways with other woodwinds. Richard Strauss's "Duet-Concertino" pairs it with the clarinet as concertante instruments, with string orchestra in support. An ensemble known as the "reed quintet", made up of an oboe, clarinet, saxophone, bass clarinet, and bassoon, also makes use of the instrument. In small ensembles such as this, the bassoon's bass function is in greater demand, although in repertoire from the 20th century (when the bassoon's top octave and bass-register horn writing became more frequently employed) bassoon writing may call for it to play with the same agility (and often in the same register) as the smaller woodwinds, as seen in cornerstone works like Summer Music.
+
+ The bassoon quartet has also gained favour in recent times. The bassoon's wide range and variety of tone colours make it well suited to grouping in a like-instrument ensemble. Peter Schickele's "Last Tango in Bayreuth" (after themes from Tristan und Isolde) is a popular work; Schickele's fictional alter ego P. D. Q. Bach exploits the more humorous aspects with his quartet "Lip My Reeds", which at one point calls for players to perform on the reed alone. It also calls for a low A at the very end of the prelude section in the fourth bassoon part; it is written so that the first bassoon does not play, the player's role instead being to place an extension in the bell of the fourth bassoon so that the note can be played.
+
+ The bassoon is infrequently used as a jazz instrument and rarely seen in a jazz ensemble. It first began appearing in the 1920s, including specific calls for its use in Paul Whiteman's group, the unusual octets of Alec Wilder, and a few other session appearances. The next few decades saw the instrument used only sporadically, as symphonic jazz fell out of favour, but the 1960s saw artists such as Yusef Lateef and Chick Corea incorporate the bassoon into their recordings. Lateef's diverse and eclectic instrumentation made the bassoon a natural addition (see, e.g., The Centaur and the Phoenix (1960), which features bassoon as part of a six-man horn section, including a few solos), while Corea employed the bassoon in combination with flautist Hubert Laws.
+
+ More recently, Illinois Jacquet, Ray Pizzi, Frank Tiberi, and Marshall Allen have all doubled on bassoon in addition to their saxophone performances. Bassoonist Karen Borca, a performer of free jazz, is one of the few jazz musicians to play only bassoon; Michael Rabinowitz, the Spanish bassoonist Javier Abad, and James Lassen, an American resident in Bergen, Norway, are others. Katherine Young plays the bassoon in the ensembles of Anthony Braxton. Lindsay Cooper, Paul Hanson, the Brazilian bassoonist Alexandre Silvério, Trent Jacobs and Daniel Smith are also currently using the bassoon in jazz. French bassoonists Jean-Jacques Decreux[14] and Alexandre Ouzounoff[15] have both recorded jazz, exploiting the flexibility of the Buffet system instrument to good effect.
+
+ The bassoon is even rarer as a regular member of rock bands. However, several 1960s pop hits feature the bassoon, including "The Tears of a Clown" by Smokey Robinson and the Miracles (the bassoonist was Charles R. Sirard[16]), "Jennifer Juniper" by Donovan, "59th Street Bridge Song" by Harpers Bizarre, and the oompah bassoon underlying The New Vaudeville Band's "Winchester Cathedral". From 1974 to 1978, the bassoon was played by Lindsay Cooper in the British avant-garde band Henry Cow. The Leonard Nimoy song "The Ballad of Bilbo Baggins" features the bassoon. In the 1970s it was played in the British medieval/progressive rock band Gryphon by Brian Gulland, as well as in the American band Ambrosia, where it was played by drummer Burleigh Drummond. The Belgian Rock in Opposition band Univers Zero is also known for its use of the bassoon.
+
+ In the 1990s, Madonna Wayne Gacy provided bassoon for the alternative metal band Marilyn Manson, as did Aimee DeFoe, in what is self-described as "grouchily lilting garage bassoon", in the indie-rock band Blogurt from Pittsburgh, Pennsylvania;[17] and Bengt Lagerberg, drummer with The Cardigans, played bassoon on several tracks on the band's album Emmerdale.
+
+ More recently, These New Puritans' 2010 album Hidden makes heavy use of the instrument throughout; their principal songwriter, Jack Barnett, claimed repeatedly to be "writing a lot of music for bassoon" in the run-up to its recording.[18] In early 2011, American hip-hop artist Kanye West updated his Twitter account to inform followers that he had recently added the bassoon to a yet-unnamed song.[19]
+ The rock band Better Than Ezra took their name from a passage in Ernest Hemingway's A Moveable Feast in which the author comments that listening to an annoyingly talkative person is still "better than Ezra learning how to play the bassoon", referring to Ezra Pound.
+
+ British psychedelic/progressive rock band Knifeworld features the bassoon playing of Chloe Herrington, who also plays for the experimental chamber rock orchestra Chrome Hoof.
+
+ In 2016, the bassoon was featured on the album Gang Signs and Prayers by UK "grime" artist Stormzy. Played by UK bassoonist Louise Watson, the bassoon is heard in the tracks "Cold" and "Mr Skeng" as a complement to the electronic synthesizer bass lines typically found in this genre.
+
+ The indie rock/pop/folk band Dr. Bones Revival, based in Cleveland, Ohio, features the bassoon in many of their songs. The instrument made its debut with the band at their 2020 charity concert in the Tremont neighborhood. The band members include four resident physicians in the Cleveland metropolitan area.
+
+ The bassoon is held diagonally in front of the player, but unlike the flute, oboe and clarinet, it cannot be easily supported by the player's hands alone. Some means of additional support is usually required; the most common are a seat strap attached to the base of the boot joint, which is laid across the chair seat prior to sitting down, or a neck strap or shoulder harness attached to the top of the boot joint. Occasionally a spike similar to those used for the cello or the bass clarinet is attached to the bottom of the boot joint and rests on the floor. It is possible to play while standing up if the player uses a neck strap or similar harness, or if the seat strap is tied to the belt. Sometimes a device called a balance hanger is used when playing in a standing position. This is installed between the instrument and the neck strap, and shifts the point of support closer to the center of gravity, adjusting the distribution of weight between the two hands.
+
+ The bassoon is played with both hands in a stationary position, the left above the right, with five main finger holes on the front of the instrument (nearest the audience) plus a sixth that is activated by an open-standing key. Five additional keys on the front are controlled by the little fingers of each hand. The back of the instrument (nearest the player) has twelve or more keys to be controlled by the thumbs, the exact number varying depending on model.
+
+ To stabilize the right hand, many bassoonists use an adjustable comma-shaped apparatus called a "crutch", or a hand rest, which mounts to the boot joint. The crutch is secured with a thumb screw, which also allows the distance that it protrudes from the bassoon to be adjusted. Players rest the curve of the right hand where the thumb joins the palm against the crutch. The crutch also keeps the right hand from tiring and enables the player to keep the finger pads flat on the finger holes and keys.
+
+ An aspect of bassoon technique not found on any other woodwind is called flicking. It involves the left thumb momentarily pressing, or "flicking", the high A, C and D keys at the beginning of certain notes in the middle octave to achieve a clean slur from a lower note. This eliminates the cracking, or brief multiphonics, that happens without the use of this technique. The alternative method is "venting", which requires that the register key be used as part of the full fingering, as opposed to being open momentarily at the start of the note. This is sometimes called the "European style"; venting raises the intonation of the notes slightly, and it can be advantageous when tuning to higher frequencies. Some bassoonists flick A and B♭ when tongued, for clarity of articulation, but flicking (or venting) is practically ubiquitous for slurs.
+
+ While flicking is used to slur up to higher notes, the whisper key is used for lower notes. From the A♭ right below middle C and lower, the whisper key is pressed with the left thumb and held for the duration of the note. This prevents cracking, as low notes can sometimes crack into a higher octave. Both flicking and use of the whisper key are especially important to ensure notes speak properly when slurring between high and low registers.
+
+ While bassoons are usually critically tuned at the factory, the player nonetheless has a great degree of flexibility of pitch control through the use of breath support, embouchure, and reed profile. Players can also use alternate fingerings to adjust the pitch of many notes. As on other woodwind instruments, the length of the bassoon can be increased to lower the pitch or decreased to raise it. On the bassoon this is preferably done by changing the bocal to one of a different length (lengths are denoted by a number on the bocal, usually from 0 for the shortest to 3 for the longest, though some manufacturers use other numbers), but it is also possible to push the bocal in or out slightly to adjust the pitch.[20]
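+
+ The underlying relationship is the usual one for air columns (an idealized, first-order sketch, not a claim from the article): for a given mode, frequency is inversely proportional to sounding length,
+
+ $$f \propto \frac{1}{L} \qquad\Rightarrow\qquad \frac{\Delta f}{f} \approx -\frac{\Delta L}{L},$$
+
+ so lengthening the air column by about 1% lowers the pitch by roughly 1%, i.e. about 17 cents (since 1200 · log2(1.01) ≈ 17). In practice the bocal interacts with the reed and bore taper, so the effect is not perfectly uniform across the range.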
+
+ The bassoon embouchure is a very important aspect of producing a full, round, and rich sound on the instrument. The lips are both rolled over the teeth, often with the upper lip further along in an "overbite". The lips provide micromuscular pressure on the entire circumference of the reed, which grossly controls intonation and harmonic excitement, and thus must be constantly modulated with every change of note. How far along the reed the lips are placed affects both tone (less reed in the mouth makes the sound more edged or "reedy", while more reed makes it smoother and less projecting) and the way the reed will respond to pressure.
+
+ The musculature employed in a bassoon embouchure is primarily around the lips, which press the reed into the shapes needed for the desired sound. The jaw is raised or lowered to adjust the oral cavity for better reed control, but the jaw muscles are used much less for upward vertical pressure than with single reeds, being substantially employed only in the very high register. Double reed students, however, often "bite" the reed with these muscles while the control and tone of the labial and other muscles are still developing, but this generally makes the sound sharp and "choked", as it contracts the aperture of the reed and stifles the vibration of its blades.
+
+ Apart from the embouchure proper, students must also develop substantial muscle tone and control in the diaphragm, throat, neck and upper chest, which are all employed to increase and direct air pressure. Air pressure is a very important aspect of the tone, intonation and projection of double reed instruments, affecting these qualities as much as, or more than, the embouchure does.
+
+ Attacking a note on the bassoon with imprecise amounts of muscle or air pressure for the desired pitch will result in poor intonation, cracking or multiphonics, accidental production of the incorrect partial, or the reed not speaking at all. These problems are compounded by the individual qualities of reeds, which are categorically inconsistent in behaviour, for reasons both inherent and external.
+
+ The muscle requirements and variability of reeds mean it takes some time for bassoonists (and oboists) to develop an embouchure that exhibits consistent control across all reeds, dynamics and playing environments.
+
+ The fingering technique of the bassoon varies between players far more than that of any other orchestral woodwind. The complex mechanism and acoustics mean the bassoon lacks simple fingerings of good sound quality or intonation for some notes (especially in the higher range), but, conversely, there is a great variety of superior, though generally more complicated, fingerings for them. Typically, the simpler fingerings for such notes are used as alternate or trill fingerings, and the bassoonist will use as the "full fingering" one or several of the more complex executions possible, for optimal sound quality. The fingerings used are at the discretion of the bassoonist, who may, for particular passages, experiment to find new alternate fingerings that are thus idiomatic to the player.
+
+ These elements have resulted in both "full" and alternate fingerings differing extensively between bassoonists, and they are further informed by factors such as cultural differences in the sound sought, how reeds are made, and regional variation in tuning frequencies (necessitating sharper or flatter fingerings). Regional enclaves of bassoonists tend to have some uniformity in technique, but on a global scale technique differs such that two given bassoonists may share no fingerings for certain notes. Owing to these factors, ubiquitous bassoon technique can only be partially notated.
+
+ The left thumb operates nine keys: B♭1, B1, C2, D2, D5, C5 (also B4), two keys that combined create A4, and the whisper key. The whisper key should be held down for notes between and including F2 and G♯3, and for certain other notes; it can be omitted, but the pitch will destabilise. Additional notes can be created with the left thumb keys: the D2 key and the bottom key above the whisper key on the tenor joint (the C♯ key) together create both C♯3 and C♯4. The same bottom tenor-joint key is also used, with additional fingering, to create E5 and F5. D5 and C5 together create C♯5. When the two tenor-joint keys that create A4 are used with slightly altered fingering on the boot joint, B♭4 is created. The whisper key may also be used at certain points throughout the instrument's high register, along with other fingerings, to alter sound quality as desired.
+
+ The right thumb operates four keys. The uppermost key is used to produce B♭2 and B♭3, and may be used in B4, F♯4, C5, D5, F5, and E♭5. The large circular key, otherwise known as the "pancake key", is held down for all the lowest notes from E2 down to B♭1. It is also used, like the whisper key, in additional fingerings for muting the sound. For example, in Ravel's "Boléro", the bassoon is asked to play the ostinato on G4. This is easy to perform with the normal fingering for G4, but Ravel directs that the player should also depress the E2 (pancake) key to mute the sound (this was written with the Buffet system in mind, on which the G fingering involves the B♭ key; this fingering is sometimes called the "French" G on the Heckel system). The next key operated by the right thumb is known as the "spatula key": its primary use is to produce F♯2 and F♯3. The lowermost key is used less often: it produces A♭2 (G♯2) and A♭3 (G♯3) in a manner that avoids sliding the right fourth finger from another note.
+
+ The four fingers of the left hand can each be used in two different positions. The key normally operated by the index finger is primarily used for E5, also serving for trills in the lower register. Its main assignment is the upper tone hole, which can be closed fully, or partially by rolling down the finger. This half-holing technique is used to overblow F♯3, G3 and G♯3. The middle finger typically stays on the centre hole on the tenor joint. It can also move to a lever used for E♭5, also a trill key. The ring finger operates, on most models, one key. Some bassoons have an alternate E♭ key above the tone hole, predominantly for trills, but many do not. The smallest finger operates two side keys on the bass joint. The lower key is typically used for C♯2, but can also be used for muting or flattening notes in the tenor register. The upper key is used for E♭2, E4, F4, F♯4, A4, B♭4, B4, C5, C♯5, and D5; it flattens G3 and is the standard fingering for that note in many places that tune to lower pitch standards such as A440.
+
+ The four fingers of the right hand have at least one assignment each. The index finger stays over one hole, except that when E♭5 is played a side key at the top of the boot is used (this key also provides a C♯3 trill, albeit sharp on D). The middle finger remains stationary over the hole with a ring around it, and this ring and other pads are lifted when the smallest finger of the right hand pushes a lever. The ring finger typically remains stationary on the lower ring-finger key. However, the upper ring-finger key can be used, typically for B♭2 and B♭3, in place of the top thumb key on the front of the boot joint; this key comes from the oboe, and some bassoons do not have it, because the thumb fingering is practically universal. The smallest finger operates three keys. The backmost one, closest to the bassoonist, is held down throughout most of the bass register. F♯4 may be created with this key, as well as G4, B♭4, B4, and C5 (the latter three employing solely it to flatten and stabilise the pitch). The lowest key for the smallest finger of the right hand is primarily used for A♭2 (G♯2) and A♭3 (G♯3) but can be used to improve D5, E♭5, and F5. The frontmost key is used, in addition to the thumb key, to create G♭2 and G♭3; on many bassoons this key operates a different tone hole from the thumb key and produces a slightly flatter F♯ ("duplicated F♯"). Some techniques use one as standard for both octaves and the other for utility, while others use the thumb key for the lower octave and the fourth finger for the higher.
+
+ Many extended techniques can be performed on the bassoon, such as multiphonics, flutter-tonguing, circular breathing, double tonguing, and harmonics. Flutter-tonguing on the bassoon may be accomplished by "gargling" in the back of the throat as well as by the conventional method of rolling Rs. Multiphonics on the bassoon are plentiful and can be achieved by using particular alternative fingerings, though they are heavily influenced by embouchure position. Also, again using certain fingerings, notes may be produced that sound lower than the actual range of the instrument. These notes tend to sound very gravelly and out of tune, but technically sound below the low B♭.
+
+ The bassoonist may also produce lower notes than the bottom B♭ by extending the length of the bell. This can be achieved by inserting a specially made "low A extension" into the bell, but may also be achieved with a small paper or rubber tube or a clarinet/cor anglais bell sitting inside the bassoon bell (although the note may tend sharp). The effect of this is to convert the lower B♭ into a lower note, almost always A natural; it broadly lowers the pitch of the instrument (most noticeably in the lower register) and will often accordingly convert the lowest B to B♭ (and render the neighbouring C very flat). The idea of using the low A originated with Richard Wagner, who wanted to extend the range of the bassoon. Many passages in his later operas require the low A as well as the B♭ immediately above it; this is possible on a normal bassoon using an extension that also flattens the low B to B♭, but all extensions to the bell have significant effects on intonation and sound quality in the bottom register of the instrument, and such passages are more often realised, with comparative ease, by the contrabassoon.
+
+ Some bassoons have been specially made to allow bassoonists to realise similar passages. These bassoons are made with a "Wagner bell", an extended bell with keys for both the low A and the low B♭, but they are not widespread; bassoons with Wagner bells suffer intonational problems similar to those of a bassoon with an ordinary A extension, and a bassoon must be constructed specifically to accommodate one, making the extension option far less complicated. Extending the bassoon's range even lower than the A, though possible, would have even stronger effects on pitch and would make the instrument effectively unusable.
+
+ Despite the logistical difficulties of the note, Wagner was not the only composer to write the low A. Gustav Mahler also required the bassoon to be chromatic down to low A, and Richard Strauss calls for it in his opera Intermezzo. Some works have optional low As, as in Carl Nielsen's Wind Quintet, op. 43, which includes an optional low A for the final cadence of the work.
+
+ The complicated fingering and the problem of reeds make the bassoon more of a challenge to learn than some of the other woodwind instruments.[21] Cost is another significant factor in a person's decision to pursue the bassoon, with prices ranging from US$7,000 to over $45,000 for a good-quality instrument.[22] In North America, schoolchildren typically take up the bassoon only after starting on another reed instrument, such as clarinet or saxophone.[23]
+
+ Students in America often begin to pursue the study of bassoon performance and technique in the middle years of their music education. Students are often provided with a school instrument and encouraged to pursue lessons with private instructors. Students typically receive instruction in proper posture, hand position, embouchure, and tone production.
en/5860.html.txt ADDED
@@ -0,0 +1,359 @@
+
+
+
+
+ The European Union (EU) is a political and economic union of 27 member states that are located primarily in Europe.[11] Its members have a combined area of 4,233,255.3 km2 (1,634,469.0 sq mi) and an estimated total population of about 447 million. The EU has developed an internal single market through a standardised system of laws that apply in all member states in those matters, and only those matters, where members have agreed to act as one. EU policies aim to ensure the free movement of people, goods, services and capital within the internal market;[12] enact legislation in justice and home affairs; and maintain common policies on trade,[13] agriculture,[14] fisheries and regional development.[15] Passport controls have been abolished for travel within the Schengen Area.[16] A monetary union was established in 1999, coming into full force in 2002, and is composed of 19 EU member states which use the euro currency. The EU has often been described as a sui generis political entity (without precedent or comparison).[17][18]
+
+ The EU and European citizenship were established when the Maastricht Treaty came into force in 1993.[19] The EU traces its origins to the European Coal and Steel Community (ECSC) and the European Economic Community (EEC), established, respectively, by the 1951 Treaty of Paris and the 1957 Treaty of Rome. The original members of what came to be known as the European Communities were the Inner Six: Belgium, France, Italy, Luxembourg, the Netherlands, and West Germany. The Communities and their successors have grown in size by the accession of new member states and in power by the addition of policy areas to their remit. The latest major amendment to the constitutional basis of the EU, the Treaty of Lisbon, came into force in 2009.
+
+ On 31 January 2020, the United Kingdom became the first member state to leave the EU.[20] Following a 2016 referendum, the UK signified its intention to leave and negotiated a withdrawal agreement. The UK is in a transitional phase until at least 31 December 2020, during which it remains subject to EU law and part of the EU single market and customs union. Before this, three territories of member states had left the EU or its forerunners: French Algeria (in 1962, upon independence), Greenland (in 1985, following a referendum) and Saint Barthélemy (in 2012).
+
+ Containing some 5.8% of the world population in 2020,[d] the EU (excluding the United Kingdom) generated a nominal gross domestic product (GDP) of around US$15.5 trillion in 2019,[8] constituting approximately 18% of global nominal GDP.[22] Additionally, all EU countries have a very high Human Development Index according to the United Nations Development Programme. In 2012, the EU was awarded the Nobel Peace Prize.[23] Through the Common Foreign and Security Policy, the union has developed a role in external relations and defence. It maintains permanent diplomatic missions throughout the world and represents itself at the United Nations, the World Trade Organization, the G7 and the G20. Due to its global influence, the European Union has been described as an emerging superpower.[24]
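+
+ (A quick arithmetic check of those two figures, my own calculation rather than the article's: if US$15.5 trillion is about 18% of global nominal GDP, then global GDP ≈ 15.5 / 0.18 ≈ US$86 trillion, broadly consistent with published 2019 estimates.)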
+
+ The following timeline outlines the legal inception of the European Union (EU), from the post-war period until the union's founding and consolidation in 1993 and 2009, respectively. This integration, also referred to as the European project or the construction of Europe (French: la construction européenne), involved treaty-based European cooperation in various policy areas, including the European Communities that were founded in the 1950s in the spirit of the Schuman Declaration.
+
+
+
+ During the centuries following the fall of Rome in 476, several European states viewed themselves as translatio imperii ("transfer of rule") of the defunct Roman Empire: the Frankish Empire (481–843) and the Holy Roman Empire (962–1806) were thereby attempts to resurrect Rome in the West.[e] This political philosophy of a supra-national rule over the continent, similar to the example of the ancient Roman Empire, resulted in the early Middle Ages in the concept of a renovatio imperii ("restoration of the empire"),[26] either in the form of the Reichsidee ("imperial idea") or the religiously inspired Imperium Christianum ("Christian empire").[27][28] Medieval Christendom[29][30] and the political power of the Papacy[31][32] are often cited as conducive to European integration and unity.
+
+ In the eastern parts of the continent, the Russian Tsardom, and ultimately the Empire (1547–1917), declared Moscow to be the Third Rome and the inheritor of the Eastern tradition after the fall of Constantinople in 1453.[33] The gap between Greek East and Latin West had already been widened by the political scission of the Roman Empire in the 4th century and the Great Schism of 1054, and would eventually be widened again by the Iron Curtain (1945–1991).[34]
+
+ Pan-European political thought truly emerged during the 19th century, inspired by the liberal ideas of the French and American Revolutions after the demise of Napoléon's Empire (1804–1815). In the decades following the outcomes of the Congress of Vienna, ideals of European unity flourished across the continent, especially in the writings of Wojciech Jastrzębowski,[35] Giuseppe Mazzini,[36] and Theodore de Korwin Szymanowski.[37] The term United States of Europe (French: États-Unis d'Europe) was used at that time by Victor Hugo during a speech at the International Peace Congress held in Paris in 1849:[38]
+
+ A day will come when all nations on our continent will form a European brotherhood ... A day will come when we shall see ... the United States of America and the United States of Europe face to face, reaching out for each other across the seas.
+
+ During the interwar period, the consciousness that national markets in Europe were interdependent though confrontational, along with the observation of a larger and growing US market on the other side of the ocean, nourished the urge for the economic integration of the continent.[39] In 1920, advocating the creation of a European economic union, British economist John Maynard Keynes wrote that "a Free Trade Union should be established ... to impose no protectionist tariffs whatever against the produce of other members of the Union."[40] During the same decade, Richard von Coudenhove-Kalergi, one of the first to imagine a modern political union of Europe, founded the Pan-Europa Movement.[41] His ideas influenced his contemporaries, among them the then Prime Minister of France, Aristide Briand. In 1929, the latter gave a speech in favour of a European Union before the assembly of the League of Nations, precursor of the United Nations.[42] In a radio address in March 1943, with war still raging, Britain's leader Sir Winston Churchill spoke warmly of "restoring the true greatness of Europe" once victory had been achieved, and mused on the post-war creation of a "Council of Europe" which would bring the European nations together to build peace.[43][44]
26
+
27
+ After World War II, European integration was seen as an antidote to the extreme nationalism which had devastated parts of the continent.[45] In a speech delivered on 19 September 1946 at the University of Zürich, Switzerland, Winston Churchill went further and advocated the emergence of a United States of Europe.[46] The 1948 Hague Congress was a pivotal moment in European federal history, as it led to the creation of the European Movement International and of the College of Europe, where Europe's future leaders would live and study together.[47]
28
+
29
+ It also led directly to the founding of the Council of Europe in 1949, the first great effort to bring the nations of Europe together, initially ten of them. The Council focused primarily on values—human rights and democracy—rather than on economic or trade issues, and was always envisaged as a forum where sovereign governments could choose to work together, with no supra-national authority. It raised great hopes of further European integration, and there were fevered debates in the two years that followed as to how this could be achieved.
30
+
31
But in 1952, disappointed at what they saw as the lack of progress within the Council of Europe, six nations decided to go further and created the European Coal and Steel Community, which was declared to be "a first step in the federation of Europe".[48] This community helped to integrate economically and to coordinate the large inflow of Marshall Plan funds from the United States.[49] European leaders Alcide De Gasperi from Italy, Jean Monnet and Robert Schuman from France, and Paul-Henri Spaak from Belgium understood that coal and steel were the two industries essential for waging war, and believed that by tying their national industries together, future war between their nations became much less likely.[50] These men and others are officially credited as the founding fathers of the European Union.

In 1957, Belgium, France, Italy, Luxembourg, the Netherlands, and West Germany signed the Treaty of Rome, which created the European Economic Community (EEC) and established a customs union. They also signed another pact creating the European Atomic Energy Community (Euratom) for co-operation in developing nuclear energy. Both treaties came into force in 1958.[50]

The EEC and Euratom were created separately from the ECSC, although they shared the same courts and the Common Assembly. The EEC was headed by Walter Hallstein (Hallstein Commission) and Euratom was headed by Louis Armand (Armand Commission) and then Étienne Hirsch. Euratom was to integrate sectors in nuclear energy while the EEC would develop a customs union among members.[51][52]

During the 1960s, tensions began to show, with France seeking to limit supranational power. Nevertheless, in 1965 an agreement was reached, and on 1 July 1967 the Merger Treaty created a single set of institutions for the three communities, which were collectively referred to as the European Communities.[53][54] Jean Rey presided over the first merged Commission (Rey Commission).[55]

In 1973, the Communities were enlarged to include Denmark (including Greenland, which later left the Communities in 1985 following a dispute over fishing rights), Ireland, and the United Kingdom.[56] Norway had negotiated to join at the same time, but Norwegian voters rejected membership in a referendum. In 1979, the first direct elections to the European Parliament were held.[57]

Greece joined in 1981, with Portugal and Spain following in 1986.[58] In 1985, the Schengen Agreement paved the way for the creation of open borders without passport controls between most member states and some non-member states.[59] In 1986, the European flag began to be used by the EEC[60] and the Single European Act was signed.

In 1990, after the fall of the Eastern Bloc, the former East Germany became part of the Communities as part of a reunified Germany.[61]

The European Union was formally established when the Maastricht Treaty—whose main architects were Helmut Kohl and François Mitterrand—came into force on 1 November 1993.[19][62] The treaty also gave the name European Community to the EEC, even though the EEC had often been referred to as such before the treaty. With further enlargement planned to include the former communist states of Central and Eastern Europe, as well as Cyprus and Malta, the Copenhagen criteria for candidate members to join the EU were agreed upon in June 1993. The expansion of the EU introduced a new level of complexity and discord.[63] In 1995, Austria, Finland, and Sweden joined the EU.

In 2002, euro banknotes and coins replaced national currencies in 12 of the member states. Since then, the eurozone has increased to encompass 19 countries. The euro currency became the second-largest reserve currency in the world. In 2004, the EU saw its biggest enlargement to date when Cyprus, the Czech Republic, Estonia, Hungary, Latvia, Lithuania, Malta, Poland, Slovakia, and Slovenia joined the Union.[64]

In 2007, Bulgaria and Romania became EU members. The same year, Slovenia adopted the euro,[64] followed in 2008 by Cyprus and Malta, by Slovakia in 2009, by Estonia in 2011, by Latvia in 2014, and by Lithuania in 2015.

On 1 December 2009, the Lisbon Treaty entered into force and reformed many aspects of the EU. In particular, it changed the legal structure of the European Union, merging the EU's three-pillar system into a single legal entity with legal personality, created a permanent President of the European Council, the first of whom was Herman Van Rompuy, and strengthened the position of the High Representative of the Union for Foreign Affairs and Security Policy.[65][66]

In 2012, the EU received the Nobel Peace Prize for having "contributed to the advancement of peace and reconciliation, democracy, and human rights in Europe."[67][68] In 2013, Croatia became the 28th EU member.[69]

From the beginning of the 2010s, the cohesion of the European Union has been tested by several issues, including a debt crisis in some of the eurozone countries, increasing migration from Africa and Asia, and the United Kingdom's withdrawal from the EU.[70] A referendum in the UK on its membership of the European Union was held in 2016, with 51.9% of participants voting to leave.[71] The UK formally notified the European Council of its decision to leave on 29 March 2017, initiating the formal withdrawal procedure for leaving the EU; following extensions to the process, the UK left the European Union on 31 January 2020, though most areas of EU law will continue to apply to the UK for a transition period lasting until the end of 2020 at the earliest.[72]

As of 1 February 2020[update], the population of the European Union was about 447 million people (5.8% of the world population).[73][74] In 2015, 5.1 million children were born in the EU-28, corresponding to a birth rate of 10 per 1,000, which is 8 births per 1,000 below the world average.[75] For comparison, the EU-28 birth rate had stood at 10.6 in 2000, 12.8 in 1985 and 16.3 in 1970.[76] Its population growth rate was positive at an estimated 0.23% in 2016.[77]

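To see how these figures fit together, here is a rough arithmetic check; the EU-28 population of roughly 510 million in 2015 is an assumption supplied here, not a figure given above:

\[
\text{crude birth rate} = \frac{\text{live births}}{\text{population}} \times 1000 \approx \frac{5.1 \times 10^{6}}{510 \times 10^{6}} \times 1000 = 10 \text{ per } 1{,}000
\]
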
In 2010, 47.3 million people who lived in the EU were born outside their resident country. This corresponds to 9.4% of the total EU population. Of these, 31.4 million (6.3%) were born outside the EU and 16.0 million (3.2%) were born in another EU member state. The largest absolute numbers of people born outside the EU were in Germany (6.4 million), France (5.1 million), the United Kingdom (4.7 million), Spain (4.1 million), Italy (3.2 million), and the Netherlands (1.4 million).[78] In 2017, approximately 825,000 people acquired citizenship of a member state of the European Union. The largest groups were nationals of Morocco, Albania, India, Turkey and Pakistan.[79] 2.4 million immigrants from non-EU countries entered the EU in 2017.[80][81]

The EU contains about 40 urban areas with populations of over one million. The largest metropolitan area in the EU is Paris.[82] It is followed by Madrid, Barcelona, Berlin, Rhine-Ruhr, Rome, and Milan, all with a metropolitan population of over 4 million.[83]

The EU also has numerous polycentric urbanised regions like Rhine-Ruhr (Cologne, Dortmund, Düsseldorf et al.), Randstad (Amsterdam, Rotterdam, The Hague, Utrecht et al.), Frankfurt Rhine-Main (Frankfurt), the Flemish Diamond (Antwerp, Brussels, Leuven, Ghent et al.) and the Upper Silesian area (Katowice, Ostrava et al.).[82]

The European Union has 24 official languages: Bulgarian, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, German, Greek, Hungarian, Irish, Italian, Latvian, Lithuanian, Maltese, Polish, Portuguese, Romanian, Slovak, Slovene, Spanish, and Swedish. Important documents, such as legislation, are translated into every official language and the European Parliament provides translation for documents and plenary sessions.[89][90][91]

Due to the high number of official languages, most of the institutions use only a handful of working languages.[92] The European Commission conducts its internal business in three procedural languages: English, French, and German. Similarly, the Court of Justice of the European Union uses French as the working language,[93] while the European Central Bank conducts its business primarily in English.[94][95]

Even though language policy is the responsibility of member states, EU institutions promote multilingualism among its citizens.[h][96] English is the most widely spoken language in the EU, being understood by 51% of the EU population when counting both native and non-native speakers.[97] German is the most widely spoken mother tongue (18% of the EU population), and the second most widely understood foreign language, followed by French (13% of the EU population). In addition, both are official languages of several EU member states. More than half (56%) of EU citizens are able to engage in a conversation in a language other than their mother tongue.[98]

A total of twenty official languages of the EU belong to the Indo-European language family, represented by the Balto-Slavic,[i] the Italic,[j] the Germanic,[k] the Hellenic,[l] and the Celtic[m] branches. Only four languages, namely Hungarian, Finnish, Estonian (all three Uralic), and Maltese (Semitic), are not Indo-European languages.[99] The three official alphabets of the European Union (Cyrillic, Latin, and modern Greek) all derive from the Archaic Greek scripts.[3][100]

Luxembourgish (in Luxembourg) and Turkish (in Cyprus) are the only two national languages that are not official languages of the EU. On 26 February 2016 it was made public that Cyprus had asked to make Turkish an official EU language, in a "gesture" that could help solve the division of the country.[101] Already in 2004, it was planned that Turkish would become an official language if Cyprus reunified.[102]

Besides the 24 official languages, there are about 150 regional and minority languages, spoken by up to 50 million people.[99] Catalan, Galician and Basque are not recognised official languages of the European Union but have semi-official status in one member state (Spain): therefore, official translations of the treaties are made into them and citizens have the right to correspond with the institutions in these languages.[103][104] The European Charter for Regional or Minority Languages, ratified by most EU states, provides general guidelines that states can follow to protect their linguistic heritage. The European Day of Languages is held annually on 26 September and is aimed at encouraging language learning across Europe.[105]

The EU has no formal connection to any religion. Article 17 of the Treaty on the Functioning of the European Union[106] recognises the "status under national law of churches and religious associations" as well as that of "philosophical and non-confessional organisations".[107]

The preamble to the Treaty on European Union mentions the "cultural, religious and humanist inheritance of Europe".[107] Discussion over the draft texts of the European Constitution and later the Treaty of Lisbon included proposals to mention Christianity or a god, or both, in the preamble of the text, but the idea faced opposition and was dropped.[108]

Christians in the European Union are divided among members of Catholicism (both Roman and Eastern Rite), numerous Protestant denominations (Anglicans, Lutherans, and Reformed forming the bulk of this category), and the Eastern Orthodox Church. In 2009, the EU had an estimated Muslim population of 13 million,[109] and an estimated Jewish population of over a million.[110] The other world religions of Buddhism, Hinduism, and Sikhism are also represented in the EU population.

According to a 2015 Eurobarometer poll on religiosity in the European Union, Christianity is the largest religion in the European Union, accounting for 71.6% of the EU population. Catholics are the largest Christian group, accounting for 45.3% of the EU population, while Protestants make up 11.1%, Eastern Orthodox make up 9.6%, and other Christians make up 5.6%.[4]

Eurostat's Eurobarometer opinion polls showed in 2005 that 52% of EU citizens believed in a god, 27% in "some sort of spirit or life force", and 18% had no form of belief.[111] Many countries have experienced falling church attendance and membership in recent years.[112] The countries where the fewest people reported a religious belief were Estonia (16%) and the Czech Republic (19%).[111] The most religious countries were Malta (95%, predominantly Catholic) as well as Cyprus and Romania (both predominantly Orthodox), each with about 90% of citizens professing a belief in their respective god. Across the EU, belief was higher among women, older people, those with a religious upbringing, those who left school at 15 or 16, and those "positioning themselves on the right of the political scale".[111]

Through successive enlargements, the European Union has grown from the six founding states (Belgium, France, West Germany, Italy, Luxembourg, and the Netherlands) to the current 27. Countries accede to the union by becoming party to the founding treaties, thereby subjecting themselves to the privileges and obligations of EU membership. This entails a partial delegation of sovereignty to the institutions in return for representation within those institutions, a practice often referred to as "pooling of sovereignty".[113][114]

To become a member, a country must meet the Copenhagen criteria, defined at the 1993 meeting of the European Council in Copenhagen. These require a stable democracy that respects human rights and the rule of law; a functioning market economy; and the acceptance of the obligations of membership, including EU law. Evaluation of a country's fulfilment of the criteria is the responsibility of the European Council.[115] Article 50 of the Lisbon Treaty provides the basis for a member to leave the Union. So far, two departures have taken place: Greenland (an autonomous province of Denmark) withdrew in 1985;[116] the United Kingdom formally invoked Article 50 of the Consolidated Treaty on European Union in 2017 and became the only sovereign state to leave when it withdrew from the EU in 2020.

There are six countries that are recognised as candidates for membership: Albania, Iceland, North Macedonia,[n] Montenegro, Serbia, and Turkey,[117] though Iceland suspended negotiations in 2013.[118] Bosnia and Herzegovina and Kosovo are officially recognised as potential candidates,[117] with Bosnia and Herzegovina having submitted a membership application.

The four countries forming the European Free Trade Association (EFTA) are not EU members, but have partly committed to the EU's economy and regulations: Iceland, Liechtenstein and Norway, which are a part of the single market through the European Economic Area, and Switzerland, which has similar ties through bilateral treaties.[119][120] The relationships of the European microstates, Andorra, Monaco, San Marino, and Vatican City, include the use of the euro and other areas of co-operation.[121] The following 27 sovereign states constitute the European Union:[122]

The EU's member states cover an area of 4,233,262 square kilometres (1,634,472 sq mi).[p] The EU's highest peak is Mont Blanc in the Graian Alps, 4,810.45 metres (15,782 ft) above sea level.[123] The lowest points in the EU are Lammefjorden, Denmark and Zuidplaspolder, Netherlands, at 7 m (23 ft) below sea level.[124] The landscape, climate, and economy of the EU are influenced by its coastline, which is 65,993 kilometres (41,006 mi) long.

65,993 km (41,006 mi) coastline dominates the European climate (Natural Park of Penyal d'Ifac, Spain)

Mont Blanc in the Alps is the highest peak in the EU

The Danube (pictured in Budapest) is the longest river in the European Union

Repovesi National Park in Finland, where there are some 187,888 lakes larger than 500 square metres (5,382 sq ft)

Including the overseas territories of France, which are located outside the continent of Europe but are members of the union, the EU experiences most types of climate from Arctic (north-east Europe) to tropical (French Guiana), rendering meteorological averages for the EU as a whole meaningless. The majority of the population lives in areas with a temperate maritime climate (North-Western Europe and Central Europe), a Mediterranean climate (Southern Europe), or a warm summer continental or hemiboreal climate (Northern Balkans and Central Europe).[125]

The EU's population is highly urbanised, with some 75% of inhabitants living in urban areas as of 2006. Cities are largely spread out across the EU, with a large grouping in and around the Benelux.[126]

The EU operates through a hybrid system of supranational and intergovernmental decision-making,[127][128] and according to the principles of conferral (which says that it should act only within the limits of the competences conferred on it by the treaties) and of subsidiarity (which says that it should act only where an objective cannot be sufficiently achieved by the member states acting alone). Laws made by the EU institutions are passed in a variety of forms.[129] Generally speaking, they can be classified into two groups: those which come into force without the necessity for national implementation measures (regulations) and those which specifically require national implementation measures (directives).[130]

Constitutionally, the EU bears some resemblance to both a confederation and a federation,[131][132] but has not formally defined itself as either. (It does not have a formal constitution: its status is defined by the Treaty on European Union and the Treaty on the Functioning of the European Union.) It is more integrated than a traditional confederation of states because the general level of government widely employs qualified majority voting in some decision-making among the member states, rather than relying exclusively on unanimity.[133][134] It is less integrated than a federal state because it is not a state in its own right: sovereignty continues to flow 'from the bottom up', from the several peoples of the separate member states, rather than from a single undifferentiated whole. This is reflected in the fact that the member states remain the 'masters of the Treaties', retaining control over the allocation of competences to the Union through constitutional change (thus retaining so-called Kompetenz-Kompetenz); in that they retain control of the use of armed force; in that they retain control of taxation; and in that they retain a right of unilateral withdrawal from the Union under Article 50 of the Treaty on European Union. In addition, the principle of subsidiarity requires that only those matters that need to be determined collectively are so determined.

The European Union has seven principal decision-making bodies, its institutions: the European Parliament, the European Council, the Council of the European Union, the European Commission, the Court of Justice of the European Union, the European Central Bank and the European Court of Auditors. Competence in scrutinising and amending legislation is shared between the Council of the European Union and the European Parliament, while executive tasks are performed by the European Commission and in a limited capacity by the European Council (not to be confused with the aforementioned Council of the European Union). The monetary policy of the eurozone is determined by the European Central Bank. The interpretation and the application of EU law and the treaties are ensured by the Court of Justice of the European Union. The EU budget is scrutinised by the European Court of Auditors. There are also a number of ancillary bodies which advise the EU or operate in a specific area.

EU policy is in general promulgated by EU directives, which are then implemented in the domestic legislation of its member states, and EU regulations, which are immediately enforceable in all member states. Lobbying at EU level by special interest groups is regulated to try to balance the aspirations of private initiatives with the public-interest decision-making process.[135]

The European Parliament is one of three legislative institutions of the EU, which together with the Council of the European Union is tasked with amending and approving the Commission's proposals. The 705 Members of the European Parliament (MEPs) are directly elected by EU citizens every five years on the basis of proportional representation. MEPs are elected on a national basis and they sit according to political groups rather than their nationality. Each country has a set number of seats and is divided into sub-national constituencies where this does not affect the proportional nature of the voting system.[136]

In the ordinary legislative procedure, the European Commission proposes legislation, which requires the joint approval of the European Parliament and the Council of the European Union to pass. This process applies to nearly all areas, including the EU budget. The Parliament is the final body to approve or reject the proposed membership of the Commission, and can attempt motions of censure on the Commission by appeal to the Court of Justice. The President of the European Parliament (currently David Sassoli) carries out the role of speaker in Parliament and represents it externally. The President and Vice-Presidents are elected by MEPs every two and a half years.[137]

The European Council gives political direction to the EU. It convenes at least four times a year and comprises the President of the European Council (currently Charles Michel), the President of the European Commission and one representative per member state (either its head of state or head of government). The High Representative of the Union for Foreign Affairs and Security Policy (currently Josep Borrell) also takes part in its meetings. It has been described by some as the Union's "supreme political authority".[138] It is actively involved in the negotiation of treaty changes and defines the EU's policy agenda and strategies.

The European Council uses its leadership role to sort out disputes between member states and the institutions, and to resolve political crises and disagreements over controversial issues and policies. It acts externally as a "collective head of state" and ratifies important documents (for example, international agreements and treaties).[139]

The President of the European Council is tasked with ensuring the external representation of the EU,[140] and with driving consensus and resolving divergences among member states, both during meetings of the European Council and over the periods between them.

The European Council should not be mistaken for the Council of Europe, an international organisation independent of the EU based in Strasbourg.

The European Commission acts both as the EU's executive arm, responsible for the day-to-day running of the EU, and as the legislative initiator, with the sole power to propose laws for debate.[141][142][143] The Commission is 'guardian of the Treaties' and is responsible for their efficient operation and policing.[144] It operates de facto as a cabinet government,[citation needed] with 27 Commissioners for different areas of policy, one from each member state, though Commissioners are bound to represent the interests of the EU as a whole rather than their home state.

One of the 27 is the President of the European Commission (Jean-Claude Juncker for 2014–2019), appointed by the European Council, subject to the Parliament's approval. After the President, the most prominent Commissioner is the High Representative of the Union for Foreign Affairs and Security Policy, who is ex officio a Vice-President of the Commission and is also chosen by the European Council.[145] The other 26 Commissioners are subsequently appointed by the Council of the European Union in agreement with the nominated President. The 27 Commissioners as a single body are subject to approval (or otherwise) by vote of the European Parliament.

The Council of the European Union (also called the "Council"[146] and the "Council of Ministers", its former title)[147] forms one half of the EU's legislature. It consists of a government minister from each member state and meets in different compositions depending on the policy area being addressed. Notwithstanding its different configurations, it is considered to be one single body.[148] In addition to its legislative functions, the Council also exercises executive functions in relation to the Common Foreign and Security Policy.

In some policies, there are several member states that ally with strategic partners within the Union. Examples of such alliances include the Visegrad Group, Benelux, the Baltic Assembly, the New Hanseatic League, and the Craiova Group.

The EU had an agreed budget of €120.7 billion for the year 2007 and €864.3 billion for the period 2007–2013,[150] representing 1.10% and 1.05% of the EU-27's GNI forecast for the respective periods. In 1960, the budget of the then European Economic Community was 0.03% of GDP.[151]

In the 2010 budget of €141.5 billion, the largest single expenditure item was "cohesion & competitiveness", with around 45% of the total budget.[152] Next came "agriculture", with approximately 31% of the total.[152] "Rural development, environment and fisheries" took up around 11%.[152] "Administration" accounted for around 6%.[152] The "EU as a global partner" and "citizenship, freedom, security and justice" brought up the rear with approximately 6% and 1% respectively.[152]

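As a quick consistency check, these rounded shares account for essentially the entire 2010 budget:

\[
45\% + 31\% + 11\% + 6\% + 6\% + 1\% = 100\%
\]
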
The Court of Auditors is legally obliged to provide the Parliament and the Council (specifically, the Economic and Financial Affairs Council) with "a statement of assurance as to the reliability of the accounts and the legality and regularity of the underlying transactions".[153] The Court also gives opinions and proposals on financial legislation and anti-fraud actions.[154] The Parliament uses this to decide whether to approve the Commission's handling of the budget.

The European Court of Auditors has signed off the European Union accounts every year since 2007 and, while making it clear that the European Commission has more work to do, has highlighted that most of the errors take place at the national level.[155][156] In their report on 2009, the auditors found that five areas of Union expenditure, including agriculture and the cohesion fund, were materially affected by error.[157] The European Commission estimated in 2009 that the financial effect of irregularities was €1,863 million.[158]

EU member states retain all powers not explicitly handed to the European Union. In some areas the EU enjoys exclusive competence. These are areas in which member states have renounced any capacity to enact legislation. In other areas the EU and its member states share the competence to legislate. While both can legislate, member states can only legislate to the extent to which the EU has not. In other policy areas the EU can only co-ordinate, support and supplement member state action but cannot enact legislation with the aim of harmonising national laws.[159]

That a particular policy area falls into a certain category of competence is not necessarily indicative of what legislative procedure is used for enacting legislation within that policy area. Different legislative procedures are used within the same category of competence, and even within the same policy area.

The distribution of competences in various policy areas between member states and the Union is divided into the following three categories:

The EU is based on a series of treaties. These first established the European Community and the EU, and then made amendments to those founding treaties.[161] These are power-giving treaties which set broad policy goals and establish institutions with the necessary legal powers to implement those goals. These legal powers include the ability to enact legislation[q] which can directly affect all member states and their inhabitants.[r] The EU has legal personality, with the right to sign agreements and international treaties.[162]

Under the principle of supremacy, national courts are required to enforce the treaties that their member states have ratified, and thus the laws enacted under them, even if doing so requires them to ignore conflicting national law, and (within limits) even constitutional provisions.[s]

The direct effect and supremacy doctrines were not explicitly set out in the European Treaties but were developed by the Court of Justice itself over the 1960s, apparently under the influence of its then most influential judge, the Frenchman Robert Lecourt.[163]

The judicial branch of the EU—formally called the Court of Justice of the European Union—consists of two courts: the Court of Justice and the General Court.[164] The Court of Justice primarily deals with cases brought by member states and the institutions, and with cases referred to it by the courts of member states.[165] Because of the doctrines of direct effect and supremacy, many judgments of the Court of Justice are automatically applicable within the internal legal orders of the member states.

The General Court mainly deals with cases taken by individuals and companies directly before the EU's courts,[166] and the European Union Civil Service Tribunal adjudicates in disputes between the European Union and its civil service.[167] Decisions from the General Court can be appealed to the Court of Justice, but only on a point of law.[168]

The treaties declare that the EU itself is "founded on the values of respect for human dignity, freedom, democracy, equality, the rule of law and respect for human rights, including the rights of persons belonging to minorities ... in a society in which pluralism, non-discrimination, tolerance, justice, solidarity and equality between women and men prevail."[169]

In 2009, the Lisbon Treaty gave legal effect to the Charter of Fundamental Rights of the European Union. The charter is a codified catalogue of fundamental rights against which the EU's legal acts can be judged. It consolidates many rights which were previously recognised by the Court of Justice and derived from the "constitutional traditions common to the member states."[170] The Court of Justice has long recognised fundamental rights and has, on occasion, invalidated EU legislation based on its failure to adhere to those fundamental rights.[171]

Signing the European Convention on Human Rights (ECHR) is a condition for EU membership.[t] Previously, the EU itself could not accede to the Convention, as it is neither a state[u] nor had the competence to accede.[v] The Lisbon Treaty and Protocol 14 to the ECHR have changed this: the former binds the EU to accede to the Convention, while the latter formally permits it.

The EU is independent from the Council of Europe, although they share purpose and ideas, especially on the rule of law, human rights and democracy. Furthermore, the European Convention on Human Rights and the European Social Charter, which serve as sources of law for the Charter of Fundamental Rights, were created by the Council of Europe. The EU has also promoted human rights issues in the wider world. The EU opposes the death penalty and has proposed its worldwide abolition. Abolition of the death penalty is a condition for EU membership.[172]

The main legal acts of the EU come in three forms: regulations, directives, and decisions. Regulations become law in all member states the moment they come into force, without the requirement for any implementing measures,[w] and automatically override conflicting domestic provisions.[q] Directives require member states to achieve a certain result while leaving them discretion as to how to achieve it; the details of implementation are left to member states.[x] When the time limit for implementing directives passes, they may, under certain conditions, have direct effect in national law against member states.

Decisions offer an alternative to the two above modes of legislation. They are legal acts which apply only to specified individuals, companies or a particular member state. They are most often used in competition law, or in rulings on state aid, but are also frequently used for procedural or administrative matters within the institutions. Regulations, directives, and decisions are of equal legal value and apply without any formal hierarchy.[173]

The European Ombudsman was established by the Maastricht Treaty. The ombudsman is elected by the European Parliament for the length of the Parliament's term, and the position is renewable.[174] Any EU citizen or entity may appeal to the ombudsman to investigate an EU institution on the grounds of maladministration (administrative irregularities, unfairness, discrimination, abuse of power, failure to reply, refusal of information or unnecessary delay).[175] Emily O'Reilly has been the ombudsman since 2013.[176]

The borders inside the Schengen Area between Germany and Austria

Europol Headquarters in The Hague, Netherlands

Eurojust Headquarters in The Hague, Netherlands

Seat of Frontex in Warsaw, Poland

Since the creation of the EU in 1993, it has developed its competencies in the area of justice and home affairs; initially at an intergovernmental level and later by supranationalism. Accordingly, the Union has legislated in areas such as extradition,[177] family law,[178] asylum law,[179] and criminal justice.[180] Prohibitions against sexual and nationality discrimination are long established in the treaties.[y] In more recent years, these have been supplemented by powers to legislate against discrimination based on race, religion, disability, age, and sexual orientation.[z] By virtue of these powers, the EU has enacted legislation on sexual discrimination in the workplace, age discrimination, and racial discrimination.[aa]

The Union has also established agencies to co-ordinate police, prosecutorial and immigration controls across the member states: Europol for co-operation of police forces,[181] Eurojust for co-operation between prosecutors,[182] and Frontex for co-operation between border control authorities.[183] The EU also operates the Schengen Information System,[16] which provides a common database for police and immigration authorities. This co-operation particularly had to be developed with the advent of open borders through the Schengen Agreement and the associated cross-border crime.

Foreign policy co-operation between member states dates from the establishment of the Community in 1957, when member states negotiated as a bloc in international trade negotiations under the EU's common commercial policy.[184] Steps for a more wide-ranging co-ordination in foreign relations began in 1970 with the establishment of European Political Cooperation, which created an informal consultation process between member states with the aim of forming common foreign policies. In 1987, European Political Cooperation was introduced on a formal basis by the Single European Act. EPC was renamed the Common Foreign and Security Policy (CFSP) by the Maastricht Treaty.[185]

The aims of the CFSP are to promote both the EU's own interests and those of the international community as a whole, including the furtherance of international co-operation, respect for human rights, democracy, and the rule of law.[186] The CFSP requires unanimity among the member states on the appropriate policy to follow on any particular issue. The unanimity requirement and the difficult issues treated under the CFSP sometimes lead to disagreements, such as those which occurred over the war in Iraq.[187]

The coordinator and representative of the CFSP within the EU is the High Representative of the Union for Foreign Affairs and Security Policy, who speaks on behalf of the EU in foreign policy and defence matters, and has the task of articulating the positions expressed by the member states on these fields of policy into a common alignment. The High Representative heads up the European External Action Service (EEAS), a unique EU department[188] that has been officially implemented and operational since 1 December 2010, on the occasion of the first anniversary of the entry into force of the Treaty of Lisbon.[189] The EEAS serves as a foreign ministry and diplomatic corps for the European Union.[190]

Besides the emerging international policy of the European Union, the international influence of the EU is also felt through enlargement. The perceived benefits of becoming a member of the EU act as an incentive for both political and economic reform in states wishing to fulfil the EU's accession criteria, and are considered an important factor contributing to the reform of European formerly communist countries.[191]:762 This influence on the internal affairs of other countries is generally referred to as "soft power", as opposed to military "hard power".[192]

The predecessors of the European Union were not devised as a military alliance because NATO was largely seen as appropriate and sufficient for defence purposes.[193] 21 EU members are members of NATO,[194] while the remaining member states follow policies of neutrality.[195] The Western European Union, a military alliance with a mutual defence clause, was disbanded in 2010 as its role had been transferred to the EU.[196]

France is the only member officially recognised as a nuclear weapon state holding a permanent seat on the United Nations Security Council. Most EU member states opposed the Nuclear Weapon Ban Treaty.[197]

Following the Kosovo War in 1999, the European Council agreed that "the Union must have the capacity for autonomous action, backed by credible military forces, the means to decide to use them, and the readiness to do so, in order to respond to international crises without prejudice to actions by NATO". To that end, a number of efforts were made to increase the EU's military capability, notably the Helsinki Headline Goal process. After much discussion, the most concrete result was the EU Battlegroups initiative, each of which is planned to be able to deploy quickly about 1,500 personnel.[198]

EU forces have been deployed on peacekeeping missions from middle and northern Africa to the western Balkans and western Asia.[199] EU military operations are supported by a number of bodies, including the European Defence Agency, the European Union Satellite Centre and the European Union Military Staff.[200] Frontex is an agency of the EU established to manage the co-operation between national border guards securing its external borders. It aims to detect and stop illegal immigration, human trafficking and terrorist infiltration. In 2015, the European Commission presented its proposal for a new European Border and Coast Guard Agency with a stronger role and mandate, working along with national authorities on border management. In an EU consisting of 27 members, substantial security and defence co-operation increasingly relies on collaboration among all member states.[201]

The European Commission's Humanitarian Aid and Civil Protection department, or "ECHO", provides humanitarian aid from the EU to developing countries. In 2012, its budget amounted to €874 million; 51% of the budget went to Africa, 20% to Asia, Latin America, the Caribbean and the Pacific, and 20% to the Middle East and Mediterranean.[202]

Humanitarian aid is financed directly by the budget (70%) as part of the financial instruments for external action and also by the European Development Fund (30%).[203] The EU's external action financing is divided into 'geographic' instruments and 'thematic' instruments.[203] The 'geographic' instruments provide aid through the Development Cooperation Instrument (DCI, €16.9 billion, 2007–2013), which must spend 95% of its budget on official development assistance (ODA), and through the European Neighbourhood and Partnership Instrument (ENPI), which contains some relevant programmes.[203] The European Development Fund (EDF, €22.7 billion for the period 2008–2013 and €30.5 billion for the period 2014–2020) is made up of voluntary contributions by member states, but there is pressure to merge the EDF into the budget-financed instruments to encourage increased contributions to match the 0.7% target and to allow the European Parliament greater oversight.[203][204]

In 2016, the average among EU countries was 0.4%, and five had met or exceeded the 0.7% target: Denmark, Germany, Luxembourg, Sweden and the United Kingdom.[205] If considered collectively, EU member states are the largest contributor of foreign aid in the world.[206][207]

The EU uses foreign relations instruments like the European Neighbourhood Policy, which seeks to tie the countries to the east and south of the European territory of the EU to the Union. These countries, primarily developing countries, include some which seek to one day become either a member state of the European Union or more closely integrated with it. The EU offers financial assistance to countries within the European Neighbourhood so long as they meet the strict conditions of government reform, economic reform and other issues surrounding positive transformation. This process is normally underpinned by an Action Plan, as agreed by both Brussels and the target country.

International recognition of sustainable development as a key element is growing steadily. Its role was recognised in three major UN summits on sustainable development: the 1992 UN Conference on Environment and Development (UNCED) in Rio de Janeiro, Brazil; the 2002 World Summit on Sustainable Development (WSSD) in Johannesburg, South Africa; and the 2012 UN Conference on Sustainable Development (UNCSD) in Rio de Janeiro. Other key global agreements are the Paris Agreement and the 2030 Agenda for Sustainable Development (United Nations, 2015). The SDGs recognise that all countries must stimulate action in the following key areas: people, planet, prosperity, peace and partnership, in order to tackle the global challenges that are crucial for the survival of humanity.

EU development action is based on the European Consensus on Development, which was endorsed on 20 December 2005 by EU member states, the Council, the European Parliament and the Commission.[208] It applies the principles of the capability approach and the rights-based approach to development.

Partnership and cooperation agreements are bilateral agreements with non-member nations.[209]

The European Union is the largest exporter in the world[213] and as of 2008 the largest importer of goods and services.[214][215] Internal trade between the member states is aided by the removal of barriers to trade such as tariffs and border controls. In the eurozone, trade is helped by not having any currency differences to deal with amongst most members.[216]

The European Union Association Agreement does something similar for a much larger range of countries, partly as a so-called soft approach ('a carrot instead of a stick') to influence the politics in those countries. The European Union represents all its members at the World Trade Organization (WTO), and acts on behalf of member states in any disputes. When the EU negotiates trade-related agreements outside the WTO framework, the subsequent agreement must be approved by each individual EU member state government.[216]

The European Union has concluded free trade agreements (FTAs)[217] and other agreements with a trade component with many countries worldwide and is negotiating with many others.[218]

As a political entity, the European Union is represented in the World Trade Organization (WTO). EU member states own the estimated second-largest net wealth in the world after the United States (US$105 trillion), equal to around 20% (~€60 trillion) of the US$360 trillion (~€300 trillion)[219] in global wealth.[220]

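A back-of-the-envelope check of how these figures relate, assuming an exchange rate of roughly US$1.2 per euro (an assumption supplied here, not stated above):

\[
0.20 \times \mathrm{US}\$360\ \text{trillion} = \mathrm{US}\$72\ \text{trillion} \approx €60\ \text{trillion}
\]
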
19 member states have joined a monetary union known as the eurozone, which uses the euro as a single currency. The currency union represents 342 million EU citizens.[221] The euro is the second-largest reserve currency as well as the second most traded currency in the world after the United States dollar.[222][223][224]

Of the top 500 largest corporations in the world measured by revenue in 2010, 161 had their headquarters in the EU.[225] In 2016, unemployment in the EU stood at 8.9%,[226] while inflation was at 2.2% and the current account balance at −0.9% of GDP. The average annual net earnings in the European Union were around €24,000 (US$30,000)[227] in 2015.

There is significant variation in nominal GDP per capita within individual EU states. The difference between the richest and poorest regions (281 NUTS-2 regions of the Nomenclature of Territorial Units for Statistics) ranged, in 2017, from 31% (Severozapaden, Bulgaria) of the EU28 average (€30,000) to 253% (Luxembourg), or from €4,600 to €92,600.[228]

Two of the original core objectives of the European Economic Community were the development of a common market, subsequently becoming a single market, and a customs union between its member states. The single market involves the free circulation of goods, capital, people, and services within the EU,[221] and the customs union involves the application of a common external tariff on all goods entering the market. Once goods have been admitted into the market, they cannot be subjected to customs duties, discriminatory taxes or import quotas as they travel internally. The non-EU member states of Iceland, Norway, Liechtenstein and Switzerland participate in the single market but not in the customs union.[119] Half the trade in the EU is covered by legislation harmonised by the EU.[229]

Free movement of capital is intended to permit movement of investments such as property purchases and buying of shares between countries.[230] Until the drive towards economic and monetary union, the development of the capital provisions had been slow. Post-Maastricht, there has been a rapidly developing corpus of ECJ judgments regarding this initially neglected freedom. The free movement of capital is unique insofar as it is granted equally to non-member states.

The free movement of persons means that EU citizens can move freely between member states to live, work, study or retire in another country. This required the lowering of administrative formalities and the recognition of professional qualifications of other states.[231]

The free movement of services and of establishment allows self-employed persons to move between member states to provide services on a temporary or permanent basis. While services account for 60–70% of GDP, legislation in the area is not as developed as in other areas. This lacuna has been addressed by the recently passed Directive on services in the internal market, which aims to liberalise the cross-border provision of services.[232] According to the Treaty, the provision of services is a residual freedom that only applies if no other freedom is being exercised.

The creation of a European single currency became an official objective of the European Economic Community in 1969. In 1992, having negotiated the structure and procedures of a currency union, the member states signed the Maastricht Treaty and were legally bound to fulfil the agreed-on rules, including the convergence criteria, if they wanted to join the monetary union. The states wanting to participate had first to join the European Exchange Rate Mechanism.

In 1999, the currency union started, first as an accounting currency, with eleven member states joining. In 2002, the currency was fully put into place, when euro notes and coins were issued and national currencies began to phase out in the eurozone, which by then consisted of 12 member states. The eurozone (constituted by the EU member states which have adopted the euro) has since grown to 19 countries.[233][ab]

The euro, and the monetary policies of those who have adopted it in agreement with the EU, are under the control of the European Central Bank (ECB).[234] The ECB is the central bank for the eurozone, and thus controls monetary policy in that area with an agenda to maintain price stability. It is at the centre of the European System of Central Banks, which comprises all EU national central banks and is controlled by its General Council, consisting of the President of the ECB, who is appointed by the European Council, the Vice-President of the ECB, and the governors of the national central banks of all 27 EU member states.[235]

The European System of Financial Supervision is an institutional architecture of the EU's framework of financial supervision composed of three authorities: the European Banking Authority, the European Insurance and Occupational Pensions Authority and the European Securities and Markets Authority. To complement this framework, there is also a European Systemic Risk Board under the responsibility of the ECB. The aim of this financial control system is to ensure the economic stability of the EU.[236]

To prevent the joining states from getting into financial trouble or crisis after entering the monetary union, they were obliged in the Maastricht Treaty to fulfil important financial obligations and procedures, especially to show budgetary discipline and a high degree of sustainable economic convergence, as well as to avoid excessive government deficits and to limit government debt to a sustainable level.

The European Commission's working sectors are: aeronautics, automotive, biotechnology, chemicals, construction, cosmetics, defence, electronics, firearms, food and drink, gambling, healthcare, maritime, mechanics, medical, postal, raw materials, space, textiles, tourism, toys and the social economy (Societas cooperativa Europaea).

In 2006, the EU-27 had a gross inland energy consumption of 1,825 million tonnes of oil equivalent (toe).[237] Around 46% of the energy consumed was produced within the member states, while 54% was imported.[237] In these statistics, nuclear energy is treated as primary energy produced in the EU, regardless of the source of the uranium, of which less than 3% is produced in the EU.[238]

The EU has had legislative power in the area of energy policy for most of its existence; this has its roots in the original European Coal and Steel Community. The introduction of a mandatory and comprehensive European energy policy was approved at the meeting of the European Council in October 2005, and the first draft policy was published in January 2007.[239]

The EU has five key points in its energy policy: increase competition in the internal market, encourage investment and boost interconnections between electricity grids; diversify energy resources with better systems to respond to a crisis; establish a new treaty framework for energy co-operation with Russia while improving relations with energy-rich states in Central Asia[240] and North Africa; use existing energy supplies more efficiently while increasing renewable energy commercialisation; and finally increase funding for new energy technologies.[239]

In 2007, EU countries as a whole imported 82% of their oil, 57% of their natural gas[241] and 97.48% of their uranium[238] requirements. There is a strong dependence on Russian energy that the EU has been attempting to reduce.[242]

The EU is working to improve cross-border infrastructure within the EU, for example through the Trans-European Networks (TEN). Projects under TEN include the Channel Tunnel, LGV Est, the Fréjus Rail Tunnel, the Öresund Bridge, the Brenner Base Tunnel and the Strait of Messina Bridge. In 2010, the estimated network covered: 75,200 kilometres (46,700 mi) of roads; 78,000 kilometres (48,000 mi) of railways; 330 airports; 270 maritime harbours; and 210 internal harbours.[243][244]

Rail transport in Europe is being synchronised with the European Rail Traffic Management System (ERTMS), an initiative to greatly enhance safety, increase the efficiency of trains and enhance the cross-border interoperability of rail transport in Europe by replacing signalling equipment with digitised, mostly wireless versions and by creating a single Europe-wide standard for train control and command systems.

The developing European transport policies will increase pressure on the environment in many regions through the expanded transport network. In the pre-2004 EU members, the major transport problems are congestion and pollution. After the recent enlargement, the states that have joined since 2004 added the problem of accessibility to the transport agenda.[245] The Polish road network was upgraded, such as the A4 autostrada.[246]

The Galileo positioning system is another EU infrastructure project. Galileo is a proposed satellite navigation system, to be built by the EU and launched by the European Space Agency (ESA). The Galileo project was launched partly to reduce the EU's dependency on the US-operated Global Positioning System, but also to give more complete global coverage and allow for greater accuracy, given the aged nature of the GPS system.[247]

The Common Agricultural Policy (CAP) is one of the longest-standing policies of the European Community.[248] The policy has the objectives of increasing agricultural production, providing certainty in food supplies, ensuring a high quality of life for farmers, stabilising markets, and ensuring reasonable prices for consumers.[ad] It was, until recently, operated by a system of subsidies and market intervention. Until the 1990s, the policy accounted for over 60% of the then European Community's annual budget, and as of 2013[update] accounts for around 34%.[249]

The policy's price controls and market interventions led to considerable overproduction, resulting in intervention stores of products bought up by the Community to maintain minimum price levels. To dispose of surplus stores, they were often sold on the world market at prices considerably below Community guaranteed prices, or farmers were offered subsidies (amounting to the difference between the Community and world prices) to export their products outside the Community. This system has been criticised for under-cutting farmers outside Europe, especially those in the developing world.[250] Supporters of CAP argue that the economic support which it gives to farmers provides them with a reasonable standard of living.[250]

Since the beginning of the 1990s, the CAP has been subject to a series of reforms. Initially, these reforms included the introduction of set-aside in 1988, where a proportion of farm land was deliberately withdrawn from production; milk quotas; and, more recently, the 'de-coupling' (or disassociation) of the money farmers receive from the EU from the amount they produce (by the Fischler reforms in 2004). Agriculture expenditure will move away from subsidy payments linked to specific produce, toward direct payments based on farm size. This is intended to allow the market to dictate production levels.[248] One of these reforms entailed the modification of the EU's sugar regime, which previously divided the sugar market between member states and certain African-Caribbean nations with a privileged relationship with the EU.[251]

The EU operates a competition policy intended to ensure undistorted competition within the single market.[ae]

The Competition Commissioner, currently Margrethe Vestager, holds one of the most powerful positions in the Commission, notable for the ability to affect the commercial interests of trans-national corporations.[citation needed] For example, in 2001 the Commission for the first time prevented a merger between two companies based in the United States (GE and Honeywell) which had already been approved by their national authority.[252] Another high-profile case, against Microsoft, resulted in the Commission fining Microsoft over €777 million following nine years of legal action.[253]

The EU's seasonally adjusted unemployment rate was 6.7% in September 2018.[254] The euro area unemployment rate was 8.1%.[254] Among the member states, the lowest unemployment rates were recorded in the Czech Republic (2.3%), Germany and Poland (both 3.4%), and the highest in Spain (14.9%) and Greece (19.0% in July 2018).[254]

The EU has long sought to mitigate the effects of free markets by protecting workers' rights and preventing social and environmental dumping. To this end, it has adopted laws establishing minimum employment and environmental standards. These included the Working Time Directive and the Environmental Impact Assessment Directive. The EU has also sought to coordinate the social security and health systems of member states to facilitate individuals exercising free movement rights and to ensure they maintain their ability to access social security and health services in other member states.

The European Social Charter is the main document that recognises the social rights of European citizens.

+ A European unemployment insurance has been proposed among others by the commissioner of Jobs Nicolas Schmit.[255]
290
+
291
+ Since 2019 there is a European Commissioner for Equality; a European Institute for Gender Equality has existed since 2007.
+
+ Housing, youth, childhood, functional diversity, and elderly care are supporting competences of the European Union and can be financed by the European Social Fund.
+
+ Structural Funds and Cohesion Funds support the development of underdeveloped regions of the EU. Such regions are primarily located in the states of central and southern Europe.[256][257] Several funds provide emergency aid, support for candidate members to transform their countries to conform to the EU's standards (Phare, ISPA, and SAPARD), and support to the Commonwealth of Independent States (TACIS). TACIS has now become part of the worldwide EuropeAid programme.
+
+ The demographic transition to an ageing society, with low fertility rates and the depopulation of non-metropolitan regions, is tackled within these policies.
+
+ In 1957, when the EEC was founded, it had no environmental policy.[258] Over the past 50 years, an increasingly dense network of legislation has been created, extending to all areas of environmental protection, including air pollution, water quality, waste management, nature conservation, and the control of chemicals, industrial hazards, and biotechnology.[259] According to the Institute for European Environmental Policy, environmental law comprises over 500 Directives, Regulations and Decisions, making environmental policy a core area of European politics.[260]
+
+ European policy-makers originally increased the EU's capacity to act on environmental issues by defining it as a trade problem.[261]
+ Trade barriers and competitive distortions in the Common Market could emerge due to the different environmental standards in each member state.[262] In subsequent years, the environment became a formal policy area, with its own policy actors, principles and procedures. The legal basis for EU environmental policy was established with the introduction of the Single European Act in 1987.[260]
+
+ Initially, EU environmental policy focused on Europe. More recently, the EU has demonstrated leadership in global environmental governance, e.g. the role of the EU in securing the ratification and coming into force of the Kyoto Protocol despite opposition from the United States. This international dimension is reflected in the EU's Sixth Environmental Action Programme,[263] which recognises that its objectives can only be achieved if key international agreements are actively supported and properly implemented both at EU level and worldwide. The Lisbon Treaty further strengthened the leadership ambitions.[264] EU law has played a significant role in improving habitat and species protection in Europe, as well as contributing to improvements in air and water quality and waste management.[260]
+
+ Mitigating climate change is one of the top priorities of EU environmental policy. In 2007, member states agreed that, in the future, 20% of the energy used across the EU must be renewable, and that carbon dioxide emissions in 2020 must be at least 20% lower than 1990 levels.[265] The EU has adopted an emissions trading system to incorporate carbon emissions into the economy.[266] The European Green Capital is an annual award given to cities that focus on the environment, energy efficiency, and quality of life in urban areas to create smart cities.
+
+ In the 2019 elections to the European Parliament, the green parties increased their power, possibly because of the rise of post-materialist values.[267]
+
+ Proposals to reach a zero-carbon economy in the European Union by 2050 were suggested in 2018–2019. Almost all member states supported that goal at an EU summit in June 2019; the Czech Republic, Estonia, Hungary, and Poland disagreed.[268]
+
+ Basic education is an area where the EU's role is limited to supporting national governments. In higher education, the policy was developed in the 1980s in programmes supporting exchanges and mobility. The most visible of these has been the Erasmus Programme, a university exchange programme which began in 1987. In its first 20 years, it supported international exchange opportunities for well over 1.5 million university and college students and became a symbol of European student life.[269]
+
+ There are similar programmes for school pupils and teachers, for trainees in vocational education and training, and for adult learners in the Lifelong Learning Programme 2007–2013. These programmes are designed to encourage a wider knowledge of other countries and to spread good practices in the education and training fields across the EU.[270][271] Through its support of the Bologna Process, the EU is supporting comparable standards and compatible degrees across Europe.
+
+ Scientific development is facilitated through the EU's Framework Programmes, the first of which started in 1984. The aims of EU policy in this area are to co-ordinate and stimulate research. The independent European Research Council allocates EU funds to European or national research projects.[272] EU research and technological framework programmes deal with a number of areas, for example energy, where the aim is to develop a diverse mix of renewable energy to help the environment and to reduce dependence on imported fuels.[273]
+
+ The EU has no major competences in the field of health care, and Article 35 of the Charter of Fundamental Rights of the European Union affirms that "A high level of human health protection shall be ensured in the definition and implementation of all Union policies and activities". The European Commission's Directorate-General for Health and Consumers seeks to align national laws on the protection of people's health, on consumers' rights, and on the safety of food and other products.[274][275][276]
+
+ All EU and many other European countries offer their citizens a free European Health Insurance Card which, on a reciprocal basis, provides insurance for emergency medical treatment when visiting other participating European countries.[277] A directive on cross-border healthcare aims at promoting co-operation on health care between member states and facilitating access to safe and high-quality cross-border healthcare for European patients.[278][279][280]
+
+ The EU has some of the highest levels of life expectancy in the world, with Spain, Italy, Sweden, France, Malta, Ireland, Netherlands, Luxembourg, and Greece all among the world's top 20 countries with the highest life expectancy.[281] In general, life expectancy is lower in Eastern Europe than in Western Europe.[282] In 2018, the EU region with the highest life expectancy was Madrid, Spain, at 85.2 years, followed by the Spanish regions of La Rioja and Castilla y León, both at 84.3 years, Trentino in Italy at 84.3 years, and Île-de-France in France at 84.2 years. The overall life expectancy in the EU in 2018 was 81.0 years, higher than the world average of 72.6 years.[283]
+
+ Cultural co-operation between member states has been an interest of the EU since its inclusion as a community competency in the Maastricht Treaty.[284] Actions taken in the cultural area by the EU include the Culture 2000 seven-year programme,[284] the European Cultural Month event,[285] and orchestras such as the European Union Youth Orchestra.[286] The European Capital of Culture programme selects one or more cities every year to assist their cultural development.[287]
+
+ Association football is by far the most popular sport in the European Union by the number of registered players. The other sports with the most participants in clubs are tennis, basketball, swimming, athletics, golf, gymnastics, equestrian sports, handball, volleyball and sailing.[288]
+
+ Sport is mainly the responsibility of the member states or other international organisations, rather than of the EU. There are some EU policies that have affected sport, such as the free movement of workers, which was at the core of the Bosman ruling that prohibited national football leagues from imposing quotas on foreign players with European citizenship.[289]
+
+ The Treaty of Lisbon requires any application of economic rules to take into account the specific nature of sport and its structures based on voluntary activity.[290] This followed lobbying by governing organisations such as the International Olympic Committee and FIFA, due to objections over the application of free market principles to sport, which led to an increasing gap between rich and poor clubs.[291] The EU does fund a programme for Israeli, Jordanian, Irish, and British football coaches, as part of the Football 4 Peace project.[292]
+
+ The flag used is the Flag of Europe, which consists of a circle of 12 golden stars on a blue background. Originally designed in 1955 for the Council of Europe, the flag was adopted by the European Communities, the predecessors of the present Union, in 1986. The Council of Europe gave the flag a symbolic description in the following terms,[293] though the official symbolic description adopted by the EU omits the reference to the "Western world":[294]
+
+ Against the blue sky of the Western world, the stars symbolise the peoples of Europe in a form of a circle, the sign of union. The number of stars is invariably twelve, the figure twelve being the symbol of perfection and entirety.
+
+ United in Diversity was adopted as the motto of the Union in the year 2000, having been selected from proposals submitted by school pupils.[295] Since 1985, the flag day of the Union has been Europe Day, on 9 May (the date of the 1950 Schuman declaration). The anthem of the Union is an instrumental version of the prelude to the Ode to Joy, the 4th movement of Ludwig van Beethoven's ninth symphony. The anthem was adopted by European Community leaders in 1985 and has since been played on official occasions.[296]
+ Besides naming the continent, the Greek mythological figure of Europa has frequently been employed as a personification of Europe. Known from the myth in which Zeus seduces her in the guise of a white bull, Europa has also been referred to in relation to the present Union. Statues of Europa and the bull decorate several of the Union's institutions and a portrait of her is seen on the 2013 series of Euro banknotes. The bull is, for its part, depicted on all residence permit cards.[297]
+
+ Charles the Great, also known as Charlemagne (Latin: Carolus Magnus) and later recognised as Pater Europae ("Father of Europe"),[298][299][300] has a symbolic relevance to Europe. The Commission has named one of its central buildings in Brussels after Charlemagne and the city of Aachen has since 1949 awarded the Charlemagne Prize to champions of European unification.[301] Since 2008, the organisers of this prize, in conjunction with the European Parliament, have awarded the Charlemagne Youth Prize in recognition of similar efforts by young people.[302]
+
+ Media freedom is a fundamental right that applies to all member states of the European Union and its citizens, as defined in the EU Charter of Fundamental Rights as well as the European Convention on Human Rights.[303]:1 Within the EU enlargement process, guaranteeing media freedom is named a "key indicator of a country's readiness to become part of the EU".[304]
+
+ The majority of media in the European Union are nationally oriented. Some EU-wide media focusing on European affairs have emerged since the early 1990s, such as Euronews, EUobserver, EURACTIV or Politico Europe.[305][306] ARTE is a public Franco-German TV network that promotes programming in the areas of culture and the arts. 80% of its programming is provided in equal proportion by the two member companies, while the remainder is provided by the European Economic Interest Grouping ARTE GEIE and the channel's European partners.[307]
+
+ The MEDIA Programme of the European Union has supported the European popular film and audiovisual industries since 1991. It provides support for the development, promotion and distribution of European works within Europe and beyond.[308]
+
+ The European Union has had a significant positive economic impact on most member states.[309] According to a 2019 study of the member states who joined from 1973 to 2004, "without European integration, per capita incomes would have been, on average, approximately 10% lower in the first ten years after joining the EU."[309] Greece was the exception reported by the study, which analysed up to 2008, "to avoid confounding effects from the global financial crisis".[309]
+
+ The European Union has contributed to peace in Europe, in particular by pacifying border disputes.[310][311]
+
+ The European Union has contributed to the spread of democracy, in particular by encouraging democratic reforms in aspiring Eastern European member states after the collapse of the USSR.[312][313] Thomas Risse wrote in 2009, "there is a consensus in the literature on Eastern Europe that the EU membership perspective had a huge anchoring effects for the new democracies."[313] However, R. Daniel Kelemen argues that over time, the EU has proved beneficial to leaders who are overseeing democratic backsliding, as the EU is reluctant to intervene in domestic politics, gives the authoritarians funds which they can use to strengthen their regimes, and because freedom of movement within the EU allows dissenting citizens to leave their backsliding countries.[314]
en/5861.html.txt ADDED
@@ -0,0 +1,216 @@
+
+
+
+
+ India, officially the Republic of India (Hindi: Bhārat Gaṇarājya),[23] is a country in South Asia. It is the second-most populous country, the seventh-largest country by area, and the most populous democracy in the world. Bounded by the Indian Ocean on the south, the Arabian Sea on the southwest, and the Bay of Bengal on the southeast, it shares land borders with Pakistan to the west;[f] China, Nepal, and Bhutan to the north; and Bangladesh and Myanmar to the east. In the Indian Ocean, India is in the vicinity of Sri Lanka and the Maldives; its Andaman and Nicobar Islands share a maritime border with Thailand and Indonesia.
+
+ Modern humans arrived on the Indian subcontinent from Africa no later than 55,000 years ago.[24]
+ Their long occupation, initially in varying forms of isolation as hunter-gatherers, has made the region highly diverse, second only to Africa in human genetic diversity.[25] Settled life emerged on the subcontinent in the western margins of the Indus river basin 9,000 years ago, evolving gradually into the Indus Valley Civilisation of the third millennium BCE.[26]
+ By 1200 BCE, an archaic form of Sanskrit, an Indo-European language, had diffused into India from the northwest, unfolding as the language of the Rigveda, and recording the dawning of Hinduism in India.[27]
+ The Dravidian languages of India were supplanted in the northern regions.[28]
+ By 400 BCE, stratification and exclusion by caste had emerged within Hinduism,[29]
+ and Buddhism and Jainism had arisen, proclaiming social orders unlinked to heredity.[30]
+ Early political consolidations gave rise to the loose-knit Maurya and Gupta Empires based in the Ganges Basin.[31]
+ Their collective era was suffused with wide-ranging creativity,[32] but also marked by the declining status of women,[33] and the incorporation of untouchability into an organised system of belief.[g][34] In South India, the Middle kingdoms exported Dravidian-language scripts and religious cultures to the kingdoms of Southeast Asia.[35]
+
+ In the early medieval era, Christianity, Islam, Judaism, and Zoroastrianism put down roots on India's southern and western coasts.[36]
+ Muslim armies from Central Asia intermittently overran India's northern plains,[37]
+ eventually establishing the Delhi Sultanate, and drawing northern India into the cosmopolitan networks of medieval Islam.[38]
+ In the 15th century, the Vijayanagara Empire created a long-lasting composite Hindu culture in south India.[39]
+ In the Punjab, Sikhism emerged, rejecting institutionalised religion.[40]
+ The Mughal Empire, in 1526, ushered in two centuries of relative peace,[41]
+ leaving a legacy of luminous architecture.[h][42]
+ Gradually expanding rule of the British East India Company followed, turning India into a colonial economy, but also consolidating its sovereignty.[43] British Crown rule began in 1858. The rights promised to Indians were granted slowly,[44] but technological changes were introduced, and ideas of education, modernity and the public life took root.[45]
+ A pioneering and influential nationalist movement emerged, which was noted for nonviolent resistance and became the major factor in ending British rule.[46] In 1947 the British Indian Empire was partitioned into two independent dominions, a Hindu-majority Dominion of India and a Muslim-majority Dominion of Pakistan, amid large-scale loss of life and an unprecedented migration.[47][48]
+
+ India has been a secular federal republic since 1950, governed in a democratic parliamentary system. It is a pluralistic, multilingual and multi-ethnic society. India's population grew from 361 million in 1951 to 1,211 million in 2011.[49]
+ During the same time, its nominal per capita income increased from US$64 annually to US$1,498, and its literacy rate from 16.6% to 74%. From being a comparatively destitute country in 1951,[50]
+ India has become a fast-growing major economy, a hub for information technology services, with an expanding middle class.[51] It has a space programme which includes several planned or completed extraterrestrial missions. Indian movies, music, and spiritual teachings play an increasing role in global culture.[52]
+ India has substantially reduced its rate of poverty, though at the cost of increasing economic inequality.[53]
+ India is a nuclear weapons state, which ranks high in military expenditure. It has disputes over Kashmir with its neighbours, Pakistan and China, unresolved since the mid-20th century.[54]
+ Among the socio-economic challenges India faces are gender inequality, child malnutrition,[55]
+ and rising levels of air pollution.[56]
+ India's land is megadiverse, with four biodiversity hotspots.[57] Its forest cover comprises 21.4% of its area.[58] India's wildlife, which has traditionally been viewed with tolerance in India's culture,[59] is supported among these forests, and elsewhere, in protected habitats.
+
+ According to the Oxford English Dictionary (Third Edition 2009), the name "India" is derived from the Classical Latin India, a reference to South Asia and an uncertain region to its east; and in turn derived successively from: Hellenistic Greek India (Ἰνδία); ancient Greek Indos (Ἰνδός); Old Persian Hindush, an eastern province of the Achaemenid empire; and ultimately its cognate, the Sanskrit Sindhu, or "river," specifically the Indus river and, by implication, its well-settled southern basin.[60][61] The ancient Greeks referred to the Indians as Indoi (Ἰνδοί), which translates as "The people of the Indus".[62]
+
+ The term Bharat (Bhārat; pronounced [ˈbʱaːɾət]), mentioned in both Indian epic poetry and the Constitution of India,[63][64] is used in its variations by many Indian languages. A modern rendering of the historical name Bharatavarsha, which applied originally to a region of the Gangetic Valley,[65][66] Bharat gained increased currency from the mid-19th century as a native name for India.[63][67]
+
+ Hindustan ([ɦɪndʊˈstaːn]) is a Middle Persian name for India, introduced during the Mughal Empire and used widely since. Its meaning has varied, referring to a region encompassing present-day northern India and Pakistan or to India in its near entirety.[63][67][68]
+
+ By 55,000 years ago, the first modern humans, or Homo sapiens, had arrived on the Indian subcontinent from Africa, where they had earlier evolved.[69][70][71]
+ The earliest known modern human remains in South Asia date to about 30,000 years ago.[72] After 6500 BCE, evidence for domestication of food crops and animals, construction of permanent structures, and storage of agricultural surplus appeared in Mehrgarh and other sites in what is now Balochistan.[73] These gradually developed into the Indus Valley Civilisation,[74][73] the first urban culture in South Asia,[75] which flourished during 2500–1900 BCE in what is now Pakistan and western India.[76] Centred around cities such as Mohenjo-daro, Harappa, Dholavira, and Kalibangan, and relying on varied forms of subsistence, the civilisation engaged robustly in crafts production and wide-ranging trade.[75]
+
+ During the period 2000–500 BCE, many regions of the subcontinent transitioned from the Chalcolithic cultures to the Iron Age ones.[77] The Vedas, the oldest scriptures associated with Hinduism,[78] were composed during this period,[79] and historians have analysed these to posit a Vedic culture in the Punjab region and the upper Gangetic Plain.[77] Most historians also consider this period to have encompassed several waves of Indo-Aryan migration into the subcontinent from the north-west.[78] The caste system, which created a hierarchy of priests, warriors, and free peasants, but which excluded indigenous peoples by labelling their occupations impure, arose during this period.[80] On the Deccan Plateau, archaeological evidence from this period suggests the existence of a chiefdom stage of political organisation.[77] In South India, a progression to sedentary life is indicated by the large number of megalithic monuments dating from this period,[81] as well as by nearby traces of agriculture, irrigation tanks, and craft traditions.[81]
+
+ In the late Vedic period, around the 6th century BCE, the small states and chiefdoms of the Ganges Plain and the north-western regions had consolidated into 16 major oligarchies and monarchies that were known as the mahajanapadas.[82][83] The emerging urbanisation gave rise to non-Vedic religious movements, two of which became independent religions. Jainism came into prominence during the life of its exemplar, Mahavira.[84] Buddhism, based on the teachings of Gautama Buddha, attracted followers from all social classes excepting the middle class; chronicling the life of the Buddha was central to the beginnings of recorded history in India.[85][86][87] In an age of increasing urban wealth, both religions held up renunciation as an ideal,[88] and both established long-lasting monastic traditions. Politically, by the 3rd century BCE, the kingdom of Magadha had annexed or reduced other states to emerge as the Mauryan Empire.[89] The empire was once thought to have controlled most of the subcontinent except the far south, but its core regions are now thought to have been separated by large autonomous areas.[90][91] The Mauryan kings are known as much for their empire-building and determined management of public life as for Ashoka's renunciation of militarism and far-flung advocacy of the Buddhist dhamma.[92][93]
+
+ The Sangam literature of the Tamil language reveals that, between 200 BCE and 200 CE, the southern peninsula was ruled by the Cheras, the Cholas, and the Pandyas, dynasties that traded extensively with the Roman Empire and with West and South-East Asia.[94][95] In North India, Hinduism asserted patriarchal control within the family, leading to increased subordination of women.[96][89] By the 4th and 5th centuries, the Gupta Empire had created a complex system of administration and taxation in the greater Ganges Plain; this system became a model for later Indian kingdoms.[97][98] Under the Guptas, a renewed Hinduism based on devotion, rather than the management of ritual, began to assert itself.[99] This renewal was reflected in a flowering of sculpture and architecture, which found patrons among an urban elite.[98] Classical Sanskrit literature flowered as well, and Indian science, astronomy, medicine, and mathematics made significant advances.[98]
+
+ The Indian early medieval age, 600 CE to 1200 CE, is defined by regional kingdoms and cultural diversity.[100] When Harsha of Kannauj, who ruled much of the Indo-Gangetic Plain from 606 to 647 CE, attempted to expand southwards, he was defeated by the Chalukya ruler of the Deccan.[101] When his successor attempted to expand eastwards, he was defeated by the Pala king of Bengal.[101] When the Chalukyas attempted to expand southwards, they were defeated by the Pallavas from farther south, who in turn were opposed by the Pandyas and the Cholas from still farther south.[101] No ruler of this period was able to create an empire and consistently control lands much beyond his core region.[100] During this time, pastoral peoples, whose land had been cleared to make way for the growing agricultural economy, were accommodated within caste society, as were new non-traditional ruling classes.[102] The caste system consequently began to show regional differences.[102]
+
+ In the 6th and 7th centuries, the first devotional hymns were created in the Tamil language.[103] They were imitated all over India and led to both the resurgence of Hinduism and the development of all modern languages of the subcontinent.[103] Indian royalty, big and small, and the temples they patronised drew citizens in great numbers to the capital cities, which became economic hubs as well.[104] Temple towns of various sizes began to appear everywhere as India underwent another urbanisation.[104] By the 8th and 9th centuries, the effects were felt in South-East Asia, as South Indian culture and political systems were exported to lands that became part of modern-day Myanmar, Thailand, Laos, Cambodia, Vietnam, Philippines, Malaysia, and Java.[105] Indian merchants, scholars, and sometimes armies were involved in this transmission; South-East Asians took the initiative as well, with many sojourning in Indian seminaries and translating Buddhist and Hindu texts into their languages.[105]
+
+ After the 10th century, Muslim Central Asian nomadic clans, using swift-horse cavalry and raising vast armies united by ethnicity and religion, repeatedly overran South Asia's north-western plains, leading eventually to the establishment of the Islamic Delhi Sultanate in 1206.[106] The sultanate was to control much of North India and to make many forays into South India. Although at first disruptive for the Indian elites, the sultanate largely left its vast non-Muslim subject population to its own laws and customs.[107][108] By repeatedly repulsing Mongol raiders in the 13th century, the sultanate saved India from the devastation visited on West and Central Asia, setting the scene for centuries of migration of fleeing soldiers, learned men, mystics, traders, artists, and artisans from that region into the subcontinent, thereby creating a syncretic Indo-Islamic culture in the north.[109][110] The sultanate's raiding and weakening of the regional kingdoms of South India paved the way for the indigenous Vijayanagara Empire.[111] Embracing a strong Shaivite tradition and building upon the military technology of the sultanate, the empire came to control much of peninsular India,[112] and was to influence South Indian society for long afterwards.[111]
+
+ In the early 16th century, northern India, then under mainly Muslim rulers,[113] fell again to the superior mobility and firepower of a new generation of Central Asian warriors.[114] The resulting Mughal Empire did not stamp out the local societies it came to rule. Instead, it balanced and pacified them through new administrative practices[115][116] and diverse and inclusive ruling elites,[117] leading to more systematic, centralised, and uniform rule.[118] Eschewing tribal bonds and Islamic identity, especially under Akbar, the Mughals united their far-flung realms through loyalty, expressed through a Persianised culture, to an emperor who had near-divine status.[117] The Mughal state's economic policies, deriving most revenues from agriculture[119] and mandating that taxes be paid in the well-regulated silver currency,[120] caused peasants and artisans to enter larger markets.[118] The relative peace maintained by the empire during much of the 17th century was a factor in India's economic expansion,[118] resulting in greater patronage of painting, literary forms, textiles, and architecture.[121] Newly coherent social groups in northern and western India, such as the Marathas, the Rajputs, and the Sikhs, gained military and governing ambitions during Mughal rule, which, through collaboration or adversity, gave them both recognition and military experience.[122] Expanding commerce during Mughal rule gave rise to new Indian commercial and political elites along the coasts of southern and eastern India.[122] As the empire disintegrated, many among these elites were able to seek and control their own affairs.[123]
+
+ By the early 18th century, with the lines between commercial and political dominance being increasingly blurred, a number of European trading companies, including the English East India Company, had established coastal outposts.[124][125] The East India Company's control of the seas, greater resources, and more advanced military training and technology led it to increasingly flex its military muscle and caused it to become attractive to a portion of the Indian elite; these factors were crucial in allowing the company to gain control over the Bengal region by 1765 and sideline the other European companies.[126][124][127][128] Its further access to the riches of Bengal and the subsequent increased strength and size of its army enabled it to annexe or subdue most of India by the 1820s.[129] India was then no longer exporting manufactured goods as it long had, but was instead supplying the British Empire with raw materials. Many historians consider this to be the onset of India's colonial period.[124] By this time, with its economic power severely curtailed by the British parliament and having effectively been made an arm of British administration, the company began more consciously to enter non-economic arenas like education, social reform, and culture.[130]
+
+ Historians consider India's modern age to have begun sometime between 1848 and 1885. The appointment in 1848 of Lord Dalhousie as Governor General of the East India Company set the stage for changes essential to a modern state. These included the consolidation and demarcation of sovereignty, the surveillance of the population, and the education of citizens. Technological changes—among them, railways, canals, and the telegraph—were introduced not long after their introduction in Europe.[131][132][133][134] However, disaffection with the company also grew during this time and set off the Indian Rebellion of 1857. Fed by diverse resentments and perceptions, including invasive British-style social reforms, harsh land taxes, and summary treatment of some rich landowners and princes, the rebellion rocked many regions of northern and central India and shook the foundations of Company rule.[135][136] Although the rebellion was suppressed by 1858, it led to the dissolution of the East India Company and the direct administration of India by the British government. Proclaiming a unitary state and a gradual but limited British-style parliamentary system, the new rulers also protected princes and landed gentry as a feudal safeguard against future unrest.[137][138] In the decades following, public life gradually emerged all over India, leading eventually to the founding of the Indian National Congress in 1885.[139][140][141][142]
+
+ The rush of technology and the commercialisation of agriculture in the second half of the 19th century were marked by economic setbacks, and many small farmers became dependent on the whims of far-away markets.[143] There was an increase in the number of large-scale famines,[144] and, despite the risks of infrastructure development borne by Indian taxpayers, little industrial employment was generated for Indians.[145] There were also salutary effects: commercial cropping, especially in the newly canalled Punjab, led to increased food production for internal consumption.[146] The railway network provided critical famine relief,[147] notably reduced the cost of moving goods,[147] and helped nascent Indian-owned industry.[146]
+
+ After World War I, in which approximately one million Indians served,[148] a new period began. It was marked by British reforms but also repressive legislation, by more strident Indian calls for self-rule, and by the beginnings of a nonviolent movement of non-co-operation, of which Mohandas Karamchand Gandhi would become the leader and enduring symbol.[149] During the 1930s, slow legislative reform was enacted by the British; the Indian National Congress won victories in the resulting elections.[150] The next decade was beset with crises: Indian participation in World War II, the Congress's final push for non-co-operation, and an upsurge of Muslim nationalism. All were capped by the advent of independence in 1947, but tempered by the partition of India into two states: India and Pakistan.[151]
+
+ Vital to India's self-image as an independent nation was its constitution, completed in 1950, which put in place a secular and democratic republic.[152] It has remained a democracy with civil liberties, an active Supreme Court, and a largely independent press.[153] Economic liberalisation, which began in the 1990s, has created a large urban middle class, transformed India into one of the world's fastest-growing economies,[154] and increased its geopolitical clout. Indian movies, music, and spiritual teachings play an increasing role in global culture.[153] Yet, India is also shaped by seemingly unyielding poverty, both rural and urban;[153] by religious and caste-related violence;[155] by Maoist-inspired Naxalite insurgencies;[156] and by separatism in Jammu and Kashmir and in Northeast India.[157] It has unresolved territorial disputes with China[158] and with Pakistan.[158] India's sustained democratic freedoms are unique among the world's newer nations; however, in spite of its recent economic successes, freedom from want for its disadvantaged population remains a goal yet to be achieved.[159]
+
+ India accounts for the bulk of the Indian subcontinent, lying atop the Indian tectonic plate, a part of the Indo-Australian Plate.[160] India's defining geological processes began 75 million years ago when the Indian Plate, then part of the southern supercontinent Gondwana, began a north-eastward drift caused by seafloor spreading to its south-west, and later, south and south-east.[160] Simultaneously, the vast Tethyan oceanic crust, to its northeast, began to subduct under the Eurasian Plate.[160] These dual processes, driven by convection in the Earth's mantle, both created the Indian Ocean and caused the Indian continental crust eventually to under-thrust Eurasia and to uplift the Himalayas.[160] Immediately south of the emerging Himalayas, plate movement created a vast trough that rapidly filled with river-borne sediment[161] and now constitutes the Indo-Gangetic Plain.[162] Cut off from the plain by the ancient Aravalli Range lies the Thar Desert.[163]
+
+ The original Indian Plate survives as peninsular India, the oldest and geologically most stable part of India. It extends as far north as the Satpura and Vindhya ranges in central India. These parallel chains run from the Arabian Sea coast in Gujarat in the west to the coal-rich Chota Nagpur Plateau in Jharkhand in the east.[164] To the south, the remaining peninsular landmass, the Deccan Plateau, is flanked on the west and east by coastal ranges known as the Western and Eastern Ghats;[165] the plateau contains the country's oldest rock formations, some over one billion years old. Constituted in such fashion, India lies to the north of the equator between 6° 44′ and 35° 30′ north latitude[i] and 68° 7′ and 97° 25′ east longitude.[166]
+
+ India's coastline measures 7,517 kilometres (4,700 mi) in length; of this distance, 5,423 kilometres (3,400 mi) belong to peninsular India and 2,094 kilometres (1,300 mi) to the Andaman, Nicobar, and Lakshadweep island chains.[167] According to the Indian naval hydrographic charts, the mainland coastline consists of the following: 43% sandy beaches; 11% rocky shores, including cliffs; and 46% mudflats or marshy shores.[167]
+
+ Major Himalayan-origin rivers that substantially flow through India include the Ganges and the Brahmaputra, both of which drain into the Bay of Bengal.[169] Important tributaries of the Ganges include the Yamuna and the Kosi; the latter's extremely low gradient, caused by long-term silt deposition, leads to severe floods and course changes.[170][171] Major peninsular rivers, whose steeper gradients prevent their waters from flooding, include the Godavari, the Mahanadi, the Kaveri, and the Krishna, which also drain into the Bay of Bengal;[172] and the Narmada and the Tapti, which drain into the Arabian Sea.[173] Coastal features include the marshy Rann of Kutch of western India and the alluvial Sundarbans delta of eastern India; the latter is shared with Bangladesh.[174] India has two archipelagos: the Lakshadweep, coral atolls off India's south-western coast; and the Andaman and Nicobar Islands, a volcanic chain in the Andaman Sea.[175]
+
+ The Indian climate is strongly influenced by the Himalayas and the Thar Desert, both of which drive the economically and culturally pivotal summer and winter monsoons.[176] The Himalayas prevent cold Central Asian katabatic winds from blowing in, keeping the bulk of the Indian subcontinent warmer than most locations at similar latitudes.[177][178] The Thar Desert plays a crucial role in attracting the moisture-laden south-west summer monsoon winds that, between June and October, provide the majority of India's rainfall.[176] Four major climatic groupings predominate in India: tropical wet, tropical dry, subtropical humid, and montane.[179]
+
+ India is a megadiverse country, a term employed for 17 countries which display high biological diversity and contain many species exclusively indigenous, or endemic, to them.[181] India is a habitat for 8.6% of all mammal species, 13.7% of bird species, 7.9% of reptile species, 6% of amphibian species, 12.2% of fish species, and 6.0% of all flowering plant species.[182][183] Fully a third of Indian plant species are endemic.[184] India also contains four of the world's 34 biodiversity hotspots,[57] or regions that display significant habitat loss in the presence of high endemism.[j][185]
+
+ India's forest cover is 701,673 km2 (270,917 sq mi), which is 21.35% of the country's total land area. It can be subdivided further into broad categories of canopy density, or the proportion of the area of a forest covered by its tree canopy.[186] Very dense forest, whose canopy density is greater than 70%, occupies 2.61% of India's land area.[186] It predominates in the tropical moist forest of the Andaman Islands, the Western Ghats, and Northeast India.[187] Moderately dense forest, whose canopy density is between 40% and 70%, occupies 9.59% of India's land area.[186] It predominates in the temperate coniferous forest of the Himalayas, the moist deciduous sal forest of eastern India, and the dry deciduous teak forest of central and southern India.[187] Open forest, whose canopy density is between 10% and 40%, occupies 9.14% of India's land area,[186] and predominates in the babul-dominated thorn forest of the central Deccan Plateau and the western Gangetic plain.[187]
+
+ Among the Indian subcontinent's notable indigenous trees are the astringent Azadirachta indica, or neem, which is widely used in rural Indian herbal medicine,[188] and the luxuriant Ficus religiosa, or peepul,[189] which is displayed on the ancient seals of Mohenjo-daro,[190] and under which the Buddha is recorded in the Pali canon to have sought enlightenment.[191]
+
+ Many Indian species have descended from those of Gondwana, the southern supercontinent from which India separated more than 100 million years ago.[192] India's subsequent collision with Eurasia set off a mass exchange of species. However, volcanism and climatic changes later caused the extinction of many endemic Indian forms.[193] Still later, mammals entered India from Asia through two zoogeographical passes flanking the Himalayas.[187] This had the effect of lowering endemism among India's mammals, which stands at 12.6%, contrasting with 45.8% among reptiles and 55.8% among amphibians.[183] Notable endemics are the vulnerable[194] hooded leaf monkey[195] and the threatened[196] Beddome's toad[196][197] of the Western Ghats.
+
+ India contains 172 IUCN-designated threatened animal species, or 2.9% of endangered forms.[198] These include the endangered Bengal tiger and the Ganges river dolphin. Critically endangered species include: the gharial, a crocodilian; the great Indian bustard; and the Indian white-rumped vulture, which has become nearly extinct by having ingested the carrion of diclofenac-treated cattle.[199] The pervasive and ecologically devastating human encroachment of recent decades has critically endangered Indian wildlife. In response, the system of national parks and protected areas, first established in 1935, was expanded substantially. In 1972, India enacted the Wildlife Protection Act[200] and Project Tiger to safeguard crucial wilderness; the Forest Conservation Act was enacted in 1980 and amendments added in 1988.[201] India hosts more than five hundred wildlife sanctuaries and thirteen biosphere reserves,[202] four of which are part of the World Network of Biosphere Reserves; twenty-five wetlands are registered under the Ramsar Convention.[203]
+
+ India is the world's most populous democracy.[205] A parliamentary republic with a multi-party system,[206] it has eight recognised national parties, including the Indian National Congress and the Bharatiya Janata Party (BJP), and more than 40 regional parties.[207] The Congress is considered centre-left in Indian political culture,[208] and the BJP right-wing.[209][210][211] For most of the period between 1950—when India first became a republic—and the late 1980s, the Congress held a majority in the parliament. Since then, however, it has increasingly shared the political stage with the BJP,[212] as well as with powerful regional parties which have often forced the creation of multi-party coalition governments at the centre.[213]
+
+ In the Republic of India's first three general elections, in 1951, 1957, and 1962, the Jawaharlal Nehru-led Congress won easy victories. On Nehru's death in 1964, Lal Bahadur Shastri briefly became prime minister; he was succeeded, after his own unexpected death in 1966, by Nehru's daughter Indira Gandhi, who went on to lead the Congress to election victories in 1967 and 1971. Following public discontent with the state of emergency she declared in 1975, the Congress was voted out of power in 1977; the then-new Janata Party, which had opposed the emergency, was voted in. Its government lasted just over two years. Voted back into power in 1980, the Congress saw a change in leadership in 1984, when Indira Gandhi was assassinated; she was succeeded by her son Rajiv Gandhi, who won an easy victory in the general elections later that year. The Congress was voted out again in 1989 when a National Front coalition, led by the newly formed Janata Dal in alliance with the Left Front, won the elections; that government too proved relatively short-lived, lasting just under two years.[214] Elections were held again in 1991; no party won an absolute majority. The Congress, as the largest single party, was able to form a minority government led by P. V. Narasimha Rao.[215]
+
+ A two-year period of political turmoil followed the general election of 1996. Several short-lived alliances shared power at the centre. The BJP formed a government briefly in 1996; it was followed by two comparatively long-lasting United Front coalitions, which depended on external support. In 1998, the BJP was able to form a successful coalition, the National Democratic Alliance (NDA). Led by Atal Bihari Vajpayee, the NDA became the first non-Congress coalition government to complete a five-year term.[216] Again in the 2004 Indian general elections, no party won an absolute majority, but the Congress emerged as the largest single party, forming another successful coalition: the United Progressive Alliance (UPA). It had the support of left-leaning parties and MPs who opposed the BJP. The UPA returned to power in the 2009 general election with increased numbers, and it no longer required external support from India's communist parties.[217] That year, Manmohan Singh became the first prime minister since Jawaharlal Nehru in 1957 and 1962 to be re-elected to a consecutive five-year term.[218] In the 2014 general election, the BJP became the first political party since 1984 to win a majority and govern without the support of other parties.[219] The incumbent prime minister is Narendra Modi, a former chief minister of Gujarat. On 20 July 2017, Ram Nath Kovind was elected India's 14th president and took the oath of office on 25 July 2017.[220][221][222]
+
+ India is a federation with a parliamentary system governed under the Constitution of India—the country's supreme legal document. It is a constitutional republic and representative democracy, in which "majority rule is tempered by minority rights protected by law". Federalism in India defines the power distribution between the union and the states. The Constitution of India, which came into effect on 26 January 1950,[224] originally stated India to be a "sovereign, democratic republic;" this characterisation was amended in 1971 to "a sovereign, socialist, secular, democratic republic".[225] India's form of government, traditionally described as "quasi-federal" with a strong centre and weak states,[226] has grown increasingly federal since the late 1990s as a result of political, economic, and social changes.[227][228]
+
+ The Government of India comprises three branches: the executive, the legislature, and the judiciary.[230]
+
+ India is a federal union comprising 28 states and 8 union territories.[245] All states, as well as the union territories of Jammu and Kashmir, Puducherry and the National Capital Territory of Delhi, have elected legislatures and governments following the Westminster system of governance. The remaining five union territories are directly ruled by the central government through appointed administrators. In 1956, under the States Reorganisation Act, states were reorganised on a linguistic basis.[246] There are over a quarter of a million local government bodies at city, town, block, district and village levels.[247]
+
+ In the 1950s, India strongly supported decolonisation in Africa and Asia and played a leading role in the Non-Aligned Movement.[249] After initially cordial relations with neighbouring China, India went to war with China in 1962, and was widely thought to have been humiliated. India has had tense relations with neighbouring Pakistan; the two nations have gone to war four times: in 1947, 1965, 1971, and 1999. Three of these wars were fought over the disputed territory of Kashmir, while the fourth, the 1971 war, followed from India's support for the independence of Bangladesh.[250] In the late 1980s, the Indian military twice intervened abroad at the invitation of the host country: a peace-keeping operation in Sri Lanka between 1987 and 1990; and an armed intervention to prevent a 1988 coup d'état attempt in the Maldives. After the 1965 war with Pakistan, India began to pursue close military and economic ties with the Soviet Union; by the late 1960s, the Soviet Union was its largest arms supplier.[251]
+
+ Aside from its ongoing special relationship with Russia,[252] India has wide-ranging defence relations with Israel and France. In recent years, it has played key roles in the South Asian Association for Regional Cooperation and the World Trade Organization. The nation has provided 100,000 military and police personnel to serve in 35 UN peacekeeping operations across four continents. It participates in the East Asia Summit, the G8+5, and other multilateral forums.[253] India has close economic ties with South America,[254] Asia, and Africa; it pursues a "Look East" policy that seeks to strengthen partnerships with the ASEAN nations, Japan, and South Korea that revolve around many issues, but especially those involving economic investment and regional security.[255][256]
+
+ China's nuclear test of 1964, as well as its repeated threats to intervene in support of Pakistan in the 1965 war, convinced India to develop nuclear weapons.[258] India conducted its first nuclear weapons test in 1974 and carried out additional underground testing in 1998. Despite criticism and military sanctions, India has signed neither the Comprehensive Nuclear-Test-Ban Treaty nor the Nuclear Non-Proliferation Treaty, considering both to be flawed and discriminatory.[259] India maintains a "no first use" nuclear policy and is developing a nuclear triad capability as a part of its "Minimum Credible Deterrence" doctrine.[260][261] It is developing a ballistic missile defence shield and a fifth-generation fighter jet.[262][263] Other indigenous military projects involve the design and implementation of Vikrant-class aircraft carriers and Arihant-class nuclear submarines.[264]
+
+ Since the end of the Cold War, India has increased its economic, strategic, and military co-operation with the United States and the European Union.[265] In 2008, a civilian nuclear agreement was signed between India and the United States. Although India possessed nuclear weapons at the time and was not a party to the Nuclear Non-Proliferation Treaty, it received waivers from the International Atomic Energy Agency and the Nuclear Suppliers Group, ending earlier restrictions on India's nuclear technology and commerce. As a consequence, India became the sixth de facto nuclear weapons state.[266] India subsequently signed co-operation agreements involving civilian nuclear energy with Russia,[267] France,[268] the United Kingdom,[269] and Canada.[270]
+
+ The President of India is the supreme commander of the nation's armed forces; with 1.395 million active troops, they compose the world's second-largest military. It comprises the Indian Army, the Indian Navy, the Indian Air Force, and the Indian Coast Guard.[271] The official Indian defence budget for 2011 was US$36.03 billion, or 1.83% of GDP.[272] For the fiscal year spanning 2012–2013, US$40.44 billion was budgeted.[273] According to a 2008 Stockholm International Peace Research Institute (SIPRI) report, India's annual military expenditure in terms of purchasing power stood at US$72.7 billion.[274] In 2011, the annual defence budget increased by 11.6%,[275] although this does not include funds that reach the military through other branches of government.[276] As of 2012[update], India is the world's largest arms importer; between 2007 and 2011, it accounted for 10% of funds spent on international arms purchases.[277] Much of the military expenditure was focused on defence against Pakistan and countering growing Chinese influence in the Indian Ocean.[275] In May 2017, the Indian Space Research Organisation launched the South Asia Satellite, a gift from India to its neighbouring SAARC countries.[278] In October 2018, India signed a US$5.43 billion (over ₹400 billion) agreement with Russia to procure four S-400 Triumf surface-to-air missile defence systems, Russia's most advanced long-range missile defence system.[279]
+
+ According to the International Monetary Fund (IMF), the Indian economy in 2019 was nominally worth $2.9 trillion; it is the fifth-largest economy by market exchange rates and, at around $11 trillion, the third-largest by purchasing power parity, or PPP.[19] With its average annual GDP growth rate of 5.8% over the past two decades, and reaching 6.1% during 2011–2012,[283] India is one of the world's fastest-growing economies.[284] However, the country ranks 139th in the world in nominal GDP per capita and 118th in GDP per capita at PPP.[285] Until 1991, all Indian governments followed protectionist policies that were influenced by socialist economics. Widespread state intervention and regulation largely walled the economy off from the outside world. An acute balance of payments crisis in 1991 forced the nation to liberalise its economy;[286] since then it has moved slowly towards a free-market system[287][288] by emphasising both foreign trade and direct investment inflows.[289] India has been a member of WTO since 1 January 1995.[290]
+
+ The 513.7-million-worker Indian labour force is the world's second-largest, as of 2016[update].[271] The service sector makes up 55.6% of GDP, the industrial sector 26.3% and the agricultural sector 18.1%. India's foreign exchange remittances of US$70 billion in 2014, the largest in the world, were contributed to its economy by 25 million Indians working in foreign countries.[291] Major agricultural products include: rice, wheat, oilseed, cotton, jute, tea, sugarcane, and potatoes.[245] Major industries include: textiles, telecommunications, chemicals, pharmaceuticals, biotechnology, food processing, steel, transport equipment, cement, mining, petroleum, machinery, and software.[245] In 2006, the share of external trade in India's GDP stood at 24%, up from 6% in 1985.[287] In 2008, India's share of world trade was 1.68%.[292] In 2011, India was the world's tenth-largest importer and the nineteenth-largest exporter.[293] Major exports include: petroleum products, textile goods, jewellery, software, engineering goods, chemicals, and manufactured leather goods.[245] Major imports include: crude oil, machinery, gems, fertiliser, and chemicals.[245] Between 2001 and 2011, the contribution of petrochemical and engineering goods to total exports grew from 14% to 42%.[294] India was the world's second largest textile exporter after China in the 2013 calendar year.[295]
+
+ Averaging an economic growth rate of 7.5% for several years prior to 2007,[287] India has more than doubled its hourly wage rates during the first decade of the 21st century.[296] Some 431 million Indians have left poverty since 1985; India's middle classes are projected to number around 580 million by 2030.[297] Though ranking 51st in global competitiveness, as of 2010[update], India ranks 17th in financial market sophistication, 24th in the banking sector, 44th in business sophistication, and 39th in innovation, ahead of several advanced economies.[298] With seven of the world's top 15 information technology outsourcing companies based in India, as of 2009[update], the country is viewed as the second-most favourable outsourcing destination after the United States.[299] India's consumer market, the world's eleventh-largest, is expected to become fifth-largest by 2030.[297] However, barely 2% of Indians pay income taxes.[300]
+
+ Driven by growth, India's nominal GDP per capita increased steadily from US$329 in 1991, when economic liberalisation began, to US$1,265 in 2010, to an estimated US$1,723 in 2016. It is expected to grow to US$2,358 by 2020.[19] However, it has remained lower than those of other Asian developing countries like Indonesia, Malaysia, Philippines, Sri Lanka, and Thailand, and is expected to remain so in the near future. Its GDP per capita is higher than that of Bangladesh, Pakistan, Nepal, Afghanistan, and others.[301]
+
+ According to a 2011 PricewaterhouseCoopers (PwC) report, India's GDP at purchasing power parity could overtake that of the United States by 2045.[303] During the next four decades, Indian GDP is expected to grow at an annualised average of 8%, making it potentially the world's fastest-growing major economy until 2050.[303] The report highlights key growth factors: a young and rapidly growing working-age population; growth in the manufacturing sector because of rising education and engineering skill levels; and sustained growth of the consumer market driven by a rapidly growing middle-class.[303] The World Bank cautions that, for India to achieve its economic potential, it must continue to focus on public sector reform, transport infrastructure, agricultural and rural development, removal of labour regulations, education, energy security, and public health and nutrition.[304]
119
+
120
According to the Worldwide Cost of Living Report 2017, released by the Economist Intelligence Unit (EIU) and compiled by comparing more than 400 individual prices across 160 products and services, four of the world's cheapest cities were in India: Bangalore (3rd), Mumbai (5th), Chennai (5th) and New Delhi (8th).[305]

India's telecommunication industry, the world's fastest-growing, added 227 million subscribers during the period 2010–2011,[306] and after the third quarter of 2017, India surpassed the US to become the world's second-largest smartphone market after China.[307]

The Indian automotive industry, the world's second-fastest growing, increased domestic sales by 26% during 2009–2010,[308] and exports by 36% during 2008–2009.[309] India's capacity to generate electrical power is 300 gigawatts, of which 42 gigawatts are renewable.[310] At the end of 2011, the Indian IT industry employed 2.8 million professionals, generated revenues close to US$100 billion, equalling 7.5% of Indian GDP, and contributed 26% of India's merchandise exports.[311]

The pharmaceutical industry in India is among the significant emerging markets for the global pharmaceutical industry, and the Indian pharmaceutical market is expected to reach $48.5 billion by 2020. India's R&D spending constitutes 60% of the biopharmaceutical industry.[312][313] India is among the top 12 biotech destinations in the world.[314][315] The Indian biotech industry grew by 15.1% in 2012–2013, increasing its revenues from ₹204.4 billion to ₹235.24 billion (US$3.94 billion at June 2013 exchange rates).[316]

Despite economic growth during recent decades, India continues to face socio-economic challenges. In 2006, India contained the largest number of people living below the World Bank's international poverty line of US$1.25 per day.[318] The proportion decreased from 60% in 1981 to 42% in 2005.[319] Under the World Bank's later revised poverty line, it was 21% in 2011.[l][321] Some 30.7% of India's children under the age of five are underweight.[322] According to a Food and Agriculture Organization report in 2015, 15% of the population is undernourished.[323][324] The Mid-Day Meal Scheme attempts to lower these rates.[325]

According to a 2016 Walk Free Foundation report, there were an estimated 18.3 million people in India, or 1.4% of the population, living in forms of modern slavery such as bonded labour, child labour, human trafficking, and forced begging.[326][327][328] According to the 2011 census, there were 10.1 million child labourers in the country, a decline of 2.6 million from 12.6 million in 2001.[329]

Since 1991, economic inequality between India's states has consistently grown: the per-capita net state domestic product of the richest states in 2007 was 3.2 times that of the poorest.[330] Corruption in India is perceived to have decreased. According to the Corruption Perceptions Index, India ranked 78th out of 180 countries in 2018 with a score of 41 out of 100, an improvement from 85th in 2014.[331][332]

With 1,210,193,422 residents reported in the 2011 provisional census report,[333] India is the world's second-most populous country. Its population grew by 17.64% from 2001 to 2011,[334] compared to 21.54% growth in the previous decade (1991–2001).[334] The human sex ratio, according to the 2011 census, is 940 females per 1,000 males.[333] The median age was 27.6 as of 2016.[271] The first post-colonial census, conducted in 1951, counted 361 million people.[335] Medical advances made in the last 50 years as well as increased agricultural productivity brought about by the "Green Revolution" have caused India's population to grow rapidly.[336]

The average life expectancy in India is 68 years: 69.6 years for women and 67.3 years for men.[337] There are around 50 physicians per 100,000 Indians.[338] Migration from rural to urban areas has been an important dynamic in India's recent history. The number of people living in urban areas grew by 31.2% between 1991 and 2001.[339] Yet, in 2001, over 70% still lived in rural areas.[340][341] The level of urbanisation increased further, from 27.81% in the 2001 census to 31.16% in the 2011 census. The slowing of the overall population growth rate was due to the sharp decline in the growth rate in rural areas since 1991.[342] According to the 2011 census, there are 53 urban agglomerations in India with populations of over one million; among them are Mumbai, Delhi, Kolkata, Chennai, Bangalore, Hyderabad and Ahmedabad, in decreasing order of population.[343] The literacy rate in 2011 was 74.04%: 65.46% among females and 82.14% among males.[344] The rural–urban literacy gap, which was 21.2 percentage points in 2001, dropped to 16.1 percentage points in 2011. The improvement in the rural literacy rate is twice that of urban areas.[342] Kerala is the most literate state, with 93.91% literacy, while Bihar is the least, with 63.82%.[344]

India is home to two major language families: Indo-Aryan (spoken by about 74% of the population) and Dravidian (spoken by 24% of the population). Other languages spoken in India come from the Austroasiatic and Sino-Tibetan language families. India has no national language.[345] Hindi, with the largest number of speakers, is the official language of the government.[346][347] English is used extensively in business and administration and has the status of a "subsidiary official language";[5] it is important in education, especially as a medium of higher education. Each state and union territory has one or more official languages, and the constitution recognises in particular 22 "scheduled languages".

The 2011 census reported that the religion in India with the largest number of followers was Hinduism (79.80% of the population), followed by Islam (14.23%); the remaining were Christianity (2.30%), Sikhism (1.72%), Buddhism (0.70%), Jainism (0.36%) and others[m] (0.9%).[14] India has the world's largest Hindu, Sikh, Jain, Zoroastrian, and Bahá'í populations, and has the third-largest Muslim population—the largest for a non-Muslim majority country.[348][349]

Indian cultural history spans more than 4,500 years.[350] During the Vedic period (c. 1700 – c. 500 BCE), the foundations of Hindu philosophy, mythology, theology and literature were laid, and many beliefs and practices which still exist today, such as dhárma, kárma, yóga, and mokṣa, were established.[62] India is notable for its religious diversity, with Hinduism, Buddhism, Sikhism, Islam, Christianity, and Jainism among the nation's major religions.[351] The predominant religion, Hinduism, has been shaped by various historical schools of thought, including those of the Upanishads,[352] the Yoga Sutras, the Bhakti movement,[351] and by Buddhist philosophy.[353]

Much of Indian architecture, including the Taj Mahal, other works of Mughal architecture, and South Indian architecture, blends ancient local traditions with imported styles.[354] Vernacular architecture is also regional in its flavours. Vastu shastra, literally "science of construction" or "architecture" and ascribed to Mamuni Mayan,[355] explores how the laws of nature affect human dwellings;[356] it employs precise geometry and directional alignments to reflect perceived cosmic constructs.[357] As applied in Hindu temple architecture, it is influenced by the Shilpa Shastras, a series of foundational texts whose basic mythological form is the Vastu-Purusha mandala, a square that embodied the "absolute".[358] The Taj Mahal, built in Agra between 1631 and 1648 by orders of Emperor Shah Jahan in memory of his wife, has been described in the UNESCO World Heritage List as "the jewel of Muslim art in India and one of the universally admired masterpieces of the world's heritage".[359] Indo-Saracenic Revival architecture, developed by the British in the late 19th century, drew on Indo-Islamic architecture.[360]

The earliest literature in India, composed between 1500 BCE and 1200 CE, was in the Sanskrit language.[361] Major works of Sanskrit literature include the Rigveda (c. 1500 BCE – 1200 BCE); the epics Mahābhārata (c. 400 BCE – 400 CE) and Ramayana (c. 300 BCE and later); Abhijñānaśākuntalam (The Recognition of Śakuntalā) and other dramas of Kālidāsa (c. 5th century CE); and Mahākāvya poetry.[362][363][364] In Tamil literature, the Sangam literature (c. 600 BCE – 300 BCE), consisting of 2,381 poems composed by 473 poets, is the earliest work.[365][366][367][368] From the 14th to the 18th centuries, India's literary traditions went through a period of drastic change because of the emergence of devotional poets like Kabīr, Tulsīdās, and Guru Nānak. This period was characterised by a varied and wide spectrum of thought and expression; as a consequence, medieval Indian literary works differed significantly from classical traditions.[369] In the 19th century, Indian writers took a new interest in social questions and psychological descriptions. In the 20th century, Indian literature was influenced by the works of the Bengali poet and novelist Rabindranath Tagore,[370] who was a recipient of the Nobel Prize in Literature.

Indian music ranges over various traditions and regional styles. Classical music encompasses two genres and their various folk offshoots: the northern Hindustani and southern Carnatic schools.[371] Regionalised popular forms include filmi and folk music; the syncretic tradition of the bauls is a well-known form of the latter. Indian dance also features diverse folk and classical forms. Among the better-known folk dances are: the bhangra of Punjab, the bihu of Assam, the Jhumair and chhau of Jharkhand, Odisha and West Bengal, garba and dandiya of Gujarat, ghoomar of Rajasthan, and the lavani of Maharashtra. Eight dance forms, many with narrative forms and mythological elements, have been accorded classical dance status by India's National Academy of Music, Dance, and Drama. These are: bharatanatyam of the state of Tamil Nadu, kathak of Uttar Pradesh, kathakali and mohiniyattam of Kerala, kuchipudi of Andhra Pradesh, manipuri of Manipur, odissi of Odisha, and the sattriya of Assam.[372] Theatre in India melds music, dance, and improvised or written dialogue.[373] Often based on Hindu mythology, but also borrowing from medieval romances or social and political events, Indian theatre includes: the bhavai of Gujarat, the jatra of West Bengal, the nautanki and ramlila of North India, tamasha of Maharashtra, burrakatha of Andhra Pradesh, terukkuttu of Tamil Nadu, and the yakshagana of Karnataka.[374] India has a theatre training institute, the National School of Drama (NSD), situated in New Delhi; it is an autonomous organisation under the Ministry of Culture, Government of India.[375]

The Indian film industry produces the world's most-watched cinema.[376] Established regional cinematic traditions exist in the Assamese, Bengali, Bhojpuri, Hindi, Kannada, Malayalam, Punjabi, Gujarati, Marathi, Odia, Tamil, and Telugu languages.[377] The Hindi language film industry (Bollywood) is the largest sector, representing 43% of box office revenue, followed by the South Indian Telugu and Tamil film industries, which represent 36% combined.[378]

Television broadcasting began in India in 1959 as a state-run medium of communication and expanded slowly for more than two decades.[379][380] The state monopoly on television broadcasting ended in the 1990s. Since then, satellite channels have increasingly shaped the popular culture of Indian society.[381] Today, television is the most penetrative medium in India; industry estimates indicate that as of 2012 there were over 554 million TV consumers, 462 million with satellite or cable connections, compared to other forms of mass media such as the press (350 million), radio (156 million) or internet (37 million).[382]

Traditional Indian society is sometimes defined by social hierarchy. The Indian caste system embodies much of the social stratification and many of the social restrictions found in the Indian subcontinent. Social classes are defined by thousands of endogamous hereditary groups, often termed jātis, or "castes".[383] India declared untouchability to be illegal[384] in 1947 and has since enacted other anti-discriminatory laws and social welfare initiatives. At the workplace in urban India, and in international or leading Indian companies, caste-related identification has largely lost its importance.[385][386]

Family values are important in the Indian tradition, and multi-generational patriarchal joint families have been the norm in India, though nuclear families are becoming common in urban areas.[387] An overwhelming majority of Indians, with their consent, have their marriages arranged by their parents or other family elders.[388] Marriage is thought to be for life,[388] and the divorce rate is extremely low,[389] with less than one in a thousand marriages ending in divorce.[390] Child marriages are common, especially in rural areas; many women wed before reaching 18, which is their legal marriageable age.[391] Female infanticide in India, and lately female foeticide, have created skewed gender ratios; the number of missing women in the country quadrupled from 15 million to 63 million in the 50-year period ending in 2014, faster than the population growth during the same period, and constituting 20 percent of India's female electorate.[392] According to an Indian government study, an additional 21 million girls are unwanted and do not receive adequate care.[393] Despite a government ban on sex-selective foeticide, the practice remains commonplace in India, the result of a preference for boys in a patriarchal society.[394] The payment of dowry, although illegal, remains widespread across class lines.[395] Deaths resulting from dowry, mostly from bride burning, are on the rise, despite stringent anti-dowry laws.[396]

Many Indian festivals are religious in origin. The best known include: Diwali, Ganesh Chaturthi, Thai Pongal, Holi, Durga Puja, Eid ul-Fitr, Bakr-Id, Christmas, and Vaisakhi.[397][398]

The most widely worn traditional dress in India, for both women and men, from ancient times until the advent of modern times, was draped.[399] For women it eventually took the form of a sari, a single long piece of cloth, famously six yards long, and of width spanning the lower body.[399] The sari is tied around the waist and knotted at one end, wrapped around the lower body, and then over the shoulder.[399] In its more modern form, it has been used to cover the head, and sometimes the face, as a veil.[399] It has been combined with an underskirt, or Indian petticoat, and tucked in the waistband for more secure fastening. It is also commonly worn with an Indian blouse, or choli, which serves as the primary upper-body garment, the sari's end, passing over the shoulder, serving to obscure the upper body's contours and to cover the midriff.[399]

For men, a similar but shorter length of cloth, the dhoti, has served as a lower-body garment.[400] It too is tied around the waist and wrapped.[400] In south India, it is usually wrapped around the lower body, the upper end tucked in the waistband, the lower end left free. In northern India, it is also wrapped once around each leg before being brought up through the legs to be tucked in at the back. Other forms of traditional apparel that involve no stitching or tailoring are the chaddar (a shawl worn by both sexes to cover the upper body during colder weather, or a large veil worn by women for framing the head, or covering it) and the pagri (a turban or a scarf worn around the head as a part of a tradition, or to keep off the sun or the cold).[400]

Until the beginning of the first millennium CE, the ordinary dress of people in India was entirely unstitched.[401] The arrival of the Kushans from Central Asia, circa 48 CE, popularised cut and sewn garments in the style of Central Asia favoured by the elite in northern India.[401] However, it was not until Muslim rule was established, first with the Delhi sultanate and then the Mughal Empire, that the range of stitched clothes in India grew and their use became significantly more widespread.[401] Among the various garments gradually establishing themselves in northern India during medieval and early-modern times and now commonly worn are: the shalwars and pyjamas, both forms of trousers, as well as the tunics kurta and kameez.[401] In southern India, however, the traditional draped garments were to see much longer continuous use.[401]

Shalwars are atypically wide at the waist but narrow to a cuffed bottom. They are held up by a drawstring or elastic belt, which causes them to become pleated around the waist.[402] The pants can be wide and baggy, or they can be cut quite narrow, on the bias, in which case they are called churidars. The kameez is a long shirt or tunic.[403] The side seams are left open below the waist-line,[404] which gives the wearer greater freedom of movement. The kameez is usually cut straight and flat; older kameez use traditional cuts; modern kameez are more likely to have European-inspired set-in sleeves. The kameez may have a European-style collar, a Mandarin collar, or it may be collarless; in the latter case, its design as a women's garment is similar to a kurta.[405] At first worn by Muslim women, the use of shalwar kameez gradually spread, making them a regional style,[406][407] especially in the Punjab region.[408][409]

A kurta, which traces its roots to Central Asian nomadic tunics, has evolved stylistically in India as a garment for everyday wear as well as for formal occasions.[401] It is traditionally made of cotton or silk; it is worn plain or with embroidered decoration, such as chikan; and it can be loose or tight in the torso, typically falling either just above or somewhere below the wearer's knees.[410] The sleeves of a traditional kurta fall to the wrist without narrowing, the ends hemmed but not cuffed; the kurta can be worn by both men and women; it is traditionally collarless, though standing collars are increasingly popular; and it can be worn over ordinary pyjamas, loose shalwars, churidars, or less traditionally over jeans.[410]

In the last 50 years, fashions have changed a great deal in India. Increasingly, in urban settings in northern India, the sari is no longer the apparel of everyday wear, transformed instead into one for formal occasions.[411] The traditional shalwar kameez is rarely worn by younger women, who favour churidars or jeans.[411] The kurtas worn by young men usually fall to the shins and are seldom plain. In white-collar office settings, ubiquitous air conditioning allows men to wear sports jackets year-round.[411] For weddings and formal occasions, men in the middle- and upper classes often wear bandgala, or short Nehru jackets, with pants, with the groom and his groomsmen sporting sherwanis and churidars.[411] The dhoti, the once universal garment of Hindu India, the wearing of which in the homespun and handwoven form of khadi allowed Gandhi to bring Indian nationalism to the millions,[412] is seldom seen in the cities,[411] reduced now, with brocaded border, to the liturgical vestments of Hindu priests.

Indian cuisine consists of a wide variety of regional and traditional cuisines. Given the range of diversity in soil type, climate, culture, ethnic groups, and occupations, these cuisines vary substantially from each other, using locally available spices, herbs, vegetables, and fruit. Indian foodways have been influenced by religion, in particular Hindu cultural choices and traditions.[413] They have also been shaped by Islamic rule, particularly that of the Mughals, by the arrival of the Portuguese on India's southwestern shores, and by British rule. These three influences are reflected, respectively, in the dishes of pilaf and biryani; the vindaloo; and the tiffin and the Railway mutton curry.[414] Earlier, the Columbian exchange had brought the potato, the tomato, maize, peanuts, cashew nuts, pineapples, guavas, and most notably, chilli peppers, to India. Each became a staple.[415] In turn, the spice trade between India and Europe was a catalyst for Europe's Age of Discovery.[416]

The cereals grown in India, their choice, times, and regions of planting, correspond strongly to the timing of India's monsoons, and the variation across regions in their associated rainfall.[417] In general, the broad division of cereal zones in India, as determined by their dependence on rain, was firmly in place before the arrival of artificial irrigation.[417] Rice, which requires a lot of water, has been grown traditionally in regions of high rainfall in the northeast and the western coast, wheat in regions of moderate rainfall, like India's northern plains, and millet in regions of low rainfall, such as on the Deccan Plateau and in Rajasthan.[418][417]

The foundation of a typical Indian meal is a cereal cooked in plain fashion, complemented with flavourful savoury dishes.[419] The latter include lentils, pulses and vegetables spiced commonly with ginger and garlic, but also more discerningly with a combination of spices that may include coriander, cumin, turmeric, cinnamon, cardamom and others, as informed by culinary conventions.[419] In an actual meal, this pattern takes the form of a platter, or thali, with a central place for the cooked cereal and peripheral ones, often in small bowls, for the flavourful accompaniments, and the simultaneous, rather than piecemeal, ingestion of the two in each act of eating, whether by actual mixing—for example of rice and lentils—or in the folding of one—such as bread—around the other, such as cooked vegetables.[419]

A notable feature of Indian food is the existence of a number of distinctive vegetarian cuisines, each a feature of the geographical and cultural histories of its adherents.[420] The appearance of ahimsa, or the avoidance of violence toward all forms of life in many religious orders early in Indian history, especially Upanishadic Hinduism, Buddhism and Jainism, is thought to have been a notable factor in the prevalence of vegetarianism among a segment of India's Hindu population, especially in southern India, Gujarat, and the Hindi-speaking belt of north-central India, as well as among Jains.[420] Among these groups, strong discomfort is felt at thoughts of eating meat,[421] and contributes to the low proportional consumption of meat to overall diet in India.[421] Unlike China, which has increased its per capita meat consumption substantially in its years of increased economic growth, in India the strong dietary traditions have contributed to dairy, rather than meat, becoming the preferred form of animal protein consumption accompanying higher economic growth.[422]

In the last millennium, the most significant import of cooking techniques into India occurred during the Mughal Empire. The cultivation of rice had spread much earlier from India to Central and West Asia; however, it was during Mughal rule that dishes such as the pilaf,[418] developed in the interim in the Abbasid caliphate,[423] and cooking techniques such as the marinating of meat in yogurt, spread into northern India from regions to its northwest.[424] To the simple yogurt marinade of Persia, onions, garlic, almonds, and spices began to be added in India.[424] Rice grown to the southwest of the Mughal capital, Agra, which had become famous in the Islamic world for its fine grain, was partially cooked and layered alternately with the sautéed meat, the pot sealed tightly and slow-cooked according to another Persian cooking technique, to produce what has today become the Indian biryani,[424] a feature of festive dining in many parts of India.[425]

In food served in restaurants in urban north India, and internationally, the diversity of Indian food has been partially concealed by the dominance of Punjabi cuisine. This was caused in large part by an entrepreneurial response among people from the Punjab region who had been displaced by the 1947 partition of India, and had arrived in India as refugees.[420] The identification of Indian cuisine with the tandoori chicken—cooked in the tandoor oven, which had traditionally been used for baking bread in the rural Punjab and the Delhi region, especially among Muslims, but which is originally from Central Asia—dates to this period.[420]

In India, several traditional indigenous sports remain fairly popular, such as kabaddi, kho kho, pehlwani and gilli-danda. Some of the earliest forms of Asian martial arts, such as kalarippayattu, musti yuddha, silambam, and marma adi, originated in India. Chess, commonly held to have originated in India as chaturaṅga, is regaining widespread popularity with the rise in the number of Indian grandmasters.[426][427] Pachisi, from which parcheesi derives, was played on a giant marble court by Akbar.[428]

The improved results garnered by the Indian Davis Cup team and other Indian tennis players in the early 2010s have made tennis increasingly popular in the country.[429] India has a comparatively strong presence in shooting sports, and has won several medals at the Olympics, the World Shooting Championships, and the Commonwealth Games.[430][431] Other sports in which Indians have succeeded internationally include badminton[432] (Saina Nehwal and P V Sindhu are two of the top-ranked female badminton players in the world), boxing,[433] and wrestling.[434] Football is popular in West Bengal, Goa, Tamil Nadu, Kerala, and the north-eastern states.[435]

Cricket is the most popular sport in India.[437] Major domestic competitions include the Indian Premier League, which is the most-watched cricket league in the world and ranks sixth among all sports leagues.[438]

India has hosted or co-hosted several international sporting events: the 1951 and 1982 Asian Games; the 1987, 1996, and 2011 Cricket World Cup tournaments; the 2003 Afro-Asian Games; the 2006 ICC Champions Trophy; the 2010 Hockey World Cup; the 2010 Commonwealth Games; and the 2017 FIFA U-17 World Cup. Major international sporting events held annually in India include the Chennai Open, the Mumbai Marathon, the Delhi Half Marathon, and the Indian Masters. The first Formula 1 Indian Grand Prix was held in late 2011, but the race has been absent from the F1 calendar since 2014.[439] India has traditionally been the dominant country at the South Asian Games; an example of this dominance is the basketball competition, where the Indian team has won three of the four tournaments held to date.[440]

en/5862.html.txt ADDED
@@ -0,0 +1,329 @@

The Soviet Union,[d] officially the Union of Soviet Socialist Republics[e] (USSR),[f] was a federal socialist state in Northern Eurasia that existed from 1922 to 1991. Nominally a union of multiple national Soviet republics,[g] in practice its government and economy were highly centralized until its final years. It was a one-party state governed by the Communist Party, with Moscow as its capital in its largest republic, the Russian SFSR. Other major urban centers were Leningrad, Kiev, Minsk, Tashkent, Alma-Ata and Novosibirsk. It was the largest country in the world by surface area,[18] spanning over 10,000 kilometers (6,200 mi) east to west across 11 time zones and over 7,200 kilometers (4,500 mi) north to south. Its territory included much of Eastern Europe as well as part of Northern Europe and all of Northern and Central Asia. It had five climate zones: tundra, taiga, steppes, desert, and mountains. Its diverse population was collectively known as the Soviet people.

The Soviet Union had its roots in the October Revolution of 1917, when the Bolsheviks, headed by Vladimir Lenin, overthrew the Provisional Government that had earlier replaced the monarchy. They established the Russian Soviet Republic,[h] beginning a civil war between the Bolshevik Red Army and many anti-Bolshevik forces across the former Empire, among whom the largest faction was the White Guard. The disastrous destructive effects of the war and of Bolshevik policies led to 5 million deaths during the 1921–1922 famine in the Povolzhye region. The Red Army expanded and helped local Communists take power, establishing soviets and repressing their political opponents and rebellious peasants through the policies of Red Terror and War Communism. In 1922, the Communists were victorious, forming the Soviet Union with the unification of the Russian, Transcaucasian, Ukrainian and Byelorussian republics. The New Economic Policy (NEP), introduced by Lenin, led to a partial return of a free market and private property, resulting in a period of economic recovery.

Following Lenin's death in 1924, and after a troika and a brief power struggle, Joseph Stalin came to power in the mid-1920s. Stalin suppressed all political opposition to his rule inside the Communist Party, committed the state ideology to Marxism–Leninism, ended the NEP, and initiated a centrally planned economy. As a result, the country underwent a period of rapid industrialization and forced collectivization, which led to significant economic growth but also created the man-made famine of 1932–1933 and expanded the Gulag labour camp system, founded back in 1918. Stalin also fomented political paranoia and conducted the Great Purge to remove his opponents from the Party through the mass arbitrary arrest of many people (military leaders, Communist Party members and ordinary citizens alike), who were then sent to correctional labor camps or sentenced to death.

On 23 August 1939, after unsuccessful efforts to form an anti-fascist alliance with Western powers, the Soviets signed a non-aggression agreement with Nazi Germany. After the start of World War II, the formally neutral Soviets invaded and annexed territories of several Eastern European states, including eastern Poland and the Baltic states. In June 1941 the Germans invaded, opening the largest and bloodiest theater of war in history. Soviet war casualties accounted for the highest proportion of the conflict, the cost of gaining the upper hand over Axis forces in intense battles such as Stalingrad. Soviet forces eventually captured Berlin and won World War II in Europe on 9 May 1945. The territory overtaken by the Red Army became satellite states of the Eastern Bloc. The Cold War emerged in 1947 as a result of post-war Soviet dominance in Eastern Europe, where the Eastern Bloc confronted the Western Bloc, which united in the North Atlantic Treaty Organization in 1949.

Following Stalin's death in 1953, a period known as de-Stalinization and the Khrushchev Thaw occurred under the leadership of Nikita Khrushchev. The country developed rapidly, as millions of peasants were moved into industrialized cities. The USSR took an early lead in the Space Race with the first ever satellite and the first human spaceflight. In the 1970s, there was a brief détente of relations with the United States, but tensions resumed when the Soviet Union deployed troops in Afghanistan in 1979. The war drained economic resources and was matched by an escalation of American military aid to Mujahideen fighters.

In the mid-1980s, the last Soviet leader, Mikhail Gorbachev, sought to further reform and liberalize the economy through his policies of glasnost and perestroika. The goal was to preserve the Communist Party while reversing economic stagnation. The Cold War ended during his tenure, and in 1989 Soviet satellite countries in Eastern Europe overthrew their respective communist regimes. This led to the rise of strong nationalist and separatist movements inside the USSR as well. Central authorities initiated a referendum—boycotted by the Baltic republics, Armenia, Georgia, and Moldova—which resulted in the majority of participating citizens voting in favor of preserving the Union as a renewed federation. In August 1991, a coup d'état was attempted by Communist Party hardliners. It failed, with Russian President Boris Yeltsin playing a high-profile role in facing down the coup, resulting in the banning of the Communist Party. On 25 December 1991, Gorbachev resigned and the remaining twelve constituent republics emerged from the dissolution of the Soviet Union as independent post-Soviet states. The Russian Federation (formerly the Russian SFSR) assumed the Soviet Union's rights and obligations and is recognized as its continued legal personality.

The USSR produced many significant social and technological achievements and innovations of the 20th century, including the world's first ministry of health, the first human-made satellite, the first humans in space and the first probe to land on another planet, Venus. The country had the world's second-largest economy and the largest standing military in the world.[19][20][21] The USSR was recognized as one of the five nuclear weapons states. It was a founding permanent member of the United Nations Security Council as well as a member of the Organization for Security and Co-operation in Europe and the World Federation of Trade Unions, and the leading member of the Council for Mutual Economic Assistance and the Warsaw Pact.

The word soviet is derived from the Russian word sovet (Russian: совет), meaning "council", "assembly", "advice", "harmony", "concord",[note 1] ultimately deriving from the proto-Slavic verbal stem of vět-iti ("to inform"), related to Slavic věst ("news"), English "wise", the root in "ad-vis-or" (which came to English through French), or the Dutch weten ("to know"; cf. wetenschap, meaning "science"). The word sovietnik means "councillor".[22]

Several organizations in Russian history were called councils (Russian: совет). In the Russian Empire, the State Council, which functioned from 1810 to 1917, was referred to as a Council of Ministers after the revolt of 1905.[22]

During the Georgian Affair, Vladimir Lenin perceived an expression of Great Russian ethnic chauvinism by Joseph Stalin and his supporters, and called for these nation-states to join Russia as semi-independent parts of a greater union, which he initially named the Union of Soviet Republics of Europe and Asia (Russian: Союз Советских Республик Европы и Азии, tr. Soyuz Sovetskikh Respublik Evropy i Azii).[23] Stalin initially resisted the proposal but ultimately accepted it, although with Lenin's agreement he changed the name to the Union of Soviet Socialist Republics (USSR); all the republics, however, began as "socialist soviet" republics and did not change to the other word order until 1936. In addition, in the national languages of several republics, the local word for council or conciliar was only quite late changed to an adaptation of the Russian soviet, and in others, e.g. Ukraine, never.

СССР (in the Latin alphabet: SSSR) is the abbreviation of USSR in Russian, written in Cyrillic letters. The Soviets used the Cyrillic abbreviation so frequently that audiences worldwide became familiar with its meaning. Notably, both Cyrillic letters used have orthographically similar (but transliterally distinct) counterparts in Latin alphabets. Because of widespread familiarity with the Cyrillic abbreviation, Latin-alphabet users in particular almost always use the orthographically similar Latin letters C and P (as opposed to the transliteral Latin letters S and R) when rendering the USSR's native abbreviation.

After СССР, the most common short-form names for the Soviet state in Russian were Советский Союз (transliteration: Sovetskiy Soyuz), which literally means Soviet Union, and Союз ССР (transliteration: Soyuz SSR), which, after compensating for grammatical differences, essentially translates to Union of SSRs in English.

In the English-language media, the state was referred to as the Soviet Union or the USSR. In other European languages, the locally translated short forms and abbreviations are usually used, such as Union soviétique and URSS in French, or Sowjetunion and UdSSR in German. In the English-speaking world, the Soviet Union was also informally called Russia and its citizens Russians,[24] although that was technically incorrect since Russia was only one of the republics.[25] Such misapplications of the linguistic equivalents to the term Russia and its derivatives were frequent in other languages as well.

With an area of 22,402,200 square kilometres (8,649,500 sq mi), the Soviet Union was the world's largest country, a status that is retained by the Russian Federation.[26] Covering a sixth of Earth's land surface, its size was comparable to that of North America.[27] Two other successor states, Kazakhstan and Ukraine, rank among the top 10 countries by land area and as the largest country entirely in Europe, respectively. The European portion accounted for a quarter of the country's area and was the cultural and economic center. The eastern part in Asia extended to the Pacific Ocean to the east and Afghanistan to the south and, except for some areas in Central Asia, was much less populous. It spanned over 10,000 kilometres (6,200 mi) east to west across 11 time zones, and over 7,200 kilometres (4,500 mi) north to south. It had five climate zones: tundra, taiga, steppes, desert and mountains.

Like present-day Russia, the USSR had the world's longest border, measuring over 60,000 kilometres (37,000 mi), or 1½ circumferences of Earth. Two-thirds of it was coastline. Across the Bering Strait was the United States. From 1945 to 1991, the country bordered Afghanistan, China, Czechoslovakia, Finland, Hungary, Iran, Mongolia, North Korea, Norway, Poland, Romania, and Turkey.

The country's highest mountain was Communism Peak (now Ismoil Somoni Peak) in Tajikistan, at 7,495 metres (24,590 ft). The USSR also included most of the world's largest lakes: the Caspian Sea (shared with Iran) and Lake Baikal, the world's largest (by volume) and deepest freshwater lake, which is also an internal body of water in Russia.

Modern revolutionary activity in the Russian Empire began with the 1825 Decembrist revolt. Although serfdom was abolished in 1861, it was done on terms unfavorable to the peasants and served to encourage revolutionaries. A parliament—the State Duma—was established in 1906 after the Russian Revolution of 1905, but Tsar Nicholas II resisted attempts to move from an absolute to a constitutional monarchy. Social unrest continued and was aggravated during World War I by military defeat and food shortages in major cities.

A spontaneous popular uprising in Petrograd, in response to the wartime decay of Russia's economy and morale, culminated in the February Revolution and the toppling of Nicholas II and the imperial government in March 1917. The tsarist autocracy was replaced by the Russian Provisional Government, which intended to conduct elections to the Russian Constituent Assembly and to continue fighting on the side of the Entente in World War I.

At the same time, workers' councils, known in Russian as "Soviets", sprang up across the country. The Bolsheviks, led by Vladimir Lenin, pushed for socialist revolution in the Soviets and on the streets. On 7 November 1917, the Red Guards stormed the Winter Palace in Petrograd, ending the rule of the Provisional Government and leaving all political power to the Soviets.[30] This event would later be officially known in Soviet bibliographies as the Great October Socialist Revolution. In December, the Bolsheviks signed an armistice with the Central Powers, though by February 1918, fighting had resumed. In March, the Soviets ended involvement in the war and signed the Treaty of Brest-Litovsk.

A long and bloody Civil War ensued between the Reds and the Whites, starting in 1917 and ending in 1923 with the Reds' victory. It included foreign intervention, the execution of the former tsar and his family, and the famine of 1921, which killed about five million people.[31] In March 1921, during a related conflict with Poland, the Peace of Riga was signed, splitting disputed territories in Belarus and Ukraine between the Republic of Poland and Soviet Russia. Soviet Russia had to resolve similar conflicts with the newly established republics of Finland, Estonia, Latvia, and Lithuania.

On 28 December 1922, a conference of plenipotentiary delegations from the Russian SFSR, the Transcaucasian SFSR, the Ukrainian SSR and the Byelorussian SSR approved the Treaty on the Creation of the USSR[32] and the Declaration of the Creation of the USSR, forming the Union of Soviet Socialist Republics.[33] These two documents were confirmed by the first Congress of Soviets of the USSR and signed by the heads of the delegations,[34] Mikhail Kalinin, Mikhail Tskhakaya, Mikhail Frunze, Grigory Petrovsky, and Alexander Chervyakov,[35] on 30 December 1922. The formal proclamation was made from the stage of the Bolshoi Theatre.

An intensive restructuring of the economy, industry and politics of the country began in the early days of Soviet power in 1917. A large part of this was done according to the Bolshevik Initial Decrees, government documents signed by Vladimir Lenin. One of the most prominent breakthroughs was the GOELRO plan, which envisioned a major restructuring of the Soviet economy based on total electrification of the country.[36] The plan became the prototype for subsequent Five-Year Plans and was fulfilled by 1931.[37] After the economic policy of "War communism" during the Russian Civil War, as a prelude to fully developing socialism in the country, the Soviet government permitted some private enterprise to coexist alongside nationalized industry in the 1920s, and total food requisition in the countryside was replaced by a food tax.

From its creation, the government in the Soviet Union was based on the one-party rule of the Communist Party (Bolsheviks).[38] The stated purpose was to prevent the return of capitalist exploitation, the principles of democratic centralism being held to be the most effective in representing the people's will in a practical manner. The debate over the future of the economy provided the background for a power struggle in the years after Lenin's death in 1924. Initially, Lenin was to be replaced by a "troika" consisting of Grigory Zinoviev of the Ukrainian SSR, Lev Kamenev of the Russian SFSR, and Joseph Stalin of the Transcaucasian SFSR.

On 1 February 1924, the USSR was recognized by the United Kingdom. The same year, a Soviet Constitution was approved, legitimizing the December 1922 union. Despite the foundation of the Soviet state as a federative entity of many constituent republics, each with its own political and administrative entities, the term "Soviet Russia" – strictly applicable only to the Russian Soviet Federative Socialist Republic – was often applied to the entire country by non-Soviet writers and politicians.

On 3 April 1922, Stalin was named the General Secretary of the Communist Party of the Soviet Union. Lenin had appointed Stalin the head of the Workers' and Peasants' Inspectorate, which gave Stalin considerable power. By gradually consolidating his influence and isolating and outmanoeuvring his rivals within the party, Stalin became the undisputed leader of the country and, by the end of the 1920s, established totalitarian rule. In October 1927, Zinoviev and Leon Trotsky were expelled from the Central Committee and forced into exile.

In 1928, Stalin introduced the first five-year plan for building a socialist economy. In place of the internationalism expressed by Lenin throughout the Revolution, it aimed to build Socialism in One Country. In industry, the state assumed control over all existing enterprises and undertook an intensive program of industrialization. In agriculture, rather than adhering to the "lead by example" policy advocated by Lenin,[39] forced collectivization of farms was implemented all over the country.

Famines ensued as a result, causing deaths estimated at three to seven million; surviving kulaks were persecuted, and many were sent to Gulags to do forced labor.[40][41] Social upheaval continued in the mid-1930s. Despite the turmoil of the mid-to-late 1930s, the country developed a robust industrial economy in the years preceding World War II.

Closer cooperation between the USSR and the West developed in the early 1930s. From 1932 to 1934, the country participated in the World Disarmament Conference. In 1933, diplomatic relations between the United States and the USSR were established when in November, the newly elected President of the United States, Franklin D. Roosevelt, chose to recognize Stalin's Communist government formally and negotiated a new trade agreement between the two countries.[42] In September 1934, the country joined the League of Nations. After the Spanish Civil War broke out in 1936, the USSR actively supported the Republican forces against the Nationalists, who were supported by Fascist Italy and Nazi Germany.[43]

In December 1936, Stalin unveiled a new constitution that was praised by supporters around the world as the most democratic constitution imaginable, though there was some skepticism.[i] Stalin's Great Purge resulted in the detainment or execution of many "Old Bolsheviks" who had participated in the October Revolution with Lenin. According to declassified Soviet archives, the NKVD arrested more than one and a half million people in 1937 and 1938, of whom 681,692 were shot.[45] Over those two years, there was an average of over one thousand executions a day.[46][j]

In 1939, the Soviet Union made a dramatic shift toward Nazi Germany. Almost a year after Britain and France had concluded the Munich Agreement with Germany, the Soviet Union made agreements with Germany as well, both military and economic, during extensive talks. The two countries concluded the Molotov–Ribbentrop Pact and the German–Soviet Commercial Agreement in August 1939. The former made possible the Soviet occupation of Lithuania, Latvia, Estonia, Bessarabia, northern Bukovina, and eastern Poland. In late November, unable to coerce the Republic of Finland by diplomatic means into moving its border 25 kilometres (16 mi) back from Leningrad, Stalin ordered the invasion of Finland. In the east, the Soviet military won several decisive victories during border clashes with the Empire of Japan in 1938 and 1939. However, in April 1941, the USSR signed the Soviet–Japanese Neutrality Pact with Japan, recognizing the territorial integrity of Manchukuo, a Japanese puppet state.

Germany broke the Molotov–Ribbentrop Pact and invaded the Soviet Union on 22 June 1941, starting what was known in the USSR as the Great Patriotic War. The Red Army stopped the seemingly invincible German Army at the Battle of Moscow, aided by an unusually harsh winter. The Battle of Stalingrad, which lasted from late 1942 to early 1943, dealt a severe blow to Germany from which it never fully recovered, and became a turning point in the war. After Stalingrad, Soviet forces drove through Eastern Europe to Berlin before Germany surrendered in 1945. The German Army suffered 80% of its military deaths on the Eastern Front.[50] Harry Hopkins, a close foreign policy advisor to Franklin D. Roosevelt, spoke on 10 August 1943 of the USSR's decisive role in the war.[k]

In the same year, the USSR, in fulfilment of its agreement with the Allies at the Yalta Conference, denounced the Soviet–Japanese Neutrality Pact in April 1945[52] and invaded Manchukuo and other Japan-controlled territories on 9 August 1945.[53] This conflict ended with a decisive Soviet victory, contributing to the unconditional surrender of Japan and the end of World War II.

The USSR suffered greatly in the war, losing around 27 million people.[54] Approximately 2.8 million Soviet POWs died of starvation, mistreatment, or executions in just eight months of 1941–42.[55][56] During the war, the country, together with the United States, the United Kingdom and China, was considered one of the Big Four Allied powers,[57] which later became the Four Policemen that formed the basis of the United Nations Security Council.[58] It emerged as a superpower in the post-war period. Once denied diplomatic recognition by the Western world, the USSR had official relations with practically every country by the late 1940s. A member of the United Nations at its foundation in 1945, the country became one of the five permanent members of the United Nations Security Council, which gave it the right to veto any of its resolutions.

During the immediate post-war period, the Soviet Union rebuilt and expanded its economy, while maintaining its strictly centralized control. It took effective control over most of the countries of Eastern Europe (except Yugoslavia and later Albania), turning them into satellite states. The USSR bound its satellite states in a military alliance, the Warsaw Pact, in 1955, and an economic organization, the Council for Mutual Economic Assistance or Comecon, a counterpart to the European Economic Community (EEC), from 1949 to 1991.[59] The USSR concentrated on its own recovery, seizing and transferring most of Germany's industrial plants, and it exacted war reparations from East Germany, Hungary, Romania, and Bulgaria using Soviet-dominated joint enterprises. It also instituted trading arrangements deliberately designed to favor the country. Moscow controlled the Communist parties that ruled the satellite states, and they followed orders from the Kremlin.[m] Later, the Comecon supplied aid to the eventually victorious Communist Party of China, and its influence grew elsewhere in the world. Fearing its ambitions, the Soviet Union's wartime allies, the United Kingdom and the United States, became its enemies. In the ensuing Cold War, the two sides clashed indirectly in proxy wars.

Stalin died on 5 March 1953. Without a mutually agreeable successor, the highest Communist Party officials initially opted to rule the Soviet Union jointly through a troika headed by Georgy Malenkov. This did not last, however, and Nikita Khrushchev eventually won the ensuing power struggle by the mid-1950s. In 1956, he denounced Stalin's use of repression and proceeded to ease controls over the party and society. This was known as de-Stalinization.

Moscow considered Eastern Europe to be a critically vital buffer zone for the forward defence of its western borders, in case of another major invasion such as the German invasion of 1941. For this reason, the USSR sought to cement its control of the region by transforming the Eastern European countries into satellite states, dependent upon and subservient to its leadership. Soviet military force was used to suppress anti-Stalinist uprisings in Hungary and Poland in 1956.

In the late 1950s, a confrontation with China regarding the Soviet rapprochement with the West, and what Mao Zedong perceived as Khrushchev's revisionism, led to the Sino–Soviet split. This resulted in a break throughout the global Marxist–Leninist movement, with the governments in Albania, Cambodia and Somalia choosing to ally with China.

During this period of the late 1950s and early 1960s, the USSR continued to realize scientific and technological exploits in the Space Race, rivaling the United States: launching the first artificial satellite, Sputnik 1, in 1957; a living dog named Laika in 1957; the first human being, Yuri Gagarin, in 1961; the first woman in space, Valentina Tereshkova, in 1963; Alexei Leonov, the first person to walk in space, in 1965; the first soft landing on the Moon by spacecraft Luna 9 in 1966; and the first Moon rovers, Lunokhod 1 and Lunokhod 2.[61]

Khrushchev initiated "The Thaw", a complex shift in political, cultural and economic life in the country. This included some openness and contact with other nations and new social and economic policies with more emphasis on commodity goods, allowing a dramatic rise in living standards while maintaining high levels of economic growth. Censorship was relaxed as well. Khrushchev's reforms in agriculture and administration, however, were generally unproductive. In 1962, he precipitated a crisis with the United States over the Soviet deployment of nuclear missiles in Cuba. An agreement was made with the United States to remove nuclear missiles from both Cuba and Turkey, concluding the crisis. This event caused Khrushchev much embarrassment and loss of prestige, resulting in his removal from power in 1964.

Following the ousting of Khrushchev, another period of collective leadership ensued, consisting of Leonid Brezhnev as General Secretary, Alexei Kosygin as Premier and Nikolai Podgorny as Chairman of the Presidium, lasting until Brezhnev established himself in the early 1970s as the preeminent Soviet leader.

In 1968, the Soviet Union and Warsaw Pact allies invaded Czechoslovakia to halt the Prague Spring reforms. In the aftermath, Brezhnev justified the invasion, along with the earlier invasions of Eastern European states, by introducing the Brezhnev Doctrine, which claimed the right of the Soviet Union to violate the sovereignty of any country that attempted to replace Marxism–Leninism with capitalism.

Brezhnev presided over the détente with the West that resulted in treaties on arms control (SALT I, SALT II, the Anti-Ballistic Missile Treaty) while at the same time building up Soviet military might.

In October 1977, the third Soviet Constitution was unanimously adopted. The prevailing mood of the Soviet leadership at the time of Brezhnev's death in 1982 was one of aversion to change. The long period of Brezhnev's rule had come to be dubbed one of "standstill", with an ageing and ossified top political leadership. This period is also known as the Era of Stagnation, a period of adverse economic, political, and social effects in the country, which began during the rule of Brezhnev and continued under his successors Yuri Andropov and Konstantin Chernenko.

In late 1979, the Soviet Union's military intervened in the ongoing civil war in neighboring Afghanistan, effectively ending a détente with the West.

+ Two developments dominated the decade that followed: the increasingly apparent crumbling of the Soviet Union's economic and political structures, and the patchwork attempts at reforms to reverse that process. Kenneth S. Deffeyes argued in Beyond Oil that the Reagan administration encouraged Saudi Arabia to lower the price of oil to the point where the Soviets could not make a profit selling their oil, and resulted in the depletion of the country's hard currency reserves.[62]
94
+
95
+ Brezhnev's next two successors, transitional figures with deep roots in his tradition, did not last long. Yuri Andropov was 68 years old and Konstantin Chernenko 72 when they assumed power; both died in less than two years. In an attempt to avoid a third short-lived leader, in 1985, the Soviets turned to the next generation and selected Mikhail Gorbachev. He made significant changes in the economy and party leadership, called perestroika. His policy of glasnost freed public access to information after decades of heavy government censorship. Gorbachev also moved to end the Cold War. In 1988, the USSR abandoned its war in Afghanistan and began to withdraw its forces. In the following year, Gorbachev refused to interfere in the internal affairs of the Soviet satellite states, which paved the way for the Revolutions of 1989. With the tearing down of the Berlin Wall and with East and West Germany pursuing unification, the Iron Curtain between the West and Soviet-controlled regions came down.
+
+ At the same time, the Soviet republics started legal moves towards potentially declaring sovereignty over their territories, citing the freedom to secede in Article 72 of the USSR constitution.[63] On 7 April 1990, a law was passed allowing a republic to secede if more than two-thirds of its residents voted for it in a referendum.[64] Many held their first free elections in the Soviet era for their own national legislatures in 1990. Many of these legislatures proceeded to produce legislation contradicting the Union laws in what was known as the "War of Laws". In 1989, the Russian SFSR convened a newly elected Congress of People's Deputies. Boris Yeltsin was elected its chairman. On 12 June 1990, the Congress declared Russia's sovereignty over its territory and proceeded to pass laws that attempted to supersede some of the Soviet laws. After a landslide victory of Sąjūdis in Lithuania, that country declared its independence restored on 11 March 1990.
+
+ A referendum for the preservation of the USSR was held on 17 March 1991 in nine republics (the remainder having boycotted the vote), with the majority of the population in those republics voting for preservation of the Union. The referendum gave Gorbachev a minor boost. In the summer of 1991, the New Union Treaty, which would have turned the country into a much looser Union, was agreed upon by eight republics. The signing of the treaty, however, was interrupted by the August Coup—an attempted coup d'état by hardline members of the government and the KGB who sought to reverse Gorbachev's reforms and reassert the central government's control over the republics. After the coup collapsed, Yeltsin was seen as a hero for his decisive actions, while Gorbachev's power was effectively ended. The balance of power tipped significantly towards the republics. In August 1991, Latvia and Estonia immediately declared the restoration of their full independence (following Lithuania's 1990 example). Gorbachev resigned as general secretary in late August, and soon afterwards, the party's activities were indefinitely suspended—effectively ending its rule. By the fall, Gorbachev could no longer influence events outside Moscow, and he was being challenged even there by Yeltsin, who had been elected President of Russia in July 1991.
+
+ The remaining 12 republics continued discussing new, increasingly loose models of the Union. However, by December all except Russia and Kazakhstan had formally declared independence. During this time, Yeltsin took over what remained of the Soviet government, including the Moscow Kremlin. The final blow was struck on 1 December when Ukraine, the second-most powerful republic, voted overwhelmingly for independence. Ukraine's secession ended any realistic chance of the country staying together even on a limited scale.
+
+ On 8 December 1991, the presidents of Russia, Ukraine and Belarus (formerly Byelorussia), signed the Belavezha Accords, which declared the Soviet Union dissolved and established the Commonwealth of Independent States (CIS) in its place. While doubts remained over the authority of the accords to do this, on 21 December 1991, the representatives of all Soviet republics except Georgia signed the Alma-Ata Protocol, which confirmed the accords. On 25 December 1991, Gorbachev resigned as the President of the USSR, declaring the office extinct. He turned the powers that had been vested in the presidency over to Yeltsin. That night, the Soviet flag was lowered for the last time, and the Russian tricolor was raised in its place.
+
+ The following day, the Supreme Soviet, the highest governmental body, voted both itself and the country out of existence. This is generally recognized as marking the official, final dissolution of the Soviet Union as a functioning state, and the end of the Cold War.[65] The Soviet Army initially remained under overall CIS command but was soon absorbed into the different military forces of the newly independent states. The few remaining Soviet institutions that had not been taken over by Russia ceased to function by the end of 1991.
+
+ Following the dissolution, Russia was internationally recognized[66] as the Soviet Union's legal successor. To that end, Russia voluntarily accepted all Soviet foreign debt and claimed Soviet overseas properties as its own. Under the 1992 Lisbon Protocol, Russia also agreed to receive all nuclear weapons remaining in the territory of other former Soviet republics. Since then, the Russian Federation has assumed the Soviet Union's rights and obligations. Ukraine has refused to recognize exclusive Russian claims to succession of the USSR and claimed such status for Ukraine as well, which was codified in Articles 7 and 8 of its 1991 law On Legal Succession of Ukraine. Since its independence in 1991, Ukraine has continued to pursue claims against Russia in foreign courts, seeking to recover its share of the foreign property that was owned by the USSR.
+
+ The dissolution was followed by a severe drop in economic and social conditions in post-Soviet states,[67][68] including a rapid increase in poverty,[69][70][71][72] crime,[73][74] corruption,[75][76] unemployment,[77] homelessness,[78][79] rates of disease,[80][81][82] demographic losses,[83] income inequality and the rise of an oligarchical class,[84][69] along with decreases in calorie intake, life expectancy, adult literacy, and income.[85] Between 1988/1989 and 1993/1995, the Gini ratio increased by an average of 9 points for all former socialist countries.[69] The economic shocks that accompanied wholesale privatization were associated with sharp increases in mortality. Data show that Russia, Kazakhstan, Latvia, Lithuania and Estonia saw a tripling of unemployment and a 42% increase in male death rates between 1991 and 1994.[86][87] In the decades that followed, only five or six of the post-communist states have been on a path to joining the wealthy capitalist West, while most have fallen behind, some to such an extent that it will take over fifty years to catch up to where they were before the fall of the Soviet Bloc.[88][89]
+
+ In summing up the international ramifications of these events, Vladislav Zubok stated: "The collapse of the Soviet empire was an event of epochal geopolitical, military, ideological, and economic significance."[90] Before the dissolution, the country had maintained its status as one of the world's two superpowers for four decades after World War II through its hegemony in Eastern Europe, military strength, economic strength, aid to developing countries, and scientific research, especially in space technology and weaponry.[91]
+
+ The analysis of the succession of states for the 15 post-Soviet states is complex. The Russian Federation is seen as the legal continuator state and is for most purposes the heir to the Soviet Union. It retained ownership of all former Soviet embassy properties, as well as the old Soviet UN membership and permanent membership on the Security Council.
+
+ Of the two other co-founding states of the USSR at the time of the dissolution, Ukraine was the only one that passed laws, similar to Russia's, declaring itself a state-successor of both the Ukrainian SSR and the USSR.[92] Soviet treaties laid the groundwork for Ukraine's future foreign agreements, and under them Ukraine agreed to undertake 16.37% of the Soviet Union's debts in exchange for its share of the USSR's foreign property. Although Ukraine held a tough position at the time, Russia's status as the "single continuation of the USSR", which became widely accepted in the West, along with constant pressure from Western countries, allowed Russia to dispose of Soviet state property abroad and conceal information about it. Because of this, Ukraine never ratified the "zero option" agreement that the Russian Federation had signed with other former Soviet republics, as Russia refused to disclose information about the Soviet gold reserves and the Diamond Fund.[93][94] The dispute over former Soviet property and assets between the two former republics is still ongoing:
+
+ The conflict is unsolvable. We can continue to offer Kiev handouts in the hope of "solving the problem", but it will not be solved. Going to court is also pointless: for a number of European countries this is a political issue, and it is clear in whose favor they will decide. What to do in this situation is an open question; non-trivial solutions must be sought. But we must remember that in 2014, at the initiative of then Ukrainian Prime Minister Yatsenyuk, litigation with Russia resumed in 32 countries.
+
+ A similar situation occurred with the restitution of cultural property. Although Russia and the other former Soviet republics signed the agreement "On the return of cultural and historic property to the origin states" in Minsk on 14 February 1992, its implementation was halted by the Russian State Duma, which eventually passed the "Federal Law on Cultural Valuables Displaced to the USSR as a Result of the Second World War and Located on the Territory of the Russian Federation", making restitution impossible.[96]
+
+ There are additionally four states that claim independence from the other internationally recognised post-Soviet states but possess limited international recognition: Abkhazia, Nagorno-Karabakh, South Ossetia and Transnistria. The Chechen separatist movement of the Chechen Republic of Ichkeria lacks any international recognition.
+
+ During his rule, Stalin always made the final policy decisions. Otherwise, Soviet foreign policy was set by the Commission on the Foreign Policy of the Central Committee of the Communist Party of the Soviet Union, or by the party's highest body, the Politburo. Operations were handled by the separate Ministry of Foreign Affairs, known until 1946 as the People's Commissariat for Foreign Affairs (or Narkomindel). The most influential spokesmen were Georgy Chicherin (1872–1936), Maxim Litvinov (1876–1951), Vyacheslav Molotov (1890–1986), Andrey Vyshinsky (1883–1954) and Andrei Gromyko (1909–1989). Intellectuals were based in the Moscow State Institute of International Relations.[97]
+
+ The Communist leadership of the Soviet Union intensely debated foreign policy issues and changed direction several times. Even after Stalin assumed dictatorial control in the late 1920s, there were debates, and he frequently changed positions.[106]
+
+ During the country's early period, it was assumed that Communist revolutions would break out soon in every major industrial country, and it was the Soviet responsibility to assist them. The Comintern was the weapon of choice. A few revolutions did break out, but they were quickly suppressed; the longest-lasting, the Hungarian Soviet Republic, survived only from 21 March 1919 to 1 August 1919. The Russian Bolsheviks were in no position to give any help.
+
+ By 1921, Lenin, Trotsky, and Stalin realized that capitalism had stabilized itself in Europe and there would not be any widespread revolutions anytime soon. It became the duty of the Russian Bolsheviks to protect what they had in Russia, and avoid military confrontations that might destroy their bridgehead. Russia was now a pariah state, along with Germany. The two came to terms in 1922 with the Treaty of Rapallo that settled long-standing grievances. At the same time, the two countries secretly set up training programs for the illegal German army and air force operations at hidden camps in the USSR.[107]
+
+ Moscow eventually stopped threatening other states, and instead worked to open peaceful relationships in terms of trade and diplomatic recognition. The United Kingdom dismissed the warnings of Winston Churchill and a few others about a continuing communist threat, and opened trade relations and de facto diplomatic recognition in 1922. There was hope for a settlement of the pre-war tsarist debts, but it was repeatedly postponed. Formal recognition came when the new Labour Party came to power in 1924.[108] All the other countries followed suit in opening trade relations. Henry Ford opened large-scale business relations with the Soviets in the late 1920s, hoping that it would lead to long-term peace. Finally, in 1933, the United States officially recognized the USSR, a decision backed by public opinion and especially by US business interests that expected an opening of a new profitable market.[109]
+
+ In the late 1920s and early 1930s, Stalin ordered Communist parties across the world to strongly oppose non-communist political parties, labor unions or other organizations on the left. Stalin reversed himself in 1934 with the Popular Front program that called on all Communist parties to join together with all anti-Fascist political, labor, and organizational forces that were opposed to fascism, especially of the Nazi variety.[110][111]
+
+ In 1939, half a year after the Munich Agreement, the USSR attempted to form an anti-Nazi alliance with France and Britain.[112] Adolf Hitler proposed a better deal, which would give the USSR control over much of Eastern Europe through the Molotov–Ribbentrop Pact. In September, Germany invaded Poland, and the USSR also invaded later that month, resulting in the partition of Poland. In response, Britain and France declared war on Germany, marking the beginning of World War II.[113]
+
+ There were three power hierarchies in the Soviet Union: the legislature represented by the Supreme Soviet of the Soviet Union, the government represented by the Council of Ministers, and the Communist Party of the Soviet Union (CPSU), the only legal party and the final policymaker in the country.[114]
+
+ At the top of the Communist Party was the Central Committee, elected at Party Congresses and Conferences. In turn, the Central Committee voted for a Politburo (called the Presidium between 1952 and 1966), Secretariat and the General Secretary (First Secretary from 1953 to 1966), the de facto highest office in the Soviet Union.[115] Depending on the degree of power consolidation, it was either the Politburo as a collective body or the General Secretary, who always was one of the Politburo members, that effectively led the party and the country[116] (except for the period of the highly personalized authority of Stalin, exercised directly through his position in the Council of Ministers rather than the Politburo after 1941).[117] They were not controlled by the general party membership, as the key principle of the party organization was democratic centralism, demanding strict subordination to higher bodies, and elections went uncontested, endorsing the candidates proposed from above.[118]
+
+ The Communist Party maintained its dominance over the state mainly through its control over the system of appointments. All senior government officials and most deputies of the Supreme Soviet were members of the CPSU. Of the party heads themselves, Stalin (1941–1953) and Khrushchev (1958–1964) were Premiers. Upon the forced retirement of Khrushchev, the party leader was prohibited from this kind of double membership,[119] but the later General Secretaries for at least some part of their tenure occupied the mostly ceremonial position of Chairman of the Presidium of the Supreme Soviet, the nominal head of state. The institutions at lower levels were overseen and at times supplanted by primary party organizations.[120]
+
+ However, in practice the degree of control the party was able to exercise over the state bureaucracy, particularly after the death of Stalin, was far from total, with the bureaucracy pursuing different interests that were at times in conflict with the party.[121] Nor was the party itself monolithic from top to bottom, although factions were officially banned.[122]
+
+ The Supreme Soviet (successor of the Congress of Soviets and Central Executive Committee) was nominally the highest state body for most of the Soviet history,[123] at first acting as a rubber stamp institution, approving and implementing all decisions made by the party. However, its powers and functions were extended in the late 1950s, 1960s and 1970s, including the creation of new state commissions and committees. It gained additional powers relating to the approval of the Five-Year Plans and the government budget.[124] The Supreme Soviet elected a Presidium to wield its power between plenary sessions,[125] ordinarily held twice a year, and appointed the Supreme Court,[126] the Procurator General[127] and the Council of Ministers (known before 1946 as the Council of People's Commissars), headed by the Chairman (Premier) and managing an enormous bureaucracy responsible for the administration of the economy and society.[125] State and party structures of the constituent republics largely emulated the structure of the central institutions, although the Russian SFSR, unlike the other constituent republics, for most of its history had no republican branch of the CPSU, being ruled directly by the union-wide party until 1990. Local authorities were organized likewise into party committees, local Soviets and executive committees. While the state system was nominally federal, the party was unitary.[128]
+
+ The state security police (the KGB and its predecessor agencies) played an important role in Soviet politics. It was instrumental in the Great Purge,[129] but was brought under strict party control after Stalin's death. Under Yuri Andropov, the KGB engaged in the suppression of political dissent and maintained an extensive network of informers, reasserting itself as a political actor to some extent independent of the party-state structure,[130] culminating in the anti-corruption campaign targeting high-ranking party officials in the late 1970s and early 1980s.[131]
+
+ The constitution, which was promulgated in 1918, 1924, 1936 and 1977,[132] did not limit state power. No formal separation of powers existed between the Party, Supreme Soviet and Council of Ministers[133] that represented executive and legislative branches of the government. The system was governed less by statute than by informal conventions, and no settled mechanism of leadership succession existed. Bitter and at times deadly power struggles took place in the Politburo after the deaths of Lenin[134] and Stalin,[135] as well as after Khrushchev's dismissal,[136] itself due to a decision by both the Politburo and the Central Committee.[137] All leaders of the Communist Party before Gorbachev died in office, except Georgy Malenkov[138] and Khrushchev, both dismissed from the party leadership amid internal struggle within the party.[137]
+
+ Between 1988 and 1990, facing considerable opposition, Mikhail Gorbachev enacted reforms shifting power away from the highest bodies of the party and making the Supreme Soviet less dependent on them. The Congress of People's Deputies was established, the majority of whose members were directly elected in competitive elections held in March 1989. The Congress now elected the Supreme Soviet, which became a full-time parliament, and much stronger than before. For the first time since the 1920s, it refused to rubber stamp proposals from the party and Council of Ministers.[139] In 1990, Gorbachev introduced and assumed the position of the President of the Soviet Union, concentrated power in his executive office, independent of the party, and subordinated the government,[140] now renamed the Cabinet of Ministers of the USSR, to himself.[141]
+
+ Tensions grew between the Union-wide authorities under Gorbachev, reformists led in Russia by Boris Yeltsin and controlling the newly elected Supreme Soviet of the Russian SFSR, and communist hardliners. On 19–21 August 1991, a group of hardliners staged a coup attempt. The coup failed, and the State Council of the Soviet Union became the highest organ of state power "in the period of transition".[142] Gorbachev resigned as General Secretary, only remaining President for the final months of the existence of the USSR.[143]
+
+ The judiciary was not independent of the other branches of government. The Supreme Court supervised the lower courts (People's Court) and applied the law as established by the constitution or as interpreted by the Supreme Soviet. The Constitutional Oversight Committee reviewed the constitutionality of laws and acts. The Soviet Union used the inquisitorial system of Roman law, where the judge, procurator, and defence attorney collaborate to establish the truth.[144]
+
+ Constitutionally, the USSR was a federation of constituent Union Republics, which were either unitary states, such as Ukraine or Byelorussia (SSRs), or federations, such as Russia or Transcaucasia (SFSRs),[114] all four being the founding republics that signed the Treaty on the Creation of the USSR in December 1922. In 1924, during the national delimitation in Central Asia, Uzbekistan and Turkmenistan were formed from parts of Russia's Turkestan ASSR and two Soviet dependencies, the Khorezm and Bukharan SSRs. In 1929, Tajikistan was split off from the Uzbekistan SSR. With the constitution of 1936, the Transcaucasian SFSR was dissolved, resulting in its constituent republics of Armenia, Georgia and Azerbaijan being elevated to Union Republics, while Kazakhstan and Kirghizia were split off from the Russian SFSR and given the same status.[145] In August 1940, Moldavia was formed from parts of Ukraine and of Bessarabia and northern Bukovina. Estonia, Latvia and Lithuania (SSRs) were also admitted into the union, an annexation that was not recognized by most of the international community and was considered an illegal occupation. Karelia was split off from Russia as a Union Republic in March 1940 and was reabsorbed in 1956. Between July 1956 and September 1991, there were 15 union republics (see map below).[146]
+
+ While nominally a union of equals, in practice the Soviet Union was dominated by Russians. The domination was so absolute that for most of its existence, the country was commonly (but incorrectly) referred to as "Russia". While the RSFSR was technically only one republic within the larger union, it was by far the largest (both in terms of population and area), most powerful, most developed, and the industrial center of the Soviet Union. Historian Matthew White wrote that it was an open secret that the country's federal structure was "window dressing" for Russian dominance. For that reason, the people of the USSR were usually called "Russians", not "Soviets", since "everyone knew who really ran the show".[147]
+
+ Under the Military Law of September 1925, the Soviet Armed Forces consisted of the Land Forces, the Air Force, the Navy, the Joint State Political Directorate (OGPU), and the Internal Troops.[148] The OGPU later became independent and in 1934 joined the NKVD, so that its internal troops were under the joint leadership of the defense and internal commissariats. After World War II, the Strategic Missile Forces (1959), Air Defense Forces (1948) and National Civil Defense Forces (1970) were formed, which ranked first, third, and sixth in the official Soviet system of importance (the ground forces were second, the Air Force fourth, and the Navy fifth).
+
+ The army had the greatest political influence. In 1989, it numbered two million soldiers, divided between 150 motorized and 52 armored divisions. Until the early 1960s, the Soviet navy was a rather small military branch, but after the Cuban Missile Crisis, under the leadership of Sergei Gorshkov, it expanded significantly, becoming known for its battlecruisers and submarines. In 1989, it numbered 500,000 men. The Soviet Air Force was built around a fleet of strategic bombers, whose wartime role was to eradicate enemy infrastructure and nuclear capacity; it also had a number of fighters and tactical bombers to support the army in the field. The Strategic Missile Forces had more than 1,400 intercontinental ballistic missiles (ICBMs), deployed between 28 bases and 300 command centers.
+
+ In the post-war period, the Soviet Army was directly involved in several military operations abroad. These included the suppression of the uprising in East Germany (1953), the Hungarian Revolution (1956) and the invasion of Czechoslovakia (1968). The Soviet Union also participated in the war in Afghanistan between 1979 and 1989.
+
+ In the Soviet Union, general conscription applied.
+
+ At the end of the 1950s, with the help of engineers and technologies captured and imported from defeated Nazi Germany, the Soviets constructed the first satellite, Sputnik 1, and thus overtook the United States. This was followed by other successful satellites, and experimental dogs were sent into space. On 12 April 1961, the first cosmonaut, Yuri Gagarin, was sent into space. He flew once around the Earth and successfully landed in the Kazakh steppe. At that time, the first plans for space shuttles and orbital stations were drawn up in Soviet design offices, but personal disputes between designers and management ultimately prevented their realization.
+
+ The first big setback for the USSR came with the American Moon landing, when the Soviets proved unable to respond in time with a comparable project. In the 1970s, more specific proposals for the design of a space shuttle began to emerge, but shortcomings, especially in the electronics industry (rapid overheating of electronics), postponed the program until the end of the 1980s. The first shuttle, Buran, flew in 1988, but without a human crew. Another shuttle, Ptichka, was still under construction when the shuttle project was canceled in 1991. The Energia super-heavy rocket developed to launch the shuttles, one of the most powerful ever built, has remained unused since.
+
+ In the late 1980s, the Soviet Union managed to build the Mir orbital station. Built on the design of the Salyut stations, its tasks were purely civilian and research-oriented. In the 1990s, after the US Skylab program had ended, it was the only orbital station in operation. Gradually, other modules were added to it, including American ones. However, the technical condition of the station deteriorated rapidly, especially after a fire on board, so in 2001 it was decided to deorbit it into the atmosphere, where it burned up.
+
+ The Soviet Union adopted a command economy, whereby production and distribution of goods were centralized and directed by the government. The first Bolshevik experience with a command economy was the policy of War communism, which involved the nationalization of industry, centralized distribution of output, coercive requisition of agricultural production, and attempts to eliminate money circulation, private enterprises and free trade. After the severe economic collapse, Lenin replaced war communism with the New Economic Policy (NEP) in 1921, legalizing free trade and private ownership of small businesses. The economy quickly recovered as a result.[149]
+
+ After a long debate among the members of Politburo about the course of economic development, by 1928–1929, upon gaining control of the country, Stalin abandoned the NEP and pushed for full central planning, starting forced collectivization of agriculture and enacting draconian labor legislation. Resources were mobilized for rapid industrialization, which significantly expanded Soviet capacity in heavy industry and capital goods during the 1930s.[149] The primary motivation for industrialization was preparation for war, mostly due to distrust of the outside capitalist world.[150] As a result, the USSR was transformed from a largely agrarian economy into a great industrial power, leading the way for its emergence as a superpower after World War II.[151] The war caused extensive devastation of the Soviet economy and infrastructure, which required massive reconstruction.[152]
+
+ By the early 1940s, the Soviet economy had become relatively self-sufficient; for most of the period until the creation of Comecon, only a tiny share of domestic products was traded internationally.[153] After the creation of the Eastern Bloc, external trade rose rapidly. However, the influence of the world economy on the USSR was limited by fixed domestic prices and a state monopoly on foreign trade.[154] Grain and sophisticated consumer manufactures became major import articles from around the 1960s.[153] During the arms race of the Cold War, the Soviet economy was burdened by military expenditures, heavily lobbied for by a powerful bureaucracy dependent on the arms industry. At the same time, the USSR became the largest arms exporter to the Third World. Significant amounts of Soviet resources during the Cold War were allocated in aid to the other socialist states.[153]
+
+ From the 1930s until its dissolution in late 1991, the way the Soviet economy operated remained essentially unchanged. The economy was formally directed by central planning, carried out by Gosplan and organized in five-year plans. However, in practice, the plans were highly aggregated and provisional, subject to ad hoc intervention by superiors. All critical economic decisions were taken by the political leadership. Allocated resources and plan targets were usually denominated in rubles rather than in physical goods. Credit was discouraged, but widespread. The final allocation of output was achieved through relatively decentralized, unplanned contracting. Although in theory prices were legally set from above, in practice they were often negotiated, and informal horizontal links (e.g. between producer factories) were widespread.[149]
+
+ A number of basic services were state-funded, such as education and health care. In the manufacturing sector, heavy industry and defence were prioritized over consumer goods.[155] Consumer goods, particularly outside large cities, were often scarce, of poor quality and limited variety. Under the command economy, consumers had almost no influence on production, and the changing demands of a population with growing incomes could not be satisfied by supplies at rigidly fixed prices.[156] A massive unplanned second economy grew up at low levels alongside the planned one, providing some of the goods and services that the planners could not. The legalization of some elements of the decentralized economy was attempted with the reform of 1965.[149]
+
+ Although statistics of the Soviet economy are notoriously unreliable and its economic growth difficult to estimate precisely,[157][158] by most accounts, the economy continued to expand until the mid-1980s. During the 1950s and 1960s, it had comparatively high growth and was catching up to the West.[159] After 1970, however, growth, while still positive, declined far more quickly and consistently than in other countries, despite a rapid increase in the capital stock (a rate of capital increase surpassed only by Japan).[149]
+
+ Overall, the growth rate of per capita income in the Soviet Union between 1960 and 1989 was slightly above the world average (based on 102 countries). According to Stanley Fischer and William Easterly, growth could have been faster. By their calculation, per capita income in 1989 should have been twice as high as it was, considering the amount of investment, education and population. The authors attribute this poor performance to the low productivity of capital.[160] Steven Rosefielde states that the standard of living declined due to Stalin's despotism. While there was a brief improvement after his death, it lapsed into stagnation.[161]
+
+ In 1987, Mikhail Gorbachev attempted to reform and revitalize the economy with his program of perestroika. His policies relaxed state control over enterprises but did not replace it with market incentives, resulting in a sharp decline in output. The economy, already suffering from reduced petroleum export revenues, started to collapse. Prices were still fixed, and property was still largely state-owned until after the country's dissolution.[149][156] For most of the period after World War II until its collapse, Soviet GDP (PPP) was the second-largest in the world, and third during the second half of the 1980s,[162] although on a per-capita basis, it was behind that of First World countries.[163] Compared to countries with similar per-capita GDP in 1928, the Soviet Union experienced significant growth.[164]
+
+ In 1990, the country had a Human Development Index of 0.920, placing it in the "high" category of human development. It was the third-highest in the Eastern Bloc, behind Czechoslovakia and East Germany, and 25th out of the 130 countries ranked worldwide.[165]
+
+ The need for fuel declined in the Soviet Union from the 1970s to the 1980s,[166] both per ruble of gross social product and per ruble of industrial product. At the start, this decline was very rapid, but it gradually slowed between 1970 and 1975; from 1975 to 1980 the improvement was slower still, at only 2.6%.[167] David Wilson, a historian, believed that the gas industry would account for 40% of Soviet fuel production by the end of the century. His theory did not come to fruition because of the USSR's collapse.[168] In theory, the country's energy fields could have sustained an economic growth rate of 2–2.5% during the 1990s.[169] However, the energy sector faced many difficulties, among them the country's high military expenditure and hostile relations with the First World.[170]
+
+ In 1991, the Soviet Union had a pipeline network of 82,000 kilometres (51,000 mi) for crude oil and another 206,500 kilometres (128,300 mi) for natural gas.[171] Petroleum and petroleum-based products, natural gas, metals, wood, agricultural products, and a variety of manufactured goods, primarily machinery, arms and military equipment, were exported.[172] In the 1970s and 1980s, the USSR heavily relied on fossil fuel exports to earn hard currency.[153] At its peak in 1988, it was the largest producer and second-largest exporter of crude oil, surpassed only by Saudi Arabia.[173]
+
+ The Soviet Union placed great emphasis on science and technology within its economy;[174] however, the most remarkable Soviet successes in technology, such as producing the world's first space satellite, typically were the responsibility of the military.[155] Lenin believed that the USSR would never overtake the developed world if it remained as technologically backward as it was upon its founding. Soviet authorities proved their commitment to Lenin's belief by developing massive networks of research and development organizations. In the early 1960s, the Soviets awarded 40% of chemistry PhDs to women, compared to only 5% in the United States.[175] By 1989, Soviet scientists were among the world's best-trained specialists in several areas, such as energy physics, selected areas of medicine, mathematics, welding and military technologies. Due to rigid state planning and bureaucracy, however, the Soviets remained far behind technologically in chemistry, biology, and computers when compared to the First World.
+
+ Under the Reagan administration, Project Socrates determined that the Soviet Union addressed the acquisition of science and technology in a manner radically different from the US approach. In the case of the US, economic prioritization was being used for indigenous research and development as the means to acquire science and technology in both the private and public sectors. In contrast, the USSR maneuvered both offensively and defensively in acquiring and exploiting worldwide technology, to increase the competitive advantage it gained from that technology while preventing the US from acquiring a competitive advantage of its own. However, this technology-based planning was executed in a centralized, government-centric manner that greatly hindered its flexibility, which the US exploited to undermine the strength of the Soviet Union and thus foster its reform.[176][177][178]
+
+ Transport was a vital component of the country's economy. The economic centralization of the late 1920s and 1930s led to the development of infrastructure on a massive scale, most notably the establishment of Aeroflot, an aviation enterprise.[179] The country had a wide variety of modes of transport by land, water and air.[171] However, due to inadequate maintenance, much of the road, water and Soviet civil aviation transport were outdated and technologically backward compared to the First World.[180]
+
+ Soviet rail transport was the largest and most intensively used in the world;[180] it was also better developed than most of its Western counterparts.[181] By the late 1970s and early 1980s, Soviet economists were calling for the construction of more roads to alleviate some of the burdens from the railways and to improve the Soviet government budget.[182] The street network and automotive industry[183] remained underdeveloped,[184] and dirt roads were common outside major cities.[185] Soviet maintenance projects proved unable to take care of even the few roads the country had. By the early-to-mid-1980s, the Soviet authorities tried to solve the road problem by ordering the construction of new ones.[185] Meanwhile, the automobile industry was growing at a faster rate than road construction.[186] The underdeveloped road network led to a growing demand for public transport.[187]
+
+ Despite improvements, several aspects of the transport sector remained riddled with problems due to outdated infrastructure, lack of investment, corruption and bad decision-making. Soviet authorities were unable to meet the growing demand for transport infrastructure and services.
+
+ The Soviet merchant navy was one of the largest in the world.[171]
+
+ Excess deaths throughout World War I and the Russian Civil War (including the postwar famine) amounted to a combined total of 18 million,[188] some 10 million in the 1930s,[47] and more than 26 million in 1941–45. The postwar Soviet population was 45 to 50 million smaller than it would have been if pre-war demographic growth had continued.[54] According to Catherine Merridale, "... [a] reasonable estimate would place the total number of excess deaths for the whole period somewhere around 60 million."[189]
+
+ The birth rate of the USSR decreased from 44.0 per thousand in 1926 to 18.0 in 1974, mainly due to increasing urbanization and the rising average age of marriages. The mortality rate demonstrated a gradual decrease as well – from 23.7 per thousand in 1926 to 8.7 in 1974. In general, the birth rates of the southern republics in Transcaucasia and Central Asia were considerably higher than those in the northern parts of the Soviet Union, and in some cases even increased in the post–World War II period, a phenomenon partly attributed to slower rates of urbanization and traditionally earlier marriages in the southern republics.[190] Soviet Europe moved towards sub-replacement fertility, while Soviet Central Asia continued to exhibit population growth well above replacement-level fertility.[191]
+
+ The late 1960s and the 1970s witnessed a reversal of the declining trajectory of mortality in the USSR. The rise was especially notable among men of working age, and was particularly pronounced in Russia and other predominantly Slavic areas of the country.[192] An analysis of the official data from the late 1980s showed that after worsening in the late 1970s and the early 1980s, adult mortality began to improve again.[193] The infant mortality rate increased from 24.7 in 1970 to 27.9 in 1974. Some researchers regarded the rise as mostly real, a consequence of worsening health conditions and services.[194] The rises in both adult and infant mortality were not explained or defended by Soviet officials, and the Soviet government stopped publishing all mortality statistics for ten years. Soviet demographers and health specialists remained silent about the mortality increases until the late 1980s, when the publication of mortality data resumed, and researchers could delve into the real causes.[195]
+
+ Under Lenin, the state made explicit commitments to promote the equality of men and women. Many early Russian feminists and ordinary Russian working women actively participated in the Revolution, and many more were affected by the events of that period and the new policies. Beginning in October 1918, Lenin's government liberalized divorce and abortion laws, decriminalized homosexuality (re-criminalized in the 1930s), permitted cohabitation, and ushered in a host of reforms.[196] However, without birth control, the new system produced many broken marriages, as well as countless out-of-wedlock children.[197] The epidemic of divorces and extramarital affairs created social hardships when Soviet leaders wanted people to concentrate their efforts on growing the economy. Giving women control over their fertility also led to a precipitous decline in the birth rate, perceived as a threat to their country's military power. By 1936, Stalin reversed most of the liberal laws, ushering in a pronatalist era that lasted for decades.[198]
+
+ In 1917, Russia became the first great power to grant women the right to vote.[199] After heavy casualties in World Wars I and II, women outnumbered men in Russia by a 4:3 ratio.[200] This contributed to the larger role women played in Russian society compared to other great powers at the time.
+
+ Anatoly Lunacharsky became the first People's Commissar for Education of Soviet Russia. In the beginning, the Soviet authorities placed great emphasis on the elimination of illiteracy. All left-handed children were forced to write with their right hand in the Soviet school system.[201][202][203][204] Literate people were automatically hired as teachers. For a short period, quality was sacrificed for quantity. By 1940, Stalin could announce that illiteracy had been eliminated. Throughout the 1930s, social mobility rose sharply, which has been attributed to reforms in education.[205] In the aftermath of World War II, the country's educational system expanded dramatically, to tremendous effect. In the 1960s, nearly all children had access to education, the only exception being those living in remote areas. Nikita Khrushchev tried to make education more accessible, making it clear to children that education was closely linked to the needs of society. Education also became important in giving rise to the New Man.[206] Citizens directly entering the workforce had the constitutional right to a job and to free vocational training.
+
+ The education system was highly centralized and universally accessible to all citizens, with affirmative action for applicants from nations associated with cultural backwardness. However, as part of the general antisemitic policy, an unofficial Jewish quota was applied in the leading institutions of higher education by subjecting Jewish applicants to harsher entrance examinations.[207][208][209][210] The Brezhnev era also introduced a rule that required all university applicants to present a reference from the local Komsomol party secretary.[211] According to statistics from 1986, the number of higher-education students per 10,000 population was 181 for the USSR, compared to 517 for the US.[212]
+
+ The Soviet Union was an ethnically diverse country, with more than 100 distinct ethnic groups. The total population was estimated at 293 million in 1991. According to a 1990 estimate, the majority were Russians (50.78%), followed by Ukrainians (15.45%) and Uzbeks (5.84%).[213]
+
+ All citizens of the USSR had their own ethnic affiliation. A person's ethnicity was chosen at the age of sixteen[214] by the child's parents; if the parents did not agree, the child was automatically assigned the ethnicity of the father. Partly due to Soviet policies, some of the smaller minority ethnic groups were considered part of larger ones, such as the Mingrelians of Georgia, who were classified with the linguistically related Georgians.[215] Some ethnic groups voluntarily assimilated, while others were brought in by force. Russians, Belarusians, and Ukrainians shared close cultural ties, while other groups did not. With multiple nationalities living in the same territory, ethnic antagonisms developed over the years.[216]
+
+ Members of various ethnicities participated in legislative bodies. Organs of power such as the Politburo and the Secretariat of the Central Committee were formally ethnically neutral, but in reality, ethnic Russians were overrepresented, although there were also non-Russian leaders in the Soviet leadership, such as Joseph Stalin, Grigory Zinoviev, Nikolai Podgorny and Andrei Gromyko. During the Soviet era, a significant number of ethnic Russians and Ukrainians migrated to other Soviet republics, and many of them settled there. According to the last census in 1989, the Russian "diaspora" in the Soviet republics had reached 25 million.[217]
+
+ Ethnographic map of the Soviet Union, 1941
+
+ Number and share of Ukrainians in the population of the regions of the RSFSR (1926 census)
+
+ Number and share of Ukrainians in the population of the regions of the RSFSR (1979 census)
+
+ In 1917, before the revolution, health conditions were significantly behind those of developed countries. As Lenin later noted, "Either the lice will defeat socialism, or socialism will defeat the lice".[218] The Soviet principle of health care was conceived by the People's Commissariat for Health in 1918. Health care was to be controlled by the state and provided to its citizens free of charge, a revolutionary concept at the time. Article 42 of the 1977 Soviet Constitution gave all citizens the right to health protection and free access to any health institution in the USSR. Before Leonid Brezhnev became General Secretary, the Soviet healthcare system was held in high esteem by many foreign specialists. This changed, however, from Brezhnev's accession through Mikhail Gorbachev's tenure as leader, during which the health care system was heavily criticized for many basic faults, such as the quality of service and the unevenness in its provision.[219] Minister of Health Yevgeniy Chazov, during the 19th Congress of the Communist Party of the Soviet Union, while highlighting such successes as having the most doctors and hospitals in the world, recognized the system's areas for improvement and felt that billions of Soviet rubles had been squandered.[220]
+
+ After the revolution, life expectancy for all age groups went up. This statistic was seen by some as evidence that the socialist system was superior to the capitalist system. These improvements continued into the 1960s, when statistics indicated that life expectancy briefly surpassed that of the United States. Life expectancy started to decline in the 1970s, possibly because of alcohol abuse. At the same time, infant mortality began to rise. After 1974, the government stopped publishing statistics on the matter. This trend can be partly explained by the number of pregnancies rising drastically in the Asian part of the country, where infant mortality was the highest, while declining markedly in the more developed European part of the Soviet Union.[221]
+
+ Under Lenin, the government gave small language groups their own writing systems.[222] The development of these writing systems was highly successful, even though some flaws were detected. During the later days of the USSR, countries with the same multilingual situation implemented similar policies. A serious problem when creating these writing systems was that the languages differed greatly from each other dialectally.[223] When a language had been given a writing system and appeared in a notable publication, it would attain "official language" status. There were many minority languages which never received their own writing system; therefore, their speakers were forced to learn a second language.[224] There are examples where the government retreated from this policy, most notably under Stalin, when education was discontinued in languages that were not widespread. These languages were then assimilated into another language, mostly Russian.[225] During World War II, some minority languages were banned, and their speakers accused of collaborating with the enemy.[226]
+
+ As the most widely spoken of the Soviet Union's many languages, Russian de facto functioned as an official language, as the "language of interethnic communication" (Russian: язык межнационального общения), but only assumed de jure status as the official national language in 1990.[227]
+
+ Christianity and Islam had the highest number of adherents among the religious citizens.[228] Eastern Christianity predominated among Christians, with Russia's traditional Russian Orthodox Church being the largest Christian denomination. About 90% of the Soviet Union's Muslims were Sunnis, with Shias being concentrated in the Azerbaijan SSR.[228] Smaller groups included Roman Catholics, Jews, Buddhists, and a variety of Protestant denominations (especially Baptists and Lutherans).[228]
+
+ Religious influence had been strong in the Russian Empire. The Russian Orthodox Church enjoyed a privileged status as the church of the monarchy and took part in carrying out official state functions.[229] The immediate period following the establishment of the Soviet state included a struggle against the Orthodox Church, which the revolutionaries considered an ally of the former ruling classes.[230]
+
+ In Soviet law, the "freedom to hold religious services" was constitutionally guaranteed, although the ruling Communist Party regarded religion as incompatible with the Marxist spirit of scientific materialism.[230] In practice, the Soviet system subscribed to a narrow interpretation of this right, and in fact utilized a range of official measures to discourage religion and curb the activities of religious groups.[230]
+
+ The 1918 Council of People's Commissars decree establishing the Russian SFSR as a secular state also decreed that "the teaching of religion in all [places] where subjects of general instruction are taught, is forbidden. Citizens may teach and may be taught religion privately."[231] Among further restrictions, those adopted in 1929 included express prohibitions on a range of church activities, including meetings for organized Bible study.[230] Both Christian and non-Christian establishments were shut down by the thousands in the 1920s and 1930s. By 1940, as many as 90% of the churches, synagogues, and mosques that had been operating in 1917 were closed.[232]
+
+ Under the doctrine of state atheism, there was a "government-sponsored program of forced conversion to atheism" conducted by the Communists.[233][234][235] The regime targeted religions based on state interests, and while most organized religions were never outlawed, religious property was confiscated, believers were harassed, and religion was ridiculed while atheism was propagated in schools.[236] In 1925, the government founded the League of Militant Atheists to intensify the propaganda campaign.[237] Accordingly, although personal expressions of religious faith were not explicitly banned, a strong sense of social stigma was imposed on them by the formal structures and mass media, and it was generally considered unacceptable for members of certain professions (teachers, state bureaucrats, soldiers) to be openly religious. As for the Russian Orthodox Church, Soviet authorities sought to control it and, in times of national crisis, to exploit it for the regime's own purposes; but their ultimate goal was to eliminate it. During the first five years of Soviet power, the Bolsheviks executed 28 Russian Orthodox bishops and over 1,200 Russian Orthodox priests. Many others were imprisoned or exiled. Believers were harassed and persecuted. Most seminaries were closed, and the publication of most religious material was prohibited. By 1941, only 500 churches remained open out of about 54,000 in existence before World War I.
+
+ Convinced that religious anti-Sovietism had become a thing of the past, and with the looming threat of war, the Stalin regime began shifting to a more moderate religion policy in the late 1930s.[238] Soviet religious establishments overwhelmingly rallied to support the war effort during World War II. Amid other accommodations to religious faith after the German invasion, churches were reopened. Radio Moscow began broadcasting a religious hour, and a historic meeting between Stalin and Orthodox Church leader Patriarch Sergius of Moscow was held in 1943. Stalin had the support of the majority of the religious people in the USSR even through the late 1980s.[238] The general tendency of this period was an increase in religious activity among believers of all faiths.[239]
+
+ Under Nikita Khrushchev, the state leadership clashed with the churches in 1958–1964, a period when atheism was emphasized in the educational curriculum, and numerous state publications promoted atheistic views.[238] Between 1959 and 1965, the number of churches fell from 20,000 to 10,000, and the number of synagogues dropped from 500 to 97.[240] The number of working mosques also declined, falling from 1,500 to 500 within a decade.[240]
+
+ Religious institutions remained monitored by the Soviet government, but churches, synagogues, temples, and mosques were all given more leeway in the Brezhnev era.[241] Official relations between the Orthodox Church and the government again warmed to the point that the Brezhnev government twice honored Orthodox Patriarch Alexy I with the Order of the Red Banner of Labour.[242] A poll conducted by Soviet authorities in 1982 recorded 20% of the Soviet population as "active religious believers."[243]
+
+ The culture of the Soviet Union passed through several stages during the USSR's existence. During the first decade following the revolution, there was relative freedom, and artists experimented with several different styles to find a distinctive Soviet style of art. Lenin wanted art to be accessible to the Russian people. On the other hand, hundreds of intellectuals, writers, and artists were exiled or executed and their work banned, among them Nikolay Gumilyov, who was shot for allegedly conspiring against the Bolshevik regime, and Yevgeny Zamyatin.[244]
+
+ The government encouraged a variety of trends. In art and literature, numerous schools, some traditional and others radically experimental, proliferated. Communist writers Maxim Gorky and Vladimir Mayakovsky were active during this time. As a means of influencing a largely illiterate society, films received encouragement from the state, and much of director Sergei Eisenstein's best work dates from this period.
+
+ During Stalin's rule, Soviet culture was characterized by the rise and domination of the government-imposed style of socialist realism; all other trends were severely repressed, with rare exceptions such as Mikhail Bulgakov's works. Many writers were imprisoned and killed.[245]
+
+ Following the Khrushchev Thaw, censorship was diminished. During this time, a distinctive period of Soviet culture developed, characterized by conformist public life and an intense focus on personal life. Greater experimentation in art forms was again permissible, resulting in the production of more sophisticated and subtly critical work. The regime loosened its emphasis on socialist realism; thus, for instance, many protagonists of the novels of author Yury Trifonov concerned themselves with problems of daily life rather than with building socialism. Underground dissident literature, known as samizdat, developed during this late period. In architecture, the Khrushchev era mostly focused on functional design as opposed to the highly decorated style of Stalin's epoch.
+
+ In the second half of the 1980s, Gorbachev's policies of perestroika and glasnost significantly expanded freedom of expression throughout the country in the media and the press.[246]
+
+ Founded on 20 July 1924 in Moscow, Sovetsky Sport was the first sports newspaper of the Soviet Union.
+
+ The Soviet Olympic Committee formed on 21 April 1951, and the IOC recognized the new body at its 45th session. In the same year, when the Soviet representative Konstantin Andrianov became an IOC member, the USSR officially joined the Olympic Movement. The 1952 Summer Olympics in Helsinki thus became the first Olympic Games for Soviet athletes.
+
+ The Soviet Union national ice hockey team won nearly every world championship and Olympic tournament between 1954 and 1991 and never failed to medal in any International Ice Hockey Federation (IIHF) tournament in which they competed.
+
+ The advent[when?] of the state-sponsored "full-time amateur athlete" of the Eastern Bloc countries further eroded the ideology of the pure amateur, as it put the self-financed amateurs of the Western countries at a disadvantage. The Soviet Union entered teams of athletes who were all nominally students, soldiers, or working in a profession – in reality, the state paid many of these competitors to train on a full-time basis.[247] Nevertheless, the IOC held to the traditional rules regarding amateurism.[248]
+
+ A 1989 report by a committee of the Australian Senate claimed that "there is hardly a medal winner at the Moscow Games, certainly not a gold medal winner...who is not on one sort of drug or another: usually several kinds. The Moscow Games might well have been called the Chemists' Games".[249]
+
+ A member of the IOC Medical Commission, Manfred Donike, privately ran additional tests with a new technique for identifying abnormal levels of testosterone by measuring its ratio to epitestosterone in urine. Twenty percent of the specimens he tested, including those from sixteen gold medalists, would have resulted in disciplinary proceedings had the tests been official. The results of Donike's unofficial tests later convinced the IOC to add his new technique to their testing protocols.[250] The first documented case of "blood doping" occurred at the 1980 Summer Olympics when a runner[who?] was transfused with two pints of blood before winning medals in the 5000 m and 10,000 m.[251]
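+
+ As a rough illustration of how a ratio-based screen of this kind works, here is a minimal Python sketch; the 6:1 decision threshold and the specimen values are assumptions for illustration, not figures from the testing protocol described above:
+
+ def flag_specimen(testosterone, epitestosterone, threshold=6.0):
+     """Return True if the T/E ratio exceeds the screening threshold."""
+     if epitestosterone <= 0:
+         raise ValueError("epitestosterone concentration must be positive")
+     return testosterone / epitestosterone > threshold
+
+ # Hypothetical specimens as (T, E) concentration pairs.
+ specimens = [(30.0, 4.0), (12.0, 6.0), (55.0, 5.0)]
+ flagged = [s for s in specimens if flag_specimen(*s)]
+ print(f"{len(flagged)} of {len(specimens)} specimens flagged")  # 2 of 3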
+
+ Documentation obtained in 2016 revealed the Soviet Union's plans for a statewide doping system in track and field in preparation for the 1984 Summer Olympics in Los Angeles. Dated before the decision to boycott the 1984 Games, the document detailed the existing steroids operations of the program, along with suggestions for further enhancements. Dr. Sergei Portugalov of the Institute for Physical Culture prepared the communication, directed to the Soviet Union's head of track and field. Portugalov later became one of the leading figures involved in the implementation of Russian doping before the 2016 Summer Olympics.[252]
+
+ Official Soviet environmental policy always attached great importance to actions in which human beings actively improve nature. Lenin's slogan "Communism is Soviet power and electrification of the country!" in many respects summarizes the focus on modernization and industrial development. During the first five-year plan, launched in 1928, Stalin set out to industrialize the country at all costs. Values such as environmental and nature protection were completely ignored in the struggle to create a modern industrial society. After Stalin's death, the leadership paid more attention to environmental issues, but the basic perception of the value of environmental protection remained the same.[253]
+
+ The Soviet media always emphasized the country's vast expanse of land and its virtually indestructible natural resources, which fostered a sense that contamination and the looting of nature were not a problem. The Soviet state also firmly believed that scientific and technological progress would solve all problems. Official ideology held that under socialism environmental problems could easily be overcome, unlike in capitalist countries, where they seemingly could not be solved. The Soviet authorities had an almost unwavering belief that man could transcend nature. However, when the authorities had to admit in the 1980s that there were environmental problems in the USSR, they explained them by arguing that socialism had not yet fully developed: pollution in socialist society was only a temporary anomaly that would be resolved once socialism matured.[citation needed]
+
+ The Chernobyl disaster in 1986 was the first major accident at a civilian nuclear power plant, unparalleled in the world, in which a large number of radioactive isotopes were released into the atmosphere. The radioactive fallout spread relatively far. The main health problem after the accident was 4,000 new cases of thyroid cancer, which led to a relatively low number of deaths (WHO data, 2005). However, the long-term effects of the accident are unknown. Another major accident was the Kyshtym disaster.[254]
+
+ After the fall of the USSR, it was discovered that the environmental problems were greater than the Soviet authorities had admitted. The Kola Peninsula was one of the places with clear problems. Around the industrial cities of Monchegorsk and Norilsk, where nickel, for example, is mined, all the forests have been killed by contamination, while the northern and other parts of Russia have been affected by emissions. During the 1990s, people in the West also took an interest in the radioactive hazards of nuclear facilities, decommissioned nuclear submarines, and the processing of nuclear waste and spent nuclear fuel. It was also known in the early 1990s that the USSR had dumped radioactive material in the Barents and Kara Seas, which was later confirmed by the Russian parliament. The sinking of the K-141 Kursk submarine in 2000 further raised concerns in the West.[255] Earlier accidents had involved the submarines K-19, K-8, and K-129.[citation needed]
+
+ 1918–1924  Turkestan3
+ 1918–1941  Volga German4
+ 1919–1990  Bashkir
+ 1920–1925  Kirghiz2
+ 1920–1990  Tatar
+ 1921–1990  Adjar
+ 1921–1945  Crimean
+ 1921–1991  Dagestan
+ 1921–1924  Mountain
+
+ 1921–1990  Nakhchivan
+ 1922–1991  Yakut
+ 1923–1990  Buryat1
+ 1923–1940  Karelian
+ 1924–1940  Moldavian
+ 1924–1929  Tajik
+ 1925–1992  Chuvash
+ 1925–1936  Kazak2
+ 1926–1936  Kirghiz
+
+ 1931–1991  Abkhaz
+ 1932–1992  Karakalpak
+ 1934–1990  Mordovian
+ 1934–1990  Udmurt
+ 1935–1943  Kalmyk
+ 1936–1944  Checheno-Ingush
+ 1936–1944  Kabardino-Balkar
+ 1936–1990  Komi
+ 1936–1990  Mari
+
+ 1936–1990  North Ossetian
+ 1944–1957  Kabardin
+ 1956–1991  Karelian
+ 1957–1990  Checheno-Ingush
+ 1957–1991  Kabardino-Balkar
+ 1958–1990  Kalmyk
+ 1961–1992  Tuva
+ 1990–1991  Gorno-Altai
+ 1991–1992  Crimean
en/5863.html.txt ADDED
@@ -0,0 +1,329 @@
+
+
+
+
+ The Soviet Union,[d] officially the Union of Soviet Socialist Republics[e] (USSR),[f] was a federal socialist state in Northern Eurasia that existed from 1922 to 1991. Nominally a union of multiple national Soviet republics,[g] in practice its government and economy were highly centralized until its final years. It was a one-party state governed by the Communist Party, with Moscow as its capital in its largest republic, the Russian SFSR. Other major urban centers were Leningrad, Kiev, Minsk, Tashkent, Alma-Ata and Novosibirsk. It was the largest country in the world by surface area,[18] spanning over 10,000 kilometers (6,200 mi) east to west across 11 time zones and over 7,200 kilometers (4,500 mi) north to south. Its territory included much of Eastern Europe as well as part of Northern Europe and all of Northern and Central Asia. It had five climate zones: tundra, taiga, steppes, desert, and mountains. Its diverse population was collectively known as the Soviet people.
+
+ The Soviet Union had its roots in the October Revolution of 1917, when the Bolsheviks, headed by Vladimir Lenin, overthrew the Provisional Government that had earlier replaced the monarchy. They established the Russian Soviet Republic,[h] beginning a civil war between the Bolshevik Red Army and many anti-Bolshevik forces across the former Empire, among which the largest faction was the White Guard. The devastation of the war and the Bolshevik policies led to 5 million deaths during the 1921–1922 famine in the region of Povolzhye. The Red Army expanded and helped local Communists take power, establishing soviets and repressing their political opponents and rebellious peasants through the policies of Red Terror and War Communism. In 1922, the Communists were victorious, forming the Soviet Union with the unification of the Russian, Transcaucasian, Ukrainian and Byelorussian republics. The New Economic Policy (NEP), which was introduced by Lenin, led to a partial return of a free market and private property, resulting in a period of economic recovery.
+
+ Following Lenin's death in 1924, a troika and a brief power struggle, Joseph Stalin came to power in the mid-1920s. Stalin suppressed all political opposition to his rule inside the Communist Party, committed the state ideology to Marxism–Leninism, ended the NEP, and initiated a centrally planned economy. As a result, the country underwent a period of rapid industrialization and forced collectivization, which led to significant economic growth but also created the man-made famine of 1932–1933 and expanded the Gulag labour camp system founded back in 1918. Stalin also fomented political paranoia and conducted the Great Purge to remove his opponents from the Party through the mass arbitrary arrest of many people (military leaders, Communist Party members and ordinary citizens alike) who were then sent to correctional labor camps or sentenced to death.
+
+ On 23 August 1939, after unsuccessful efforts to form an anti-fascist alliance with Western powers, the Soviets signed a non-aggression agreement with Nazi Germany. After the start of World War II, the formally neutral Soviets invaded and annexed territories of several Eastern European states, including eastern Poland and the Baltic states. In June 1941 the Germans invaded, opening the largest and bloodiest theater of war in history. Soviet war casualties accounted for the highest proportion of the conflict, the cost of gaining the upper hand over Axis forces at intense battles such as Stalingrad. Soviet forces eventually captured Berlin and won World War II in Europe on 9 May 1945. The territory overtaken by the Red Army became satellite states of the Eastern Bloc. The Cold War emerged in 1947 as a result of post-war Soviet dominance in Eastern Europe, where the Eastern Bloc confronted the Western Bloc, which united in the North Atlantic Treaty Organization in 1949.
+
+ Following Stalin's death in 1953, a period known as de-Stalinization and the Khrushchev Thaw occurred under the leadership of Nikita Khrushchev. The country developed rapidly, as millions of peasants were moved into industrialized cities. The USSR took an early lead in the Space Race with the first-ever satellite and the first human spaceflight. In the 1970s, there was a brief détente in relations with the United States, but tensions resumed when the Soviet Union deployed troops in Afghanistan in 1979. The war drained economic resources and was matched by an escalation of American military aid to Mujahideen fighters.
+
+ In the mid-1980s, the last Soviet leader, Mikhail Gorbachev, sought to further reform and liberalize the economy through his policies of glasnost and perestroika. The goal was to preserve the Communist Party while reversing economic stagnation. The Cold War ended during his tenure, and in 1989 Soviet satellite countries in Eastern Europe overthrew their respective communist regimes. This led to the rise of strong nationalist and separatist movements inside the USSR as well. Central authorities initiated a referendum—boycotted by the Baltic republics, Armenia, Georgia, and Moldova—which resulted in the majority of participating citizens voting in favor of preserving the Union as a renewed federation. In August 1991, a coup d'état was attempted by Communist Party hardliners. It failed, with Russian President Boris Yeltsin playing a high-profile role in facing down the coup, resulting in the banning of the Communist Party. On 25 December 1991, Gorbachev resigned and the remaining twelve constituent republics emerged from the dissolution of the Soviet Union as independent post-Soviet states. The Russian Federation (formerly the Russian SFSR) assumed the Soviet Union's rights and obligations and is recognized as its continued legal personality.
+
+ The USSR produced many significant social and technological achievements and innovations of the 20th century, including the world's first ministry of health, first human-made satellite, the first humans in space and the first probe to land on another planet, Venus. The country had the world's second-largest economy and the largest standing military in the world.[19][20][21] The USSR was recognized as one of the five nuclear weapons states. It was a founding permanent member of the United Nations Security Council as well as a member of the Organization for Security and Co-operation in Europe, the World Federation of Trade Unions and the leading member of the Council for Mutual Economic Assistance and the Warsaw Pact.
+
+ The word soviet is derived from the Russian word sovet (Russian: совет), meaning "council", "assembly", "advice", "harmony", "concord",[note 1] ultimately deriving from the proto-Slavic verbal stem of vět-iti ("to inform"), related to Slavic věst ("news"), English "wise", the root in "ad-vis-or" (which came to English through French), or the Dutch weten ("to know"; cf. wetenschap meaning "science"). The word sovietnik means "councillor".[22]
+
+ Some organizations in Russian history were called council (Russian: совет). In the Russian Empire, the State Council, which functioned from 1810 to 1917, was referred to as a Council of Ministers after the revolt of 1905.[22]
+
+ During the Georgian Affair, Vladimir Lenin perceived an expression of Great Russian ethnic chauvinism by Joseph Stalin and his supporters, and called for these nation-states to join Russia as semi-independent parts of a greater union, which he initially named the Union of Soviet Republics of Europe and Asia (Russian: Союз Советских Республик Европы и Азии, tr. Soyuz Sovetskikh Respublik Evropy i Azii).[23] Stalin initially resisted the proposal but ultimately accepted it, although with Lenin's agreement he changed the name to the Union of Soviet Socialist Republics (USSR); all the republics, however, began as "Socialist Soviet" republics and did not change to the other order until 1936. In addition, in the national languages of several republics, the native word for council or conciliar was changed to an adaptation of the Russian soviet only quite late, and in others, e.g. Ukraine, never at all.
+
+ СССР (in the Latin alphabet: SSSR) is the abbreviation of USSR in Russian, written in the Cyrillic alphabet. The Soviets used the Cyrillic abbreviation so frequently that audiences worldwide became familiar with its meaning. Notably, both Cyrillic letters used have orthographically similar (but transliterally distinct) letters in Latin alphabets. Because of widespread familiarity with the Cyrillic abbreviation, Latin-alphabet users almost always use the orthographically similar Latin letters C and P (as opposed to the transliteral Latin letters S and R) when rendering the USSR's native abbreviation.
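+
+ A small illustration of that orthographic-versus-transliteral distinction, as a Python sketch with a hand-picked partial mapping (not a complete Cyrillic transliteration table):
+
+ # Transliteration maps Cyrillic С to S and Р to R, even though the
+ # glyphs look like the Latin letters C and P.
+ TRANSLIT = {"С": "S", "Р": "R"}  # partial map, for illustration only
+
+ def transliterate(text):
+     return "".join(TRANSLIT.get(ch, ch) for ch in text)
+
+ print(transliterate("СССР"))  # prints "SSSR", not "CCCP"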
+
+ After СССР, the most common short-form names for the Soviet state in Russian were Советский Союз (transliteration: Sovetskiy Soyuz), which literally means Soviet Union, and Союз ССР (transliteration: Soyuz SSR), which, after compensating for grammatical differences, essentially translates to Union of SSRs in English.
+
+ In the English language media, the state was referred to as the Soviet Union or the USSR. In other European languages, the locally translated short forms and abbreviations are usually used such as Union soviétique and URSS in French, or Sowjetunion and UdSSR in German. In the English-speaking world, the Soviet Union was also informally called Russia and its citizens Russians,[24] although that was technically incorrect since Russia was only one of the republics.[25] Such misapplications of the linguistic equivalents to the term Russia and its derivatives were frequent in other languages as well.
+
+ With an area of 22,402,200 square kilometres (8,649,500 sq mi), the Soviet Union was the world's largest country, a status that is retained by the Russian Federation.[26] Covering a sixth of Earth's land surface, its size was comparable to that of North America.[27] Of the other successor states, Kazakhstan ranks among the top 10 countries by land area, and Ukraine is the largest country entirely in Europe. The European portion accounted for a quarter of the country's area and was the cultural and economic center. The eastern part in Asia extended to the Pacific Ocean to the east and Afghanistan to the south and, except for some areas in Central Asia, was much less populous. It spanned over 10,000 kilometres (6,200 mi) east to west across 11 time zones, and over 7,200 kilometres (4,500 mi) north to south. It had five climate zones: tundra, taiga, steppes, desert and mountains.
+
+ The USSR had the world's longest border (a distinction now held by Russia), measuring over 60,000 kilometres (37,000 mi), or 1½ circumferences of Earth. Two-thirds of it was coastline. Across the Bering Strait was the United States. The country bordered Afghanistan, China, Czechoslovakia, Finland, Hungary, Iran, Mongolia, North Korea, Norway, Poland, Romania, and Turkey from 1945 to 1991.
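+
+ A quick check of the "1½ circumferences" figure, taking Earth's equatorial circumference to be about 40,075 km (a standard reference value assumed here, not stated in the article):
+
+ \[ \frac{60\,000\ \text{km}}{40\,075\ \text{km}} \approx 1.5 \]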
+
+ The country's highest mountain was Communism Peak (now Ismoil Somoni Peak) in Tajikistan, at 7,495 metres (24,590 ft). The USSR also included most of the world's largest lakes: the Caspian Sea (shared with Iran) and Lake Baikal, the world's largest (by volume) and deepest freshwater lake, which is also an internal body of water in Russia.
+
+ Modern revolutionary activity in the Russian Empire began with the 1825 Decembrist revolt. Although serfdom was abolished in 1861, it was done on terms unfavorable to the peasants and served to encourage revolutionaries. A parliament—the State Duma—was established in 1906 after the Russian Revolution of 1905, but Tsar Nicholas II resisted attempts to move from absolute to a constitutional monarchy. Social unrest continued and was aggravated during World War I by military defeat and food shortages in major cities.
+
+ A spontaneous popular uprising in Petrograd, in response to the wartime decay of Russia's economy and morale, culminated in the February Revolution and the toppling of Nicholas II and the imperial government in March 1917. The tsarist autocracy was replaced by the Russian Provisional Government, which intended to conduct elections to the Russian Constituent Assembly and to continue fighting on the side of the Entente in World War I.
+
+ At the same time, workers' councils, known in Russian as "Soviets", sprang up across the country. The Bolsheviks, led by Vladimir Lenin, pushed for socialist revolution in the Soviets and on the streets. On 7 November 1917, the Red Guards stormed the Winter Palace in Petrograd, ending the rule of the Provisional Government and leaving all political power to the Soviets.[30] This event would later be officially known in Soviet bibliographies as the Great October Socialist Revolution. In December, the Bolsheviks signed an armistice with the Central Powers, though by February 1918, fighting had resumed. In March, the Soviets ended involvement in the war and signed the Treaty of Brest-Litovsk.
+
+ A long and bloody Civil War ensued between the Reds and the Whites, starting in 1917 and ending in 1923 with the Reds' victory. It included foreign intervention, the execution of the former tsar and his family, and the famine of 1921, which killed about five million people.[31] In March 1921, during a related conflict with Poland, the Peace of Riga was signed, splitting disputed territories in Belarus and Ukraine between the Republic of Poland and Soviet Russia. Soviet Russia had to resolve similar conflicts with the newly established republics of Finland, Estonia, Latvia, and Lithuania.
+
+ On 28 December 1922, a conference of plenipotentiary delegations from the Russian SFSR, the Transcaucasian SFSR, the Ukrainian SSR and the Byelorussian SSR approved the Treaty on the Creation of the USSR[32] and the Declaration of the Creation of the USSR, forming the Union of Soviet Socialist Republics.[33] These two documents were confirmed by the first Congress of Soviets of the USSR and signed by the heads of the delegations,[34] Mikhail Kalinin, Mikhail Tskhakaya, Mikhail Frunze, Grigory Petrovsky, and Alexander Chervyakov,[35] on 30 December 1922. The formal proclamation was made from the stage of the Bolshoi Theatre.
+
+ An intensive restructuring of the economy, industry and politics of the country began in the early days of Soviet power in 1917. A large part of this was done according to the Bolshevik Initial Decrees, government documents signed by Vladimir Lenin. One of the most prominent breakthroughs was the GOELRO plan, which envisioned a major restructuring of the Soviet economy based on total electrification of the country.[36] The plan became the prototype for subsequent Five-Year Plans and was fulfilled by 1931.[37] After the economic policy of "War communism" during the Russian Civil War, as a prelude to fully developing socialism in the country, the Soviet government permitted some private enterprise to coexist alongside nationalized industry in the 1920s, and total food requisition in the countryside was replaced by a food tax.
+
+ From its creation, the government in the Soviet Union was based on the one-party rule of the Communist Party (Bolsheviks).[38] The stated purpose was to prevent the return of capitalist exploitation, and the principles of democratic centralism were held to be the most effective way of representing the people's will in a practical manner. The debate over the future of the economy provided the background for a power struggle in the years after Lenin's death in 1924. Initially, Lenin was to be replaced by a "troika" consisting of Grigory Zinoviev of the Ukrainian SSR, Lev Kamenev of the Russian SFSR, and Joseph Stalin of the Transcaucasian SFSR.
+
+ On 1 February 1924, the USSR was recognized by the United Kingdom. The same year, a Soviet Constitution was approved, legitimizing the December 1922 union. Despite the foundation of the Soviet state as a federative entity of many constituent republics, each with its own political and administrative entities, the term "Soviet Russia" – strictly applicable only to the Russian Soviet Federative Socialist Republic – was often applied to the entire country by non-Soviet writers and politicians.
+
+ On 3 April 1922, Stalin was named the General Secretary of the Communist Party of the Soviet Union. Lenin had appointed Stalin the head of the Workers' and Peasants' Inspectorate, which gave Stalin considerable power. By gradually consolidating his influence and isolating and outmanoeuvring his rivals within the party, Stalin became the undisputed leader of the country and, by the end of the 1920s, established totalitarian rule. In October 1927, Zinoviev and Leon Trotsky were expelled from the Central Committee and forced into exile.
+
+ In 1928, Stalin introduced the first five-year plan for building a socialist economy. In place of the internationalism expressed by Lenin throughout the Revolution, it aimed to build Socialism in One Country. In industry, the state assumed control over all existing enterprises and undertook an intensive program of industrialization. In agriculture, rather than adhering to the "lead by example" policy advocated by Lenin,[39] forced collectivization of farms was implemented all over the country.
+
+ Famines ensued as a result, causing deaths estimated at three to seven million; surviving kulaks were persecuted, and many were sent to Gulags to do forced labor.[40][41] Social upheaval continued in the mid-1930s. Despite the turmoil of the mid-to-late 1930s, the country developed a robust industrial economy in the years preceding World War II.
+
+ Closer cooperation between the USSR and the West developed in the early 1930s. From 1932 to 1934, the country participated in the World Disarmament Conference. In 1933, diplomatic relations between the United States and the USSR were established when in November, the newly elected President of the United States, Franklin D. Roosevelt, chose to recognize Stalin's Communist government formally and negotiated a new trade agreement between the two countries.[42] In September 1934, the country joined the League of Nations. After the Spanish Civil War broke out in 1936, the USSR actively supported the Republican forces against the Nationalists, who were supported by Fascist Italy and Nazi Germany.[43]
+
+ In December 1936, Stalin unveiled a new constitution that was praised by supporters around the world as the most democratic constitution imaginable, though there was some skepticism.[i] Stalin's Great Purge resulted in the detainment or execution of many "Old Bolsheviks" who had participated in the October Revolution with Lenin. According to declassified Soviet archives, the NKVD arrested more than one and a half million people in 1937 and 1938, of whom 681,692 were shot.[45] Over those two years, there were an average of over one thousand executions a day.[46][j]
+
+ In 1939, the Soviet Union made a dramatic shift toward Nazi Germany. Almost a year after Britain and France had concluded the Munich Agreement with Germany, the Soviet Union made agreements with Germany as well, both militarily and economically during extensive talks. The two countries concluded the Molotov–Ribbentrop Pact and the German–Soviet Commercial Agreement in August 1939. The former made possible the Soviet occupation of Lithuania, Latvia, Estonia, Bessarabia, northern Bukovina, and eastern Poland. In late November, unable to coerce the Republic of Finland by diplomatic means into moving its border 25 kilometres (16 mi) back from Leningrad, Stalin ordered the invasion of Finland. In the east, the Soviet military won several decisive victories during border clashes with the Empire of Japan in 1938 and 1939. However, in April 1941, the USSR signed the Soviet–Japanese Neutrality Pact with Japan, recognizing the territorial integrity of Manchukuo, a Japanese puppet state.
+
+ Germany broke the Molotov–Ribbentrop Pact and invaded the Soviet Union on 22 June 1941, starting what was known in the USSR as the Great Patriotic War. The Red Army stopped the seemingly invincible German Army at the Battle of Moscow, aided by an unusually harsh winter. The Battle of Stalingrad, which lasted from late 1942 to early 1943, dealt a severe blow to Germany from which it never fully recovered, and became a turning point in the war. After Stalingrad, Soviet forces drove through Eastern Europe to Berlin before Germany surrendered in 1945. The German Army suffered 80% of its military deaths on the Eastern Front.[50] Harry Hopkins, a close foreign policy advisor to Franklin D. Roosevelt, spoke on 10 August 1943 of the USSR's decisive role in the war.[k]
+
+ In the same year, the USSR, in fulfilment of its agreement with the Allies at the Yalta Conference, denounced the Soviet–Japanese Neutrality Pact in April 1945[52] and invaded Manchukuo and other Japan-controlled territories on 9 August 1945.[53] This conflict ended with a decisive Soviet victory, contributing to the unconditional surrender of Japan and the end of World War II.
+
+ The USSR suffered greatly in the war, losing around 27 million people.[54] Approximately 2.8 million Soviet POWs died of starvation, mistreatment, or executions in just eight months of 1941–42.[55][56] During the war, the country together with the United States, the United Kingdom and China were considered the Big Four Allied powers,[57] and later became the Four Policemen that formed the basis of the United Nations Security Council.[58] It emerged as a superpower in the post-war period. Once denied diplomatic recognition by the Western world, the USSR had official relations with practically every country by the late 1940s. A member of the United Nations at its foundation in 1945, the country became one of the five permanent members of the United Nations Security Council, which gave it the right to veto any of its resolutions.
+
+ During the immediate post-war period, the Soviet Union rebuilt and expanded its economy, while maintaining its strictly centralized control. It took effective control over most of the countries of Eastern Europe (except Yugoslavia and later Albania), turning them into satellite states. The USSR bound its satellite states in a military alliance, the Warsaw Pact, in 1955, and an economic organization, Council for Mutual Economic Assistance or Comecon, a counterpart to the European Economic Community (EEC), from 1949 to 1991.[59] The USSR concentrated on its own recovery, seizing and transferring most of Germany's industrial plants, and it exacted war reparations from East Germany, Hungary, Romania, and Bulgaria using Soviet-dominated joint enterprises. It also instituted trading arrangements deliberately designed to favor the country. Moscow controlled the Communist parties that ruled the satellite states, and they followed orders from the Kremlin.[m] Later, the Comecon supplied aid to the eventually victorious Communist Party of China, and its influence grew elsewhere in the world. Fearing its ambitions, the Soviet Union's wartime allies, the United Kingdom and the United States, became its enemies. In the ensuing Cold War, the two sides clashed indirectly in proxy wars.
+
+ Stalin died on 5 March 1953. Without a mutually agreeable successor, the highest Communist Party officials initially opted to rule the Soviet Union jointly through a troika headed by Georgy Malenkov. This did not last, however, and Nikita Khrushchev eventually won the ensuing power struggle by the mid-1950s. In 1956, he denounced Stalin's use of repression and proceeded to ease controls over the party and society. This was known as de-Stalinization.
+
+ Moscow considered Eastern Europe to be a critically vital buffer zone for the forward defence of its western borders, in case of another major invasion such as the German invasion of 1941. For this reason, the USSR sought to cement its control of the region by transforming the Eastern European countries into satellite states, dependent upon and subservient to its leadership. Soviet military force was used to suppress anti-Stalinist uprisings in Hungary and Poland in 1956.
+
+ In the late 1950s, a confrontation with China regarding the Soviet rapprochement with the West, and what Mao Zedong perceived as Khrushchev's revisionism, led to the Sino–Soviet split. This resulted in a break throughout the global Marxist–Leninist movement, with the governments in Albania, Cambodia and Somalia choosing to ally with China.
+
+ During this period of the late 1950s and early 1960s, the USSR continued to realize scientific and technological exploits in the Space Race, rivaling the United States: launching the first artificial satellite, Sputnik 1 in 1957; a living dog named Laika in 1957; the first human being, Yuri Gagarin in 1961; the first woman in space, Valentina Tereshkova in 1963; Alexei Leonov, the first person to walk in space in 1965; the first soft landing on the Moon by spacecraft Luna 9 in 1966; and the first Moon rovers, Lunokhod 1 and Lunokhod 2.[61]
+
+ Khrushchev initiated "The Thaw", a complex shift in political, cultural and economic life in the country. This included some openness and contact with other nations and new social and economic policies with more emphasis on commodity goods, allowing a dramatic rise in living standards while maintaining high levels of economic growth. Censorship was relaxed as well. Khrushchev's reforms in agriculture and administration, however, were generally unproductive. In 1962, he precipitated a crisis with the United States over the Soviet deployment of nuclear missiles in Cuba. An agreement was made with the United States to remove nuclear missiles from both Cuba and Turkey, concluding the crisis. This event caused Khrushchev much embarrassment and loss of prestige, resulting in his removal from power in 1964.
+
+ Following the ousting of Khrushchev, another period of collective leadership ensued, consisting of Leonid Brezhnev as General Secretary, Alexei Kosygin as Premier and Nikolai Podgorny as Chairman of the Presidium, lasting until Brezhnev established himself in the early 1970s as the preeminent Soviet leader.
+
+ In 1968, the Soviet Union and Warsaw Pact allies invaded Czechoslovakia to halt the Prague Spring reforms. In the aftermath, Brezhnev justified the invasion along with the earlier invasions of Eastern European states by introducing the Brezhnev Doctrine, which claimed the right of the Soviet Union to violate the sovereignty of any country that attempted to replace Marxism–Leninism with capitalism.
+
+ Brezhnev presided over détente with the West, which resulted in treaties on arms control (SALT I, SALT II, the Anti-Ballistic Missile Treaty), while at the same time building up Soviet military might.
+
+ In October 1977, the third Soviet Constitution was unanimously adopted. The prevailing mood of the Soviet leadership at the time of Brezhnev's death in 1982 was one of aversion to change. The long period of Brezhnev's rule had come to be dubbed one of "standstill", with an ageing and ossified top political leadership. This period is also known as the Era of Stagnation, a period of adverse economic, political, and social effects in the country, which began during the rule of Brezhnev and continued under his successors Yuri Andropov and Konstantin Chernenko.
+
+ In late 1979, the Soviet Union's military intervened in the ongoing civil war in neighboring Afghanistan, effectively ending a détente with the West.
+
+ Two developments dominated the decade that followed: the increasingly apparent crumbling of the Soviet Union's economic and political structures, and the patchwork attempts at reforms to reverse that process. Kenneth S. Deffeyes argued in Beyond Oil that the Reagan administration encouraged Saudi Arabia to lower the price of oil to the point where the Soviets could not make a profit selling their oil, which resulted in the depletion of the country's hard-currency reserves.[62]
+
+ Brezhnev's next two successors, transitional figures with deep roots in his tradition, did not last long. Yuri Andropov was 68 years old and Konstantin Chernenko 72 when they assumed power; both died in less than two years. In an attempt to avoid a third short-lived leader, in 1985, the Soviets turned to the next generation and selected Mikhail Gorbachev. He made significant changes in the economy and party leadership, called perestroika. His policy of glasnost freed public access to information after decades of heavy government censorship. Gorbachev also moved to end the Cold War. In 1988, the USSR abandoned its war in Afghanistan and began to withdraw its forces. In the following year, Gorbachev refused to interfere in the internal affairs of the Soviet satellite states, which paved the way for the Revolutions of 1989. With the tearing down of the Berlin Wall and with East and West Germany pursuing unification, the Iron Curtain between the West and Soviet-controlled regions came down.
+
+ At the same time, the Soviet republics started legal moves towards potentially declaring sovereignty over their territories, citing the freedom to secede in Article 72 of the USSR constitution.[63] On 7 April 1990, a law was passed allowing a republic to secede if more than two-thirds of its residents voted for it in a referendum.[64] Many held their first free elections in the Soviet era for their own national legislatures in 1990. Many of these legislatures proceeded to produce legislation contradicting the Union laws in what was known as the "War of Laws". In 1989, the Russian SFSR convened a newly elected Congress of People's Deputies. Boris Yeltsin was elected its chairman. On 12 June 1990, the Congress declared Russia's sovereignty over its territory and proceeded to pass laws that attempted to supersede some of the Soviet laws. After a landslide victory of Sąjūdis in Lithuania, that country declared its independence restored on 11 March 1990.
+
+ A referendum for the preservation of the USSR was held on 17 March 1991 in nine republics (the remainder having boycotted the vote), with the majority of the population in those republics voting for preservation of the Union. The referendum gave Gorbachev a minor boost. In the summer of 1991, the New Union Treaty, which would have turned the country into a much looser Union, was agreed upon by eight republics. The signing of the treaty, however, was interrupted by the August Coup—an attempted coup d'état by hardline members of the government and the KGB who sought to reverse Gorbachev's reforms and reassert the central government's control over the republics. After the coup collapsed, Yeltsin was seen as a hero for his decisive actions, while Gorbachev's power was effectively ended. The balance of power tipped significantly towards the republics. In August 1991, Latvia and Estonia immediately declared the restoration of their full independence (following Lithuania's 1990 example). Gorbachev resigned as general secretary in late August, and soon afterwards, the party's activities were indefinitely suspended—effectively ending its rule. By the fall, Gorbachev could no longer influence events outside Moscow, and he was being challenged even there by Yeltsin, who had been elected President of Russia in July 1991.
+
+ The remaining 12 republics continued discussing new, increasingly looser, models of the Union. However, by December all except Russia and Kazakhstan had formally declared independence. During this time, Yeltsin took over what remained of the Soviet government, including the Moscow Kremlin. The final blow was struck on 1 December when Ukraine, the second-most powerful republic, voted overwhelmingly for independence. Ukraine's secession ended any realistic chance of the country staying together even on a limited scale.
+
+ On 8 December 1991, the presidents of Russia, Ukraine and Belarus (formerly Byelorussia), signed the Belavezha Accords, which declared the Soviet Union dissolved and established the Commonwealth of Independent States (CIS) in its place. While doubts remained over the authority of the accords to do this, on 21 December 1991, the representatives of all Soviet republics except Georgia signed the Alma-Ata Protocol, which confirmed the accords. On 25 December 1991, Gorbachev resigned as the President of the USSR, declaring the office extinct. He turned the powers that had been vested in the presidency over to Yeltsin. That night, the Soviet flag was lowered for the last time, and the Russian tricolor was raised in its place.
+
+ The following day, the Supreme Soviet, the highest governmental body, voted both itself and the country out of existence. This is generally recognized as marking the official, final dissolution of the Soviet Union as a functioning state, and the end of the Cold War.[65] The Soviet Army initially remained under overall CIS command but was soon absorbed into the different military forces of the newly independent states. The few remaining Soviet institutions that had not been taken over by Russia ceased to function by the end of 1991.
+
+ Following the dissolution, Russia was internationally recognized[66] as its legal successor on the international stage. To that end, Russia voluntarily accepted all Soviet foreign debt and claimed Soviet overseas properties as its own. Under the 1992 Lisbon Protocol, Russia also agreed to receive all nuclear weapons remaining in the territory of other former Soviet republics. Since then, the Russian Federation has assumed the Soviet Union's rights and obligations. Ukraine has refused to recognize exclusive Russian claims to succession of the USSR and claimed such status for Ukraine as well, which was codified in Articles 7 and 8 of its 1991 law On Legal Succession of Ukraine. Since its independence in 1991, Ukraine has continued to pursue claims against Russia in foreign courts, seeking to recover its share of the foreign property that was owned by the USSR.
+
+ The dissolution was followed by a severe drop in economic and social conditions in post-Soviet states,[67][68] including a rapid increase in poverty,[69][70][71][72] crime,[73][74] corruption,[75][76] unemployment,[77] homelessness,[78][79] rates of disease,[80][81][82] demographic losses,[83] income inequality and the rise of an oligarchical class,[84][69] along with decreases in calorie intake, life expectancy, adult literacy, and income.[85] Between 1988/1989 and 1993/1995, the Gini ratio increased by an average of 9 points for all former socialist countries.[69] The economic shocks that accompanied wholesale privatization were associated with sharp increases in mortality. Data shows Russia, Kazakhstan, Latvia, Lithuania and Estonia saw a tripling of unemployment and a 42% increase in male death rates between 1991 and 1994.[86][87] In the following decades, only five or six of the post-communist states are on a path to joining the wealthy capitalist West while most are falling behind, some to such an extent that it will take over fifty years to catch up to where they were before the fall of the Soviet Bloc.[88][89]
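+
+ For readers unfamiliar with the measure, the Gini ratio cited above summarizes income inequality; a rise of "9 points" corresponds to a 0.09 increase on the 0–1 coefficient scale. Below is a minimal Python sketch of how the coefficient can be computed from individual incomes; the sample numbers are invented for illustration:
+
+ def gini(incomes):
+     """Gini coefficient: 0 = perfect equality, values near 1 = extreme inequality."""
+     xs = sorted(incomes)
+     n, total = len(xs), sum(xs)
+     if n == 0 or total <= 0:
+         raise ValueError("need a non-empty sample with positive total income")
+     # Identity for sorted data: G = 2*sum(i*x_i) / (n*total) - (n+1)/n
+     weighted = sum(i * x for i, x in enumerate(xs, start=1))
+     return 2.0 * weighted / (n * total) - (n + 1.0) / n
+
+ print(round(gini([100, 100, 100, 100]), 2))  # 0.0, perfect equality
+ print(round(gini([10, 20, 30, 240]), 2))     # 0.58, markedly unequal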
+
+ In summing up the international ramifications of these events, Vladislav Zubok stated: "The collapse of the Soviet empire was an event of epochal geopolitical, military, ideological, and economic significance."[90] Before the dissolution, the country had maintained its status as one of the world's two superpowers for four decades after World War II through its hegemony in Eastern Europe, military strength, economic strength, aid to developing countries, and scientific research, especially in space technology and weaponry.[91]
+
+ The analysis of the succession of states for the 15 post-Soviet states is complex. The Russian Federation is seen as the legal continuator state and is for most purposes the heir to the Soviet Union. It retained ownership of all former Soviet embassy properties, as well as the old Soviet UN membership and permanent membership on the Security Council.
+
+ Of the two other co-founding states of the USSR at the time of the dissolution, Ukraine was the only one that had passed laws, similar to Russia's, declaring itself a state-successor of both the Ukrainian SSR and the USSR.[92] Soviet treaties laid the groundwork for Ukraine's future foreign agreements, and Ukraine agreed to undertake 16.37% of the debts of the Soviet Union, in exchange for which it was to receive its share of the USSR's foreign property. Although Ukraine took a tough position at the time, Russia's status as the "single continuation of the USSR", which became widely accepted in the West, along with constant pressure from the Western countries, allowed Russia to dispose of Soviet state property abroad and to conceal information about it. Because of this, Ukraine never ratified the "zero option" agreement that the Russian Federation had signed with other former Soviet republics, as Russia refused to disclose information about the Soviet gold reserves and its Diamond Fund.[93][94] The dispute over former Soviet property and assets between the two former republics is still ongoing:
+
+ The conflict is unsolvable. We can keep giving Kiev handouts in the expectation of "solving the problem", only it won't be solved. Going to trial is also pointless: for a number of European countries this is a political issue, and it is clear in whose favor they will decide. What to do in this situation is an open question; one must search for non-trivial solutions. But we must remember that in 2014, at the instigation of the then Ukrainian Prime Minister Yatsenyuk, litigation with Russia resumed in 32 countries.
+
+ A similar situation occurred with the restitution of cultural property. Although on 14 February 1992 Russia and other former Soviet republics signed an agreement in Minsk, "On the return of cultural and historic property to the origin states", it was halted by the Russian State Duma, which eventually passed the "Federal Law on Cultural Valuables Displaced to the USSR as a Result of the Second World War and Located on the Territory of the Russian Federation", making restitution currently impossible.[96]
+
+ There are additionally four states that claim independence from the other internationally recognised post-Soviet states but possess limited international recognition: Abkhazia, Nagorno-Karabakh, South Ossetia and Transnistria. The Chechen separatist movement of the Chechen Republic of Ichkeria lacks any international recognition.
+
+ During his rule, Stalin always made the final policy decisions. Otherwise, Soviet foreign policy was set by the Commission on the Foreign Policy of the Central Committee of the Communist Party of the Soviet Union, or by the party's highest body, the Politburo. Operations were handled by the separate Ministry of Foreign Affairs, known until 1946 as the People's Commissariat for Foreign Affairs (or Narkomindel). The most influential spokesmen were Georgy Chicherin (1872–1936), Maxim Litvinov (1876–1951), Vyacheslav Molotov (1890–1986), Andrey Vyshinsky (1883–1954) and Andrei Gromyko (1909–1989). Intellectuals were based in the Moscow State Institute of International Relations.[97]
+
+ The Communist leadership of the Soviet Union intensely debated foreign policy issues and changed direction several times. Even after Stalin assumed dictatorial control in the late 1920s, there were debates, and he frequently changed positions.[106]
+
+ During the country's early period, it was assumed that Communist revolutions would break out soon in every major industrial country, and it was the Soviet responsibility to assist them. The Comintern was the weapon of choice. A few revolutions did break out, but they were quickly suppressed; the longest-lasting one, the Hungarian Soviet Republic, survived only from 21 March 1919 to 1 August 1919. The Russian Bolsheviks were in no position to give any help.
+
+ By 1921, Lenin, Trotsky, and Stalin realized that capitalism had stabilized itself in Europe and there would not be any widespread revolutions anytime soon. It became the duty of the Russian Bolsheviks to protect what they had in Russia, and avoid military confrontations that might destroy their bridgehead. Russia was now a pariah state, along with Germany. The two came to terms in 1922 with the Treaty of Rapallo that settled long-standing grievances. At the same time, the two countries secretly set up training programs for the illegal German army and air force operations at hidden camps in the USSR.[107]
+
+ Moscow eventually stopped threatening other states and instead worked to open peaceful relationships in terms of trade and diplomatic recognition. The United Kingdom dismissed the warnings of Winston Churchill and a few others about a continuing communist threat, and opened trade relations and de facto diplomatic recognition in 1922. There was hope for a settlement of the pre-war tsarist debts, but it was repeatedly postponed. Formal recognition came when the new Labour Party came to power in 1924.[108] All the other countries followed suit in opening trade relations. Henry Ford opened large-scale business relations with the Soviets in the late 1920s, hoping that it would lead to long-term peace. Finally, in 1933, the United States officially recognized the USSR, a decision backed by public opinion and especially by US business interests that expected an opening of a new profitable market.[109]
+
+ In the late 1920s and early 1930s, Stalin ordered Communist parties across the world to strongly oppose non-communist political parties, labor unions or other organizations on the left. Stalin reversed himself in 1934 with the Popular Front program that called on all Communist parties to join together with all anti-Fascist political, labor, and organizational forces that were opposed to fascism, especially of the Nazi variety.[110][111]
+
+ In 1939, half a year after the Munich Agreement, the USSR attempted to form an anti-Nazi alliance with France and Britain.[112] Adolf Hitler proposed a better deal, which would give the USSR control over much of Eastern Europe through the Molotov–Ribbentrop Pact. In September, Germany invaded Poland, and the USSR also invaded later that month, resulting in the partition of Poland. In response, Britain and France declared war on Germany, marking the beginning of World War II.[113]
+
+ There were three power hierarchies in the Soviet Union: the legislature represented by the Supreme Soviet of the Soviet Union, the government represented by the Council of Ministers, and the Communist Party of the Soviet Union (CPSU), the only legal party and the final policymaker in the country.[114]
+
+ At the top of the Communist Party was the Central Committee, elected at Party Congresses and Conferences. In turn, the Central Committee voted for a Politburo (called the Presidium between 1952 and 1966), Secretariat and the General Secretary (First Secretary from 1953 to 1966), the de facto highest office in the Soviet Union.[115] Depending on the degree of power consolidation, it was either the Politburo as a collective body or the General Secretary, who always was one of the Politburo members, that effectively led the party and the country[116] (except for the period of the highly personalized authority of Stalin, exercised directly through his position in the Council of Ministers rather than the Politburo after 1941).[117] They were not controlled by the general party membership, as the key principle of the party organization was democratic centralism, demanding strict subordination to higher bodies, and elections went uncontested, endorsing the candidates proposed from above.[118]
+
+ The Communist Party maintained its dominance over the state mainly through its control over the system of appointments. All senior government officials and most deputies of the Supreme Soviet were members of the CPSU. Of the party heads themselves, Stalin (1941–1953) and Khrushchev (1958–1964) were Premiers. Upon the forced retirement of Khrushchev, the party leader was prohibited from this kind of double membership,[119] but the later General Secretaries for at least some part of their tenure occupied the mostly ceremonial position of Chairman of the Presidium of the Supreme Soviet, the nominal head of state. The institutions at lower levels were overseen and at times supplanted by primary party organizations.[120]
+
+ However, in practice the degree of control the party was able to exercise over the state bureaucracy, particularly after the death of Stalin, was far from total, with the bureaucracy pursuing different interests that were at times in conflict with the party.[121] Nor was the party itself monolithic from top to bottom, although factions were officially banned.[122]
+
+ The Supreme Soviet (successor of the Congress of Soviets and Central Executive Committee) was nominally the highest state body for most of the Soviet history,[123] at first acting as a rubber stamp institution, approving and implementing all decisions made by the party. However, its powers and functions were extended in the late 1950s, 1960s and 1970s, including the creation of new state commissions and committees. It gained additional powers relating to the approval of the Five-Year Plans and the government budget.[124] The Supreme Soviet elected a Presidium to wield its power between plenary sessions,[125] ordinarily held twice a year, and appointed the Supreme Court,[126] the Procurator General[127] and the Council of Ministers (known before 1946 as the Council of People's Commissars), headed by the Chairman (Premier) and managing an enormous bureaucracy responsible for the administration of the economy and society.[125] State and party structures of the constituent republics largely emulated the structure of the central institutions, although the Russian SFSR, unlike the other constituent republics, for most of its history had no republican branch of the CPSU, being ruled directly by the union-wide party until 1990. Local authorities were organized likewise into party committees, local Soviets and executive committees. While the state system was nominally federal, the party was unitary.[128]
+
+ The state security police (the KGB and its predecessor agencies) played an important role in Soviet politics. It was instrumental in the Great Purge,[129] but was brought under strict party control after Stalin's death. Under Yuri Andropov, the KGB engaged in the suppression of political dissent and maintained an extensive network of informers, reasserting itself as a political actor to some extent independent of the party-state structure,[130] culminating in the anti-corruption campaign targeting high-ranking party officials in the late 1970s and early 1980s.[131]
+
+ The constitution, successive versions of which were promulgated in 1918, 1924, 1936 and 1977,[132] did not limit state power. No formal separation of powers existed between the Party, the Supreme Soviet and the Council of Ministers,[133] which represented the executive and legislative branches of the government. The system was governed less by statute than by informal conventions, and no settled mechanism of leadership succession existed. Bitter and at times deadly power struggles took place in the Politburo after the deaths of Lenin[134] and Stalin,[135] as well as after Khrushchev's dismissal,[136] itself due to a decision by both the Politburo and the Central Committee.[137] All leaders of the Communist Party before Gorbachev died in office, except Georgy Malenkov[138] and Khrushchev, both dismissed from the party leadership amid internal struggle within the party.[137]
+
+ Between 1988 and 1990, facing considerable opposition, Mikhail Gorbachev enacted reforms shifting power away from the highest bodies of the party and making the Supreme Soviet less dependent on them. The Congress of People's Deputies was established, the majority of whose members were directly elected in competitive elections held in March 1989. The Congress now elected the Supreme Soviet, which became a full-time parliament, and much stronger than before. For the first time since the 1920s, it refused to rubber stamp proposals from the party and Council of Ministers.[139] In 1990, Gorbachev introduced and assumed the position of the President of the Soviet Union, concentrated power in his executive office, independent of the party, and subordinated the government,[140] now renamed the Cabinet of Ministers of the USSR, to himself.[141]
+
+ Tensions grew between the Union-wide authorities under Gorbachev, reformists led in Russia by Boris Yeltsin and controlling the newly elected Supreme Soviet of the Russian SFSR, and communist hardliners. On 19–21 August 1991, a group of hardliners staged a coup attempt. The coup failed, and the State Council of the Soviet Union became the highest organ of state power "in the period of transition".[142] Gorbachev resigned as General Secretary, only remaining President for the final months of the existence of the USSR.[143]
+
+ The judiciary was not independent of the other branches of government. The Supreme Court supervised the lower courts (People's Courts) and applied the law as established by the constitution or as interpreted by the Supreme Soviet. The Constitutional Oversight Committee reviewed the constitutionality of laws and acts. The Soviet Union used the inquisitorial system of Roman law, in which the judge, procurator, and defence attorney collaborated to establish the truth.[144]
+
+ Constitutionally, the USSR was a federation of constituent Union Republics, which were either unitary states, such as Ukraine or Byelorussia (SSRs), or federations, such as Russia or Transcaucasia (SFSRs),[114] all four being the founding republics that signed the Treaty on the Creation of the USSR in December 1922. In 1924, during the national delimitation in Central Asia, Uzbekistan and Turkmenistan were formed from parts of Russia's Turkestan ASSR and two Soviet dependencies, the Khorezm and Bukharan SSRs. In 1929, Tajikistan was split off from the Uzbekistan SSR. With the constitution of 1936, the Transcaucasian SFSR was dissolved, and its constituent republics of Armenia, Georgia and Azerbaijan were elevated to Union Republics, while Kazakhstan and Kirghizia were split off from the Russian SFSR and given the same status.[145] In August 1940, Moldavia was formed from parts of Ukraine, Bessarabia and northern Bukovina. Estonia, Latvia and Lithuania (SSRs) were also admitted into the union, a move that was not recognized by most of the international community and was considered an illegal occupation. Karelia was split off from Russia as a Union Republic in March 1940 and was reabsorbed in 1956. Between July 1956 and September 1991, there were 15 union republics (see map below).[146]
+
+ While nominally a union of equals, in practice the Soviet Union was dominated by Russians. The domination was so absolute that for most of its existence, the country was commonly (but incorrectly) referred to as "Russia". While the RSFSR was technically only one republic within the larger union, it was by far the largest (both in terms of population and area), most powerful, most developed, and the industrial center of the Soviet Union. Historian Matthew White wrote that it was an open secret that the country's federal structure was "window dressing" for Russian dominance. For that reason, the people of the USSR were usually called "Russians", not "Soviets", since "everyone knew who really ran the show".[147]
+
+ Under the Military Law of September 1925, the Soviet Armed Forces consisted of the Land Forces, the Air Force, the Navy, the Joint State Political Directorate (OGPU), and the Internal Troops.[148] The OGPU later became independent and in 1934 joined the NKVD, so that its internal troops were under the joint leadership of the defense and internal commissariats. After World War II, the Strategic Missile Forces (1959), Air Defense Forces (1948) and National Civil Defense Forces (1970) were formed; they ranked first, third, and sixth in the official Soviet order of importance (the Ground Forces were second, the Air Force fourth, and the Navy fifth).
+
+ The army had the greatest political influence. In 1989, it numbered two million soldiers, divided among 150 motorized and 52 armored divisions. Until the early 1960s, the Soviet Navy was a rather small military branch, but after the Cuban Missile Crisis, under the leadership of Sergei Gorshkov, it expanded significantly, becoming known for its battlecruisers and submarines; by 1989, it had 500,000 men. The Soviet Air Force focused on a fleet of strategic bombers whose wartime mission was to destroy enemy infrastructure and nuclear capacity; it also had a number of fighters and tactical bombers to support the army. The Strategic Missile Forces had more than 1,400 intercontinental ballistic missiles (ICBMs), deployed among 28 bases and 300 command centers.
+
+ In the post-war period, the Soviet Army was directly involved in several military operations abroad, including the suppression of the uprising in East Germany (1953), the Hungarian Revolution (1956) and the invasion of Czechoslovakia (1968). The Soviet Union also participated in the war in Afghanistan between 1979 and 1989.
+
+ In the Soviet Union, general conscription applied.
+
+ At the end of the 1950s, with the help of engineers and technologies captured and imported from defeated Nazi Germany, the Soviets constructed the first satellite, Sputnik 1, and thus overtook the United States in space. Other successful satellites followed, and experimental dogs were also sent into space. On April 12, 1961, the first cosmonaut, Yuri Gagarin, was sent into space; he orbited the Earth once and landed successfully in the Kazakh steppe. At that time, the first plans for space shuttles and orbital stations were drawn up in Soviet design offices, but personal disputes between designers and management ultimately prevented their realization.
+
+ The first big fiasco for the USSR was the American Moon landing, which the Soviets were unable to answer in time with a comparable project. In the 1970s, more specific proposals for the design of a space shuttle began to emerge, but shortcomings, especially in the electronics industry (rapid overheating of electronics), postponed the program until the end of the 1980s. The first shuttle, Buran, flew in 1988, but without a human crew. Another shuttle, Ptichka, was still under construction when the shuttle project was canceled in 1991. Their launch vehicle, the super-heavy Energia rocket, the most powerful in the world, remains unused today.
+
+ In the late 1980s, the Soviet Union managed to build the Mir orbital station. Built on the design of the Salyut stations, its tasks were purely civilian and research-oriented. In the 1990s, after the US Skylab had been abandoned, it was the only orbital station in operation. Gradually, other modules were added to it, including American ones. However, the technical condition of the station deteriorated rapidly, especially after a fire on board, so in 2001 it was deorbited into the atmosphere, where it burned up.
+
+ The Soviet Union adopted a command economy, whereby production and distribution of goods were centralized and directed by the government. The first Bolshevik experience with a command economy was the policy of War communism, which involved the nationalization of industry, centralized distribution of output, coercive requisition of agricultural production, and attempts to eliminate money circulation, private enterprises and free trade. After the severe economic collapse, Lenin replaced war communism with the New Economic Policy (NEP) in 1921, legalizing free trade and private ownership of small businesses. The economy quickly recovered as a result.[149]
+
+ After a long debate among the members of the Politburo about the course of economic development, by 1928–1929, upon gaining control of the country, Stalin abandoned the NEP and pushed for full central planning, starting forced collectivization of agriculture and enacting draconian labor legislation. Resources were mobilized for rapid industrialization, which significantly expanded Soviet capacity in heavy industry and capital goods during the 1930s.[149] The primary motivation for industrialization was preparation for war, mostly due to distrust of the outside capitalist world.[150] As a result, the USSR was transformed from a largely agrarian economy into a great industrial power, leading the way for its emergence as a superpower after World War II.[151] The war caused extensive devastation of the Soviet economy and infrastructure, which required massive reconstruction.[152]
+
+ By the early 1940s, the Soviet economy had become relatively self-sufficient; for most of the period until the creation of Comecon, only a tiny share of domestic products was traded internationally.[153] After the creation of the Eastern Bloc, external trade rose rapidly. However, the influence of the world economy on the USSR was limited by fixed domestic prices and a state monopoly on foreign trade.[154] Grain and sophisticated consumer manufactures became major import articles from around the 1960s.[153] During the arms race of the Cold War, the Soviet economy was burdened by military expenditures, heavily lobbied for by a powerful bureaucracy dependent on the arms industry. At the same time, the USSR became the largest arms exporter to the Third World. Significant amounts of Soviet resources during the Cold War were allocated in aid to the other socialist states.[153]
+
+ From the 1930s until its dissolution in late 1991, the way the Soviet economy operated remained essentially unchanged. The economy was formally directed by central planning, carried out by Gosplan and organized in five-year plans. However, in practice, the plans were highly aggregated and provisional, subject to ad hoc intervention by superiors. All critical economic decisions were taken by the political leadership. Allocated resources and plan targets were usually denominated in rubles rather than in physical goods. Credit was discouraged, but widespread. The final allocation of output was achieved through relatively decentralized, unplanned contracting. Although in theory prices were legally set from above, in practice they were often negotiated, and informal horizontal links (e.g. between producer factories) were widespread.[149]
+
+ A number of basic services were state-funded, such as education and health care. In the manufacturing sector, heavy industry and defence were prioritized over consumer goods.[155] Consumer goods, particularly outside large cities, were often scarce, of poor quality and limited variety. Under the command economy, consumers had almost no influence on production, and the changing demands of a population with growing incomes could not be satisfied by supplies at rigidly fixed prices.[156] A massive unplanned second economy grew up at low levels alongside the planned one, providing some of the goods and services that the planners could not. The legalization of some elements of the decentralized economy was attempted with the reform of 1965.[149]
+
+ Although statistics of the Soviet economy are notoriously unreliable and its economic growth difficult to estimate precisely,[157][158] by most accounts, the economy continued to expand until the mid-1980s. During the 1950s and 1960s, it had comparatively high growth and was catching up to the West.[159] However, after 1970, the growth, while still positive, steadily declined much more quickly and consistently than in other countries, despite a rapid increase in the capital stock (the rate of capital increase was only surpassed by Japan).[149]
+
+ Overall, the growth rate of per capita income in the Soviet Union between 1960 and 1989 was slightly above the world average (based on 102 countries). According to Stanley Fischer and William Easterly, growth could have been faster. By their calculation, per capita income in 1989 should have been twice as high as it was, considering the amount of investment, education and population. The authors attribute this poor performance to the low productivity of capital.[160] Steven Rosefielde states that the standard of living declined due to Stalin's despotism. While there was a brief improvement after his death, it lapsed into stagnation.[161]
+
+ In 1987, Mikhail Gorbachev attempted to reform and revitalize the economy with his program of perestroika. His policies relaxed state control over enterprises but did not replace it with market incentives, resulting in a sharp decline in output. The economy, already suffering from reduced petroleum export revenues, started to collapse. Prices were still fixed, and property was still largely state-owned, until after the country's dissolution.[149][156] For most of the period after World War II until its collapse, Soviet GDP (PPP) was the second-largest in the world, and third during the second half of the 1980s,[162] although on a per-capita basis, it was behind that of First World countries.[163] Compared to countries with similar per-capita GDP in 1928, the Soviet Union experienced significant growth.[164]
+
+ In 1990, the country had a Human Development Index of 0.920, placing it in the "high" category of human development. It was the third-highest in the Eastern Bloc, behind Czechoslovakia and East Germany, and ranked 25th among the 130 countries measured worldwide.[165]
+
+ The Soviet Union's need for fuel declined from the 1970s to the 1980s,[166] both per ruble of gross social product and per ruble of industrial product. At the start, this decline was very rapid, but it gradually slowed between 1970 and 1975; from 1975 to 1980, it slowed further, to only 2.6%.[167] David Wilson, a historian, believed that the gas industry would account for 40% of Soviet fuel production by the end of the century. His theory did not come to fruition because of the USSR's collapse.[168] In theory, the USSR's energy fields would have allowed it to sustain an economic growth rate of 2–2.5% during the 1990s.[169] However, the energy sector faced many difficulties, among them the country's high military expenditure and hostile relations with the First World.[170]
+
+ In 1991, the Soviet Union had a pipeline network of 82,000 kilometres (51,000 mi) for crude oil and another 206,500 kilometres (128,300 mi) for natural gas.[171] Petroleum and petroleum-based products, natural gas, metals, wood, agricultural products, and a variety of manufactured goods, primarily machinery, arms and military equipment, were exported.[172] In the 1970s and 1980s, the USSR heavily relied on fossil fuel exports to earn hard currency.[153] At its peak in 1988, it was the largest producer and second-largest exporter of crude oil, surpassed only by Saudi Arabia.[173]
+
+ The Soviet Union placed great emphasis on science and technology within its economy.[174] However, the most remarkable Soviet successes in technology, such as producing the world's first space satellite, typically were the responsibility of the military.[155] Lenin believed that the USSR would never overtake the developed world if it remained as technologically backward as it was upon its founding. Soviet authorities proved their commitment to Lenin's belief by developing massive networks of research and development organizations. In the early 1960s, the Soviets awarded 40% of chemistry PhDs to women, compared to only 5% in the United States.[175] By 1989, Soviet scientists were among the world's best-trained specialists in several areas, such as energy physics, selected areas of medicine, mathematics, welding and military technologies. Due to rigid state planning and bureaucracy, however, the Soviets remained far behind technologically in chemistry, biology, and computers when compared to the First World.
+
+ Under the Reagan administration, Project Socrates determined that the Soviet Union addressed the acquisition of science and technology in a manner that was radically different from what the US was using. In the case of the US, economic prioritization was being used for indigenous research and development as the means to acquire science and technology in both the private and public sectors. In contrast, the USSR was offensively and defensively maneuvering in the acquisition and utilization of worldwide technology, to increase the competitive advantage that it acquired from the technology while preventing the US from acquiring a competitive advantage. However, this technology-based planning was executed in a centralized, government-centric manner that greatly hindered its flexibility, which the US exploited to undermine the strength of the Soviet Union and thus foster its reform.[176][177][178]
+
+ Transport was a vital component of the country's economy. The economic centralization of the late 1920s and 1930s led to the development of infrastructure on a massive scale, most notably the establishment of Aeroflot, an aviation enterprise.[179] The country had a wide variety of modes of transport by land, water and air.[171] However, due to inadequate maintenance, much of the road, water and civil aviation transport was outdated and technologically backward compared to the First World.[180]
+
+ Soviet rail transport was the largest and most intensively used in the world;[180] it was also better developed than most of its Western counterparts.[181] By the late 1970s and early 1980s, Soviet economists were calling for the construction of more roads to alleviate some of the burdens from the railways and to improve the Soviet government budget.[182] The street network and automotive industry[183] remained underdeveloped,[184] and dirt roads were common outside major cities.[185] Soviet maintenance projects proved unable to take care of even the few roads the country had. By the early-to-mid-1980s, the Soviet authorities tried to solve the road problem by ordering the construction of new ones.[185] Meanwhile, the automobile industry was growing at a faster rate than road construction.[186] The underdeveloped road network led to a growing demand for public transport.[187]
+
+ Despite improvements, several aspects of the transport sector remained riddled with problems due to outdated infrastructure, lack of investment, corruption and bad decision-making. Soviet authorities were unable to meet the growing demand for transport infrastructure and services.
+
+ The Soviet merchant navy was one of the largest in the world.[171]
+
+ Excess deaths throughout World War I and the Russian Civil War (including the postwar famine) amounted to a combined total of 18 million,[188] some 10 million in the 1930s,[47] and more than 26 million in 1941–45. The postwar Soviet population was 45 to 50 million smaller than it would have been if pre-war demographic growth had continued.[54] According to Catherine Merridale, "... a reasonable estimate would place the total number of excess deaths for the whole period somewhere around 60 million."[189]
+
+ The birth rate of the USSR decreased from 44.0 per thousand in 1926 to 18.0 in 1974, mainly due to increasing urbanization and the rising average age of marriage. The mortality rate demonstrated a gradual decrease as well – from 23.7 per thousand in 1926 to 8.7 in 1974. In general, the birth rates of the southern republics in Transcaucasia and Central Asia were considerably higher than those in the northern parts of the Soviet Union, and in some cases even increased in the post–World War II period, a phenomenon partly attributed to slower rates of urbanization and traditionally earlier marriages in the southern republics.[190] Soviet Europe moved towards sub-replacement fertility, while Soviet Central Asia continued to exhibit population growth well above replacement-level fertility.[191]
+
+ The late 1960s and the 1970s witnessed a reversal of the declining trajectory of the mortality rate in the USSR. The reversal was especially notable among men of working age and was particularly prevalent in Russia and other predominantly Slavic areas of the country.[192] An analysis of the official data from the late 1980s showed that after worsening in the late 1970s and the early 1980s, adult mortality began to improve again.[193] The infant mortality rate increased from 24.7 in 1970 to 27.9 in 1974. Some researchers regarded the rise as mostly real, a consequence of worsening health conditions and services.[194] The rises in both adult and infant mortality were not explained or defended by Soviet officials, and the Soviet government stopped publishing all mortality statistics for ten years. Soviet demographers and health specialists remained silent about the mortality increases until the late 1980s, when the publication of mortality data resumed and researchers could delve into the real causes.[195]
+
+ Under Lenin, the state made explicit commitments to promote the equality of men and women. Many early Russian feminists and ordinary Russian working women actively participated in the Revolution, and many more were affected by the events of that period and the new policies. Beginning in October 1918, Lenin's government liberalized divorce and abortion laws, decriminalized homosexuality (re-criminalized in the 1930s), permitted cohabitation, and ushered in a host of reforms.[196] However, without birth control, the new system produced many broken marriages, as well as countless out-of-wedlock children.[197] The epidemic of divorces and extramarital affairs created social hardships when Soviet leaders wanted people to concentrate their efforts on growing the economy. Giving women control over their fertility also led to a precipitous decline in the birth rate, perceived as a threat to the country's military power. By 1936, Stalin had reversed most of the liberal laws, ushering in a pronatalist era that lasted for decades.[198]
+
+ In 1917, Russia became the first great power to grant women the right to vote.[199] After heavy casualties in World Wars I and II, women outnumbered men in Russia by a 4:3 ratio.[200] This contributed to the larger role women played in Russian society compared to other great powers at the time.
+
+ Anatoly Lunacharsky became the first People's Commissar for Education of Soviet Russia. In the beginning, the Soviet authorities placed great emphasis on the elimination of illiteracy, and literate people were automatically hired as teachers. For a short period, quality was sacrificed for quantity. All left-handed children were forced to write with their right hand in the Soviet school system.[201][202][203][204] By 1940, Stalin could announce that illiteracy had been eliminated. Throughout the 1930s, social mobility rose sharply, which has been attributed to reforms in education.[205] In the aftermath of World War II, the country's educational system expanded dramatically, which had a tremendous effect. In the 1960s, nearly all children had access to education, the only exception being those living in remote areas. Nikita Khrushchev tried to make education more accessible, making it clear to children that education was closely linked to the needs of society. Education also became important in giving rise to the New Man.[206] Citizens directly entering the workforce had the constitutional right to a job and to free vocational training.
+
+ The education system was highly centralized and universally accessible to all citizens, with affirmative action for applicants from nations associated with cultural backwardness. However, as part of the general antisemitic policy, an unofficial Jewish quota was applied in the leading institutions of higher education by subjecting Jewish applicants to harsher entrance examinations.[207][208][209][210] The Brezhnev era also introduced a rule that required all university applicants to present a reference from the local Komsomol party secretary.[211] According to statistics from 1986, the number of higher-education students per 10,000 population was 181 for the USSR, compared to 517 for the US.[212]
+
+ The Soviet Union was an ethnically diverse country, with more than 100 distinct ethnic groups. The total population was estimated at 293 million in 1991. According to a 1990 estimate, the majority were Russians (50.78%), followed by Ukrainians (15.45%) and Uzbeks (5.84%).[213]
+
+ All citizens of the USSR had their own ethnic affiliation. The ethnicity of a person was chosen at the age of sixteen[214] by the child's parents. If the parents did not agree, the child was automatically assigned the ethnicity of the father. Partly due to Soviet policies, some of the smaller minority ethnic groups were considered part of larger ones, such as the Mingrelians of Georgia, who were classified with the linguistically related Georgians.[215] Some ethnic groups voluntarily assimilated, while others were brought in by force. Russians, Belarusians, and Ukrainians shared close cultural ties, while other groups did not. With multiple nationalities living in the same territory, ethnic antagonisms developed over the years.[216]
+
+ Members of various ethnicities participated in legislative bodies. Organs of power such as the Politburo and the Secretariat of the Central Committee were formally ethnically neutral, but in reality ethnic Russians were overrepresented, although there were also non-Russian leaders in the Soviet leadership, such as Joseph Stalin, Grigory Zinoviev, Nikolai Podgorny and Andrei Gromyko. During the Soviet era, a significant number of ethnic Russians and Ukrainians migrated to other Soviet republics, and many of them settled there. According to the last census in 1989, the Russian "diaspora" in the Soviet republics had reached 25 million.[217]
+
+ Ethnographic map of the Soviet Union, 1941
+
+ Number and share of Ukrainians in the population of the regions of the RSFSR (1926 census)
+
+ Number and share of Ukrainians in the population of the regions of the RSFSR (1979 census)
+
+ In 1917, before the revolution, health conditions were significantly behind those of developed countries. As Lenin later noted, "Either the lice will defeat socialism, or socialism will defeat the lice".[218] The Soviet principle of health care was conceived by the People's Commissariat for Health in 1918. Health care was to be controlled by the state and would be provided to its citizens free of charge, a revolutionary concept at the time. Article 42 of the 1977 Soviet Constitution gave all citizens the right to health protection and free access to any health institution in the USSR. Before Leonid Brezhnev became General Secretary, the Soviet healthcare system was held in high esteem by many foreign specialists. This changed, however, from Brezhnev's accession to Mikhail Gorbachev's tenure as leader, a period during which the health care system was heavily criticized for many basic faults, such as the quality of service and the unevenness in its provision.[219] Minister of Health Yevgeniy Chazov, during the 19th Congress of the Communist Party of the Soviet Union, while highlighting such successes as having the most doctors and hospitals in the world, recognized the system's areas for improvement and felt that billions of Soviet rubles were squandered.[220]
+
+ After the revolution, life expectancy for all age groups went up. This statistic was seen by some as evidence that the socialist system was superior to the capitalist system. The improvements continued into the 1960s, when statistics indicated that life expectancy briefly surpassed that of the United States. Life expectancy started to decline in the 1970s, possibly because of alcohol abuse. At the same time, infant mortality began to rise. After 1974, the government stopped publishing statistics on the matter. This trend can be partly explained by the number of pregnancies rising drastically in the Asian part of the country, where infant mortality was the highest, while declining markedly in the more developed European part of the Soviet Union.[221]
+
+ Under Lenin, the government gave small language groups their own writing systems.[222] The development of these writing systems was highly successful, even though some flaws were detected. During the later days of the USSR, countries with the same multilingual situation implemented similar policies. A serious problem when creating these writing systems was that the languages differed greatly in dialect from each other.[223] When a language had been given a writing system and appeared in a notable publication, it would attain "official language" status. There were many minority languages which never received their own writing system; therefore, their speakers were forced to learn a second language.[224] There are examples where the government retreated from this policy, most notably under Stalin, when education was discontinued in languages that were not widespread. These languages were then assimilated into another language, mostly Russian.[225] During World War II, some minority languages were banned, and their speakers accused of collaborating with the enemy.[226]
+
+ As the most widely spoken of the Soviet Union's many languages, Russian de facto functioned as an official language, as the "language of interethnic communication" (Russian: язык межнационального общения), but only assumed the de jure status as the official national language in 1990.[227]
+
+ Christianity and Islam had the highest number of adherents among the religious citizens.[228] Eastern Christianity predominated among Christians, with Russia's traditional Russian Orthodox Church being the largest Christian denomination. About 90% of the Soviet Union's Muslims were Sunnis, with Shias being concentrated in the Azerbaijan SSR.[228] Smaller groups included Roman Catholics, Jews, Buddhists, and a variety of Protestant denominations (especially Baptists and Lutherans).[228]
+
+ Religious influence had been strong in the Russian Empire. The Russian Orthodox Church enjoyed a privileged status as the church of the monarchy and took part in carrying out official state functions.[229] The immediate period following the establishment of the Soviet state included a struggle against the Orthodox Church, which the revolutionaries considered an ally of the former ruling classes.[230]
+
+ In Soviet law, the "freedom to hold religious services" was constitutionally guaranteed, although the ruling Communist Party regarded religion as incompatible with the Marxist spirit of scientific materialism.[230] In practice, the Soviet system subscribed to a narrow interpretation of this right, and in fact utilized a range of official measures to discourage religion and curb the activities of religious groups.[230]
+
+ The 1918 Council of People's Commissars decree establishing the Russian SFSR as a secular state also decreed that "the teaching of religion in all [places] where subjects of general instruction are taught, is forbidden. Citizens may teach and may be taught religion privately."[231] Among further restrictions, those adopted in 1929 included express prohibitions on a range of church activities, including meetings for organized Bible study.[230] Both Christian and non-Christian establishments were shut down by the thousands in the 1920s and 1930s. By 1940, as many as 90% of the churches, synagogues, and mosques that had been operating in 1917 were closed.[232]
+
+ Under the doctrine of state atheism, there was a "government-sponsored program of forced conversion to atheism" conducted by the Communists.[233][234][235] The regime targeted religions based on state interests, and while most organized religions were never outlawed, religious property was confiscated, believers were harassed, and religion was ridiculed while atheism was propagated in schools.[236] In 1925, the government founded the League of Militant Atheists to intensify the propaganda campaign.[237] Accordingly, although personal expressions of religious faith were not explicitly banned, a strong sense of social stigma was imposed on them by the formal structures and mass media, and it was generally considered unacceptable for members of certain professions (teachers, state bureaucrats, soldiers) to be openly religious. As for the Russian Orthodox Church, Soviet authorities sought to control it and, in times of national crisis, to exploit it for the regime's own purposes; but their ultimate goal was to eliminate it. During the first five years of Soviet power, the Bolsheviks executed 28 Russian Orthodox bishops and over 1,200 Russian Orthodox priests. Many others were imprisoned or exiled. Believers were harassed and persecuted. Most seminaries were closed, and the publication of most religious material was prohibited. By 1941, only 500 churches remained open out of about 54,000 in existence before World War I.
+
+ Convinced that religious anti-Sovietism had become a thing of the past, and with the looming threat of war, the Stalin regime began shifting to a more moderate religion policy in the late 1930s.[238] Soviet religious establishments overwhelmingly rallied to support the war effort during World War II. Amid other accommodations to religious faith after the German invasion, churches were reopened. Radio Moscow began broadcasting a religious hour, and a historic meeting between Stalin and Orthodox Church leader Patriarch Sergius of Moscow was held in 1943. Stalin had the support of the majority of the religious people in the USSR even through the late 1980s.[238] The general tendency of this period was an increase in religious activity among believers of all faiths.[239]
+
+ Under Nikita Khrushchev, the state leadership clashed with the churches in 1958–1964, a period when atheism was emphasized in the educational curriculum and numerous state publications promoted atheistic views.[238] Between 1959 and 1965, the number of churches fell from 20,000 to 10,000, and the number of synagogues dropped from 500 to 97.[240] The number of working mosques also declined, falling from 1,500 to 500 within a decade.[240]
+
+ Religious institutions remained monitored by the Soviet government, but churches, synagogues, temples, and mosques were all given more leeway in the Brezhnev era.[241] Official relations between the Orthodox Church and the government again warmed to the point that the Brezhnev government twice honored Orthodox Patriarch Alexy I with the Order of the Red Banner of Labour.[242] A poll conducted by Soviet authorities in 1982 recorded 20% of the Soviet population as "active religious believers."[243]
+
+ The culture of the Soviet Union passed through several stages during the USSR's existence. During the first decade following the revolution, there was relative freedom, and artists experimented with several different styles to find a distinctive Soviet style of art. Lenin wanted art to be accessible to the Russian people. On the other hand, hundreds of intellectuals, writers, and artists were exiled or executed and their work banned, among them Nikolay Gumilyov, who was shot for allegedly conspiring against the Bolshevik regime, and Yevgeny Zamyatin.[244]
+
+ The government encouraged a variety of trends. In art and literature, numerous schools, some traditional and others radically experimental, proliferated. Communist writers Maxim Gorky and Vladimir Mayakovsky were active during this time. As a means of influencing a largely illiterate society, films received encouragement from the state, and much of director Sergei Eisenstein's best work dates from this period.
+
+ During Stalin's rule, Soviet culture was characterized by the rise and domination of the government-imposed style of socialist realism, with all other trends being severely repressed, with rare exceptions, such as Mikhail Bulgakov's works. Many writers were imprisoned and killed.[245]
+
+ Following the Khrushchev Thaw, censorship was diminished. During this time, a distinctive period of Soviet culture developed, characterized by conformist public life and an intense focus on personal life. Greater experimentation in art forms was again permissible, resulting in the production of more sophisticated and subtly critical work. The regime loosened its emphasis on socialist realism; thus, for instance, many protagonists of the novels of author Yury Trifonov concerned themselves with problems of daily life rather than with building socialism. Underground dissident literature, known as samizdat, developed during this late period. In architecture, the Khrushchev era mostly focused on functional design as opposed to the highly decorated style of Stalin's epoch.
+
+ In the second half of the 1980s, Gorbachev's policies of perestroika and glasnost significantly expanded freedom of expression throughout the country in the media and the press.[246]
+
+ Founded on 20 July 1924 in Moscow, Sovetsky Sport was the first sports newspaper of the Soviet Union.
+
+ The Soviet Olympic Committee formed on 21 April 1951, and the IOC recognized the new body at its 45th session. In the same year, when the Soviet representative Konstantin Andrianov became an IOC member, the USSR officially joined the Olympic Movement. The 1952 Summer Olympics in Helsinki thus became the first Olympic Games for Soviet athletes.
+
+ The Soviet Union national ice hockey team won nearly every world championship and Olympic tournament between 1954 and 1991 and never failed to medal in any International Ice Hockey Federation (IIHF) tournament in which they competed.
+
+ The advent of the state-sponsored "full-time amateur athlete" of the Eastern Bloc countries further eroded the ideology of the pure amateur, as it put the self-financed amateurs of the Western countries at a disadvantage. The Soviet Union entered teams of athletes who were all nominally students, soldiers, or working in a profession – in reality, the state paid many of these competitors to train on a full-time basis.[247] Nevertheless, the IOC held to the traditional rules regarding amateurism.[248]
+
+ A 1989 report by a committee of the Australian Senate claimed that "there is hardly a medal winner at the Moscow Games, certainly not a gold medal winner...who is not on one sort of drug or another: usually several kinds. The Moscow Games might well have been called the Chemists' Games".[249]
+
+ A member of the IOC Medical Commission, Manfred Donike, privately ran additional tests with a new technique for identifying abnormal levels of testosterone by measuring its ratio to epitestosterone in urine. Twenty percent of the specimens he tested, including those from sixteen gold medalists, would have resulted in disciplinary proceedings had the tests been official. The results of Donike's unofficial tests later convinced the IOC to add his new technique to their testing protocols.[250] The first documented case of "blood doping" occurred at the 1980 Summer Olympics when a runner was transfused with two pints of blood before winning medals in the 5000 m and 10,000 m.[251]
+
+ Documentation obtained in 2016 revealed the Soviet Union's plans for a statewide doping system in track and field in preparation for the 1984 Summer Olympics in Los Angeles. Dated before the decision to boycott the 1984 Games, the document detailed the existing steroids operations of the program, along with suggestions for further enhancements. Dr. Sergei Portugalov of the Institute for Physical Culture prepared the communication, directed to the Soviet Union's head of track and field. Portugalov later became one of the leading figures involved in the implementation of Russian doping before the 2016 Summer Olympics.[252]
+
+ Official Soviet environmental policy always attached great importance to actions in which human beings actively improve nature. Lenin's slogan, "Communism is Soviet power and electrification of the country!", in many respects summarizes the focus on modernization and industrial development. During the first five-year plan, launched in 1928, Stalin proceeded to industrialize the country at all costs. Values such as environmental and nature protection were completely ignored in the struggle to create a modern industrial society. After Stalin's death, the authorities focused more on environmental issues, but the basic perception of the value of environmental protection remained the same.[253]
+
+ The Soviet media always focused on the vast expanse of land and the virtually indestructible natural resources, fostering the feeling that contamination and looting of nature were not a problem. The Soviet state also firmly believed that scientific and technological progress would solve all the problems. Official ideology held that under socialism environmental problems could easily be overcome, unlike in capitalist countries, where they seemingly could not be solved. The Soviet authorities had an almost unwavering belief that man could transcend nature. However, when the authorities had to admit that there were environmental problems in the USSR in the 1980s, they explained the problems by arguing that socialism had not yet been fully developed; pollution in a socialist society was only a temporary anomaly that would be resolved once socialism had developed further.
+
+ The Chernobyl disaster in 1986, in which large quantities of radioactive isotopes were released into the atmosphere, was the first major accident at a civilian nuclear power plant, unparalleled in the world. The radioactive fallout spread relatively far. The main health problem after the accident was 4,000 new cases of thyroid cancer, which led to a relatively low number of deaths (WHO data, 2005). However, the long-term effects of the accident are unknown. Another major accident was the Kyshtym disaster.[254]
+
+ After the fall of the USSR, it was discovered that the environmental problems were greater than the Soviet authorities had admitted. The Kola Peninsula was one of the places with clear problems. Around the industrial cities of Monchegorsk and Norilsk, where, for example, nickel was mined, all forests were killed by contamination, while the northern and other parts of Russia were affected by emissions. During the 1990s, people in the West were also interested in the radioactive hazards of nuclear facilities, decommissioned nuclear submarines, and the processing of nuclear waste or spent nuclear fuel. It was also known in the early 1990s that the USSR had transported radioactive material to the Barents Sea and Kara Sea, which was later confirmed by the Russian parliament. The sinking of the K-141 Kursk submarine in 2000 further raised concerns in the West.[255] Earlier accidents had involved the submarines K-19, K-8 and K-129.
+
+ 1918–1924  Turkestan3
+ 1918–1941  Volga German4
+ 1919–1990  Bashkir
+ 1920–1925  Kirghiz2
+ 1920–1990  Tatar
+ 1921–1990  Adjar
+ 1921–1945  Crimean
+ 1921–1991  Dagestan
+ 1921–1924  Mountain
+
+ 1921–1990  Nakhchivan
+ 1922–1991  Yakut
+ 1923–1990  Buryat1
+ 1923–1940  Karelian
+ 1924–1940  Moldavian
+ 1924–1929  Tajik
+ 1925–1992  Chuvash
+ 1925–1936  Kazak2
+ 1926–1936  Kirghiz
+
+ 1931–1991  Abkhaz
+ 1932–1992  Karakalpak
+ 1934–1990  Mordovian
+ 1934–1990  Udmurt
+ 1935–1943  Kalmyk
+ 1936–1944  Checheno-Ingush
+ 1936–1944  Kabardino-Balkar
+ 1936–1990  Komi
+ 1936–1990  Mari
+
+ 1936–1990  North Ossetian
+ 1944–1957  Kabardin
+ 1956–1991  Karelian
+ 1957–1990  Checheno-Ingush
+ 1957–1991  Kabardino-Balkar
+ 1958–1990  Kalmyk
+ 1961–1992  Tuva
+ 1990–1991  Gorno-Altai
+ 1991–1992  Crimean
en/5864.html.txt ADDED
The diff for this file is too large to render. See raw diff
 
en/5865.html.txt ADDED
@@ -0,0 +1,332 @@
+
+
+ Coordinates: 40°N 100°W
+
+ The United States of America (USA), commonly known as the United States (U.S. or US) or America, is a country mostly located in central North America, between Canada and Mexico. It consists of 50 states, a federal district, five major self-governing territories, and various possessions.[i] At 3.8 million square miles (9.8 million km2), it is the world's third- or fourth-largest country by total area.[e] With a 2019 estimated population of over 328 million,[7] the U.S. is the third most populous country in the world. The Americans are a racially and ethnically diverse population that has been shaped through centuries of immigration. The capital is Washington, D.C., and the most populous city is New York City.
+
+ Paleo-Indians migrated from Siberia to the North American mainland at least 12,000 years ago,[19] and European colonization began in the 16th century. The United States emerged from the thirteen British colonies established along the East Coast. Numerous disputes between Great Britain and the colonies led to the American Revolutionary War lasting between 1775 and 1783, leading to independence.[20] Beginning in the late 18th century, the United States vigorously expanded across North America, gradually acquiring new territories,[21] killing and displacing Native Americans, and admitting new states. By 1848, the United States spanned the continent.[21]
+ Slavery was legal in much of the United States until the second half of the 19th century, when the American Civil War led to its abolition.[22][23]
+
+ The Spanish–American War and World War I entrenched the U.S. as a world power, a status confirmed by the outcome of World War II. It was the first country to develop nuclear weapons and is the only country to have used them in warfare. During the Cold War, the United States and the Soviet Union competed in the Space Race, culminating with the 1969 Apollo 11 mission, the spaceflight that first landed humans on the Moon. The end of the Cold War and collapse of the Soviet Union in 1991 left the United States as the world's sole superpower.[24]
+
+ The United States is a federal republic and a representative democracy. It is a founding member of the United Nations, World Bank, International Monetary Fund, Organization of American States (OAS), NATO, and other international organizations. It is a permanent member of the United Nations Security Council.
+
+ A highly developed country, the United States is the world's largest economy and accounts for approximately a quarter of global gross domestic product (GDP).[25] The United States is the world's largest importer and the second-largest exporter of goods, by value.[26][27] Although its population is only 4.3% of the world total,[28] it holds 29.4% of the total wealth in the world, the largest share held by any country.[29] Despite income and wealth disparities, the United States continues to rank high in measures of socioeconomic performance, including average wage, median income, median wealth, human development, per capita GDP, and worker productivity.[30][31] It is the foremost military power in the world, making up more than a third of global military spending,[32] and is a leading political, cultural, and scientific force internationally.[33]
+
+ The first known use of the name "America" dates back to 1507, when it appeared on a world map created by the German cartographer Martin Waldseemüller. On this map, the name applied to South America in honor of the Italian explorer Amerigo Vespucci.[34] After returning from his expeditions, Vespucci first postulated that the West Indies did not represent Asia's eastern limit, as initially thought by Christopher Columbus, but instead were part of an entirely separate landmass thus far unknown to the Europeans.[35] In 1538, the Flemish cartographer Gerardus Mercator used the name "America" on his own world map, applying it to the entire Western Hemisphere.[36]
+
+ The first documentary evidence of the phrase "United States of America" dates from a January 2, 1776 letter written by Stephen Moylan, Esq., to Lt. Col. Joseph Reed, George Washington's aide-de-camp and Muster-Master General of the Continental Army. Moylan expressed his wish to go "with full and ample powers from the United States of America to Spain" to seek assistance in the revolutionary war effort.[37][38][39] The first known publication of the phrase "United States of America" was in an anonymous essay in The Virginia Gazette newspaper in Williamsburg, Virginia, on April 6, 1776.[40]
+
+ The second draft of the Articles of Confederation, prepared by John Dickinson and completed no later than June 17, 1776, declared "The name of this Confederation shall be the 'United States of America'".[41] The final version of the Articles sent to the states for ratification in late 1777 contains the sentence "The Stile of this Confederacy shall be 'The United States of America'".[42] In June 1776, Thomas Jefferson wrote the phrase "UNITED STATES OF AMERICA" in all capitalized letters in the headline of his "original Rough draught" of the Declaration of Independence.[41] This draft of the document did not surface until June 21, 1776, and it is unclear whether it was written before or after Dickinson used the term in his June 17 draft of the Articles of Confederation.[41]
+
+ The short form "United States" is also standard. Other common forms are the "U.S.," the "USA," and "America." Colloquial names are the "U.S. of A." and, internationally, the "States." "Columbia," a name popular in poetry and songs of the late 18th century, derives its origin from Christopher Columbus; it appears in the name "District of Columbia." Many landmarks and institutions in the Western Hemisphere bear his name, including the country of Colombia.[43]
+
+ The phrase "United States" was originally plural, a description of a collection of independent states—e.g., "the United States are"—including in the Thirteenth Amendment to the United States Constitution, ratified in 1865.[44] The singular form—e.g., "the United States is"—became popular after the end of the Civil War. The singular form is now standard; the plural form is retained in the idiom "these United States." The difference is more significant than usage; it is a difference between a collection of states and a unit.[45]
+
+ A citizen of the United States is an "American." "United States," "American" and "U.S." refer to the country adjectivally ("American values," "U.S. forces"). In English, the word "American" rarely refers to topics or subjects not directly connected with the United States.[46]
+
+ It has been generally accepted that the first inhabitants of North America migrated from Siberia by way of the Bering land bridge and arrived at least 12,000 years ago; however, increasing evidence suggests an even earlier arrival.[19][47][48] After crossing the land bridge, the Paleo-Indians moved southward along the Pacific coast[49] and through an interior ice-free corridor.[50] The Clovis culture, which appeared around 11,000 BC, was initially believed to represent the first wave of human settlement of the Americas.[51][52] It is likely these represent the first of three major waves of migration into North America.[53]
+
+ Over time, indigenous cultures in North America grew increasingly complex, and some, such as the pre-Columbian Mississippian culture in the southeast, developed advanced agriculture, grand architecture, and state-level societies.[54] The Mississippian culture flourished in the south from 800 to 1600 AD, extending from the Mexican border down through Florida.[55] Its city state Cahokia is the largest, most complex pre-Columbian archaeological site in the modern-day United States.[56] In the Four Corners region, Ancestral Puebloan culture developed from centuries of agricultural experimentation.[57]
+
+ Three UNESCO World Heritage Sites in the United States are credited to the Pueblos: Mesa Verde National Park, Chaco Culture National Historical Park, and Taos Pueblo.[58][59] The earthworks constructed by Native Americans of the Poverty Point culture have also been designated a UNESCO World Heritage site. In the southern Great Lakes region, the Iroquois Confederacy was established at some point between the twelfth and fifteenth centuries.[60] Most prominent along the Atlantic coast were the Algonquian tribes, who practiced hunting and trapping, along with limited cultivation.
+
+ With the progress of European colonization in the territories of the contemporary United States, the Native Americans were often conquered and displaced.[61] The native population of America declined after European arrival for various reasons,[62][63] primarily diseases such as smallpox and measles.[64][65]
+
+ Estimating the native population of North America at the time of European contact is difficult.[66][67] Douglas H. Ubelaker of the Smithsonian Institution estimated that there was a population of 92,916 in the south Atlantic states and a population of 473,616 in the Gulf states,[68] but most academics regard this figure as too low.[66] Anthropologist Henry F. Dobyns believed the populations were much higher, suggesting 1,100,000 along the shores of the Gulf of Mexico, 2,211,000 people living between Florida and Massachusetts, 5,250,000 in the Mississippi Valley and tributaries and 697,000 people in the Florida peninsula.[66][67]
+
+ In the early days of colonization, many European settlers were subject to food shortages, disease, and attacks from Native Americans. Native Americans were also often at war with neighboring tribes and allied with Europeans in their colonial wars. In many cases, however, natives and settlers came to depend on each other. Settlers traded for food and animal pelts; natives for guns, ammunition and other European goods.[69] Natives taught many settlers to cultivate corn, beans, and squash. European missionaries and others felt it was important to "civilize" the Native Americans and urged them to adopt European agricultural techniques and lifestyles.[70][71]
+
+ With the advancement of European colonization in North America, the Native Americans were often conquered and displaced.[72] The first Europeans to arrive in the contiguous United States were Spanish conquistadors such as Juan Ponce de León, who made his first visit to Florida in 1513. Even earlier, Christopher Columbus landed in Puerto Rico on his 1493 voyage. The Spanish set up the first settlements in Florida and New Mexico such as Saint Augustine[73] and Santa Fe. The French established their own as well along the Mississippi River. Successful English settlement on the eastern coast of North America began with the Virginia Colony in 1607 at Jamestown and with the Pilgrims' Plymouth Colony in 1620. Many settlers were dissenting Christian groups who came seeking religious freedom. The continent's first elected legislative assembly, Virginia's House of Burgesses, was created in 1619. The Mayflower Compact, signed by the Pilgrims before disembarking, and the Fundamental Orders of Connecticut, established precedents for the pattern of representative self-government and constitutionalism that would develop throughout the American colonies.[74][75]
+
+ Most settlers in every colony were small farmers, though other industries were formed. Cash crops included tobacco, rice, and wheat. Extraction industries grew up in furs, fishing and lumber. Manufacturers produced rum and ships, and by the late colonial period, Americans were producing one-seventh of the world's iron supply.[76] Cities eventually dotted the coast to support local economies and serve as trade hubs. English colonists were supplemented by waves of Scotch-Irish immigrants and other groups. As coastal land grew more expensive, freed indentured servants claimed lands further west.[77]
+
+ A large-scale slave trade with English privateers began.[78] Because of less disease and better food and treatment, the life expectancy of slaves was much higher in North America than further south, leading to a rapid increase in the numbers of slaves.[79][80] Colonial society was largely divided over the religious and moral implications of slavery, and colonies passed acts for and against the practice.[81][82] But by the turn of the 18th century, African slaves were replacing indentured servants for cash crop labor, especially in the South.[83]
+
+ With the establishment of the Province of Georgia in 1732, the 13 colonies that would become the United States of America were administered by the British as overseas dependencies.[84] All nonetheless had local governments with elections open to most free men.[85] With extremely high birth rates, low death rates, and steady settlement, the colonial population grew rapidly. Relatively small Native American populations were eclipsed.[86] The Christian revivalist movement of the 1730s and 1740s known as the Great Awakening fueled interest both in religion and in religious liberty.[87]
+
+ During the Seven Years' War (known in the United States as the French and Indian War), British forces seized Canada from the French, but the francophone population remained politically isolated from the southern colonies. Excluding the Native Americans, who were being conquered and displaced, the 13 British colonies had a population of over 2.1 million in 1770, about a third that of Britain. Despite continuing new arrivals, the rate of natural increase was such that by the 1770s only a small minority of Americans had been born overseas.[88] The colonies' distance from Britain had allowed the development of self-government, but their unprecedented success motivated monarchs to periodically seek to reassert royal authority.[89]
+
+ In 1774, the Spanish Navy ship Santiago, under Juan Pérez, entered and anchored in an inlet of Nootka Sound, Vancouver Island, in present-day British Columbia. Although the Spanish did not land, natives paddled to the ship to trade furs for abalone shells from California.[90] At the time, the Spanish were able to monopolize the trade between Asia and North America, granting limited licenses to the Portuguese. When the Russians began establishing a growing fur trading system in Alaska, the Spanish began to challenge the Russians, with Pérez's voyage being the first of many to the Pacific Northwest.[91][j]
+
+ During his third and final voyage, Captain James Cook became the first European to begin formal contact with Hawaii.[93] Captain Cook's last voyage included sailing along the coast of North America and Alaska searching for a Northwest Passage for approximately nine months.[94]
+
+ The American Revolutionary War was the first successful colonial war of independence against a European power. Americans had developed an ideology of "republicanism" asserting that government rested on the will of the people as expressed in their local legislatures. They demanded their rights as Englishmen and "no taxation without representation". The British insisted on administering the empire through Parliament, and the conflict escalated into war.[95]
+
+ On July 4, 1776, the Second Continental Congress unanimously adopted the Declaration of Independence, which asserted that Great Britain was not protecting Americans' unalienable rights. July 4 is celebrated annually as Independence Day.[96] In 1777, the Articles of Confederation established a decentralized government that operated until 1789.[96]
+
+ Following the decisive Franco-American victory at Yorktown in 1781,[97] Britain signed the peace treaty of 1783, and American sovereignty was internationally recognized and the country was granted all lands east of the Mississippi River. Nationalists led the Philadelphia Convention of 1787 in writing the United States Constitution, ratified in state conventions in 1788. The federal government was reorganized into three branches, on the principle of creating salutary checks and balances, in 1789. George Washington, who had led the Continental Army to victory, was the first president elected under the new constitution. The Bill of Rights, forbidding federal restriction of personal freedoms and guaranteeing a range of legal protections, was adopted in 1791.[98]
+
+ Although the federal government criminalized the international slave trade in 1808, after 1820, cultivation of the highly profitable cotton crop exploded in the Deep South, and along with it, the slave population.[99][100][101] The Second Great Awakening, especially 1800–1840, converted millions to evangelical Protestantism. In the North, it energized multiple social reform movements, including abolitionism;[102] in the South, Methodists and Baptists proselytized among slave populations.[103]
+
+ Americans' eagerness to expand westward prompted a long series of American Indian Wars.[104] The Louisiana Purchase of French-claimed territory in 1803 almost doubled the nation's area.[105] The War of 1812, declared against Britain over various grievances and fought to a draw, strengthened U.S. nationalism.[106] A series of military incursions into Florida led Spain to cede it and other Gulf Coast territory in 1819.[107] The expansion was aided by steam power, when steamboats began traveling along America's large water systems, many of which were connected by new canals, such as the Erie and the I&M; then, even faster railroads began their stretch across the nation's land.[108]
+
+ From 1820 to 1850, Jacksonian democracy began a set of reforms which included wider white male suffrage; it led to the rise of the Second Party System of Democrats and Whigs as the dominant parties from 1828 to 1854. The Trail of Tears in the 1830s exemplified the Indian removal policy that forcibly resettled Indians into the west on Indian reservations. The U.S. annexed the Republic of Texas in 1845 during a period of expansionist Manifest Destiny.[109] The 1846 Oregon Treaty with Britain led to U.S. control of the present-day American Northwest.[110] Victory in the Mexican–American War resulted in the 1848 Mexican Cession of California and much of the present-day American Southwest.[111]
+ The California Gold Rush of 1848–49 spurred migration to the Pacific coast, which led to the California Genocide[112][113][114][115] and the creation of additional western states.[116] After the Civil War, new transcontinental railways made relocation easier for settlers, expanded internal trade and increased conflicts with Native Americans.[117] In 1869, a new Peace Policy nominally promised to protect Native Americans from abuses, avoid further war, and secure their eventual U.S. citizenship. Nonetheless, large-scale conflicts continued throughout the West into the 1900s.
+
+ Irreconcilable sectional conflict regarding the slavery of Africans and African Americans ultimately led to the American Civil War.[118] Initially, states entering the Union had alternated between slave and free states, keeping a sectional balance in the Senate, while free states outstripped slave states in population and in the House of Representatives. But with additional western territory and more free-soil states, tensions between slave and free states mounted with arguments over federalism and disposition of the territories, as well as whether to expand or restrict slavery.[119]
+
+ With the 1860 election of Republican Abraham Lincoln, conventions in thirteen slave states ultimately declared secession and formed the Confederate States of America (the "South" or the "Confederacy"), while the federal government (the "Union") maintained that secession was illegal.[119] The secessionists initiated military action to bring this secession about, and the Union responded in kind. The ensuing war would become the deadliest military conflict in American history, resulting in the deaths of approximately 618,000 soldiers as well as many civilians.[120] The Union initially simply fought to keep the country united. Nevertheless, as casualties mounted after 1863 and Lincoln delivered his Emancipation Proclamation, the main purpose of the war from the Union's viewpoint became the abolition of slavery. Indeed, when the Union ultimately won the war in April 1865, each of the states in the defeated South was required to ratify the Thirteenth Amendment, which prohibited slavery.
+
+ The government enacted three constitutional amendments in the years after the war: the aforementioned Thirteenth as well as the Fourteenth Amendment providing citizenship to the nearly four million African Americans who had been slaves,[121] and the Fifteenth Amendment ensuring in theory that African Americans had the right to vote. The war and its resolution led to a substantial increase in federal power[122] aimed at reintegrating and rebuilding the South while guaranteeing the rights of the newly freed slaves.
+
+ Reconstruction began in earnest following the war. While President Lincoln attempted to foster friendship and forgiveness between the Union and the former Confederacy, his assassination on April 14, 1865, drove a wedge between North and South again. Republicans in the federal government made it their goal to oversee the rebuilding of the South and to ensure the rights of African Americans. They persisted until the Compromise of 1877, when the Republicans agreed to cease protecting the rights of African Americans in the South in exchange for Democrats conceding the presidential election of 1876.
+
+ Southern white Democrats, calling themselves "Redeemers," took control of the South after the end of Reconstruction. From 1890 to 1910 the Redeemers established so-called Jim Crow laws, disenfranchising most blacks and some poor whites throughout the region. Blacks faced racial segregation, especially in the South.[123] They also occasionally experienced vigilante violence, including lynching.[124]
+
+ In the North, urbanization and an unprecedented influx of immigrants from Southern and Eastern Europe supplied a surplus of labor for the country's industrialization and transformed its culture.[126] National infrastructure including telegraph and transcontinental railroads spurred economic growth and greater settlement and development of the American Old West. The later invention of electric light and the telephone would also affect communication and urban life.[127]
+
+ The United States fought Indian Wars west of the Mississippi River from 1810 to at least 1890.[128] Most of these conflicts ended with the cession of Native American territory and their confinement to Indian reservations. This further expanded acreage under mechanical cultivation, increasing surpluses for international markets.[129] Mainland expansion also included the purchase of Alaska from Russia in 1867.[130] In 1893, pro-American elements in Hawaii overthrew the monarchy and formed the Republic of Hawaii, which the U.S. annexed in 1898. Puerto Rico, Guam, and the Philippines were ceded by Spain in the same year, following the Spanish–American War.[131] American Samoa was acquired by the United States in 1900 after the end of the Second Samoan Civil War.[132] The U.S. Virgin Islands were purchased from Denmark in 1917.[133]
+
+ Rapid economic development during the late 19th and early 20th centuries fostered the rise of many prominent industrialists. Tycoons like Cornelius Vanderbilt, John D. Rockefeller, and Andrew Carnegie led the nation's progress in railroad, petroleum, and steel industries. Banking became a major part of the economy, with J. P. Morgan playing a notable role. The American economy boomed, becoming the world's largest, and the United States achieved great power status.[134] These dramatic changes were accompanied by social unrest and the rise of populist, socialist, and anarchist movements.[135] This period eventually ended with the advent of the Progressive Era, which saw significant reforms including women's suffrage, alcohol prohibition, regulation of consumer goods, greater antitrust measures to ensure competition and attention to worker conditions.[136][137][138]
+
+ The United States remained neutral from the outbreak of World War I in 1914 until 1917, when it joined the war as an "associated power," alongside the formal Allies of World War I, helping to turn the tide against the Central Powers. In 1919, President Woodrow Wilson took a leading diplomatic role at the Paris Peace Conference and advocated strongly for the U.S. to join the League of Nations. However, the Senate refused to approve this and did not ratify the Treaty of Versailles that established the League of Nations.[139]
+
+ In 1920, the women's rights movement won passage of a constitutional amendment granting women's suffrage.[140] The 1920s and 1930s saw the rise of radio for mass communication and the invention of early television.[141] The prosperity of the Roaring Twenties ended with the Wall Street Crash of 1929 and the onset of the Great Depression. After his election as president in 1932, Franklin D. Roosevelt responded with the New Deal.[142] The Great Migration of millions of African Americans out of the American South began before World War I and extended through the 1960s,[143] while the Dust Bowl of the mid-1930s impoverished many farming communities and spurred a new wave of western migration.[144]
+
+ At first effectively neutral during World War II, the United States began supplying materiel to the Allies in March 1941 through the Lend-Lease program. On December 7, 1941, the Empire of Japan launched a surprise attack on Pearl Harbor, prompting the United States to join the Allies against the Axis powers.[145] Although Japan attacked the United States first, the U.S. nonetheless pursued a "Europe first" defense policy.[146] The United States thus left its vast Asian colony, the Philippines, isolated and fighting a losing struggle against Japanese invasion and occupation, as military resources were devoted to the European theater. During the war, the United States was referred to as one of the Allies' "Four Policemen,"[147] along with Britain, the Soviet Union and China, which met to plan the postwar world.[148][149] Although the nation lost around 400,000 military personnel,[150] it emerged relatively undamaged from the war with even greater economic and military influence.[151]
+
+ The United States played a leading role in the Bretton Woods and Yalta conferences with the United Kingdom, the Soviet Union, and other Allies, which signed agreements on new international financial institutions and Europe's postwar reorganization. As an Allied victory was won in Europe, a 1945 international conference held in San Francisco produced the United Nations Charter, which became active after the war.[152] In the Pacific, the United States and Japan had fought the largest naval battle in history, the Battle of Leyte Gulf, in October 1944.[153][154] The United States eventually developed the first nuclear weapons and used them on Japan in the cities of Hiroshima and Nagasaki; Japan surrendered on September 2, 1945, ending World War II.[155][156]
+
+ After World War II, the United States and the Soviet Union competed for power, influence, and prestige during what became known as the Cold War, driven by an ideological divide between capitalism and communism.[157] They dominated the military affairs of Europe, with the U.S. and its NATO allies on one side and the USSR and its Warsaw Pact allies on the other. The U.S. developed a policy of containment towards the expansion of communist influence. While the U.S. and Soviet Union engaged in proxy wars and developed powerful nuclear arsenals, the two countries avoided direct military conflict.
+
+ The United States often opposed Third World movements that it viewed as Soviet-sponsored, and occasionally pursued direct action for regime change against left-wing governments, even supporting right-wing authoritarian governments at times.[158] American troops fought communist Chinese and North Korean forces in the Korean War of 1950–53.[159] The Soviet Union's 1957 launch of the first artificial satellite and its 1961 launch of the first manned spaceflight initiated a "Space Race" in which the United States became the first nation to land a man on the moon in 1969.[159] A proxy war in Southeast Asia eventually evolved into full American participation in the Vietnam War.
+
+ At home, the U.S. experienced sustained economic expansion and a rapid growth of its population and middle class. Construction of an Interstate Highway System transformed the nation's infrastructure over the following decades. Millions moved from farms and inner cities to large suburban housing developments.[160][161] In 1959 Hawaii became the 50th and last U.S. state added to the country.[162] The growing Civil Rights Movement used nonviolence to confront segregation and discrimination, with Martin Luther King Jr. becoming a prominent leader and figurehead. A combination of court decisions and legislation, culminating in the Civil Rights Act of 1968, sought to end racial discrimination.[163][164][165] Meanwhile, a counterculture movement grew, fueled by opposition to the Vietnam War, black nationalism, and the sexual revolution.
+
+ The launch of a "War on Poverty" expanded entitlements and welfare spending, including the creation of Medicare and Medicaid, two programs that provide health coverage to the elderly and poor, respectively, and the means-tested Food Stamp Program and Aid to Families with Dependent Children.[166]
+
+ The 1970s and early 1980s saw the onset of stagflation. After his election in 1980, President Ronald Reagan responded to economic stagnation with free-market oriented reforms. Following the collapse of détente, he abandoned "containment" and initiated the more aggressive "rollback" strategy towards the USSR.[167][168][169][170][171] After a surge in female labor participation over the previous decade, by 1985 the majority of women aged 16 and over were employed.[172]
+
+ The late 1980s brought a "thaw" in relations with the USSR, and its collapse in 1991 finally ended the Cold War.[173][174][175][176] This brought about unipolarity[177] with the U.S. unchallenged as the world's dominant superpower. The concept of Pax Americana, which had appeared in the post-World War II period, gained wide popularity as a term for the post-Cold War new world order.
+
+ After the Cold War, conflict in the Middle East triggered a crisis in 1990, when Iraq under Saddam Hussein invaded and attempted to annex Kuwait, an ally of the United States. Fearing that the instability would spread to other regions, President George H. W. Bush launched Operation Desert Shield, a defensive force buildup in Saudi Arabia, and Operation Desert Storm. The ensuing Gulf War, waged by coalition forces from 34 nations led by the United States, ended in the expulsion of Iraqi forces from Kuwait and the restoration of the monarchy.[178]
+
+ Originating within U.S. military defense networks, the Internet spread to international academic platforms and then to the public in the 1990s, greatly affecting the global economy, society, and culture.[179] Due to the dot-com boom, stable monetary policy, and reduced social welfare spending, the 1990s saw the longest economic expansion in modern U.S. history.[180] Beginning in 1994, the U.S. entered into the North American Free Trade Agreement (NAFTA), prompting trade among the U.S., Canada, and Mexico to soar.[181]
+
+ On September 11, 2001, Al-Qaeda terrorists struck the World Trade Center in New York City and the Pentagon near Washington, D.C., killing nearly 3,000 people.[182] In response, the United States launched the War on Terror, which included a war in Afghanistan and the 2003–11 Iraq War.[183][184]
+
+ Government policy designed to promote affordable housing,[185] widespread failures in corporate and regulatory governance,[186] and historically low interest rates set by the Federal Reserve[187] led to the mid-2000s housing bubble, which culminated with the 2008 financial crisis, the nation's largest economic contraction since the Great Depression.[188] Barack Obama, the first African-American[189] and multiracial[190] president, was elected in 2008 amid the crisis,[191] and subsequently signed stimulus measures and the Dodd–Frank Act in an attempt to mitigate its negative effects and ensure there would not be a repeat of the crisis. In 2010, the Affordable Care Act was enacted, making the most sweeping reforms to the nation's healthcare system in nearly five decades, including mandates, subsidies and insurance exchanges.
+
+ American forces in Iraq were withdrawn in large numbers in 2009 and 2010, and the war in the region was declared formally over in December 2011.[192] Months earlier, Operation Neptune Spear had led to the death of Osama bin Laden, the leader of Al-Qaeda, in Pakistan.[193] In the presidential election of 2016, Republican Donald Trump was elected as the 45th president of the United States. On January 20, 2020, the first case of COVID-19 in the United States was confirmed.[194] As of July 2020, the United States had over 4 million COVID-19 cases and over 145,000 deaths.[195] Since April 11, 2020, the United States has had by far the most confirmed cases of COVID-19 of any country.[196]
+
+ The 48 contiguous states and the District of Columbia occupy a combined area of 3,119,884.69 square miles (8,080,464.3 km2). Of this area, 2,959,064.44 square miles (7,663,941.7 km2) is contiguous land, composing 83.65% of total U.S. land area.[197][198] Hawaii, occupying an archipelago in the central Pacific, southwest of North America, is 10,931 square miles (28,311 km2) in area. The populated territories of Puerto Rico, American Samoa, Guam, Northern Mariana Islands, and U.S. Virgin Islands together cover 9,185 square miles (23,789 km2).[199] Measured by only land area, the United States is third in size behind Russia and China, just ahead of Canada.[200]
+
+ The United States is the world's third- or fourth-largest nation by total area (land and water), ranking behind Russia and Canada and nearly equal to China. The ranking varies depending on how two territories disputed by China and India are counted, and how the total size of the United States is measured.[e][201][202]
+
+ The coastal plain of the Atlantic seaboard gives way further inland to deciduous forests and the rolling hills of the Piedmont.[203] The Appalachian Mountains divide the eastern seaboard from the Great Lakes and the grasslands of the Midwest.[204] The Mississippi–Missouri River, the world's fourth longest river system, runs mainly north–south through the heart of the country. The flat, fertile prairie of the Great Plains stretches to the west, interrupted by a highland region in the southeast.[204]
+
+ The Rocky Mountains, west of the Great Plains, extend north to south across the country, peaking around 14,000 feet (4,300 m) in Colorado.[205] Farther west are the rocky Great Basin and deserts such as the Chihuahuan and Mojave.[206] The Sierra Nevada and Cascade mountain ranges run close to the Pacific coast, both ranges reaching altitudes higher than 14,000 feet (4,300 m). The lowest and highest points in the contiguous United States are in the state of California,[207] and only about 84 miles (135 km) apart.[208] At an elevation of 20,310 feet (6,190.5 m), Alaska's Denali is the highest peak in the country and in North America.[209] Active volcanoes are common throughout Alaska's Alexander and Aleutian Islands, and Hawaii consists of volcanic islands. The supervolcano underlying Yellowstone National Park in the Rockies is the continent's largest volcanic feature.[210]
+
+ The United States, with its large size and geographic variety, includes most climate types. To the east of the 100th meridian, the climate ranges from humid continental in the north to humid subtropical in the south.[211] The Great Plains west of the 100th meridian are semi-arid. Much of the Western mountains have an alpine climate. The climate is arid in the Great Basin, desert in the Southwest, Mediterranean in coastal California, and oceanic in coastal Oregon and Washington and southern Alaska. Most of Alaska is subarctic or polar. Hawaii and the southern tip of Florida are tropical, as are the nation's territories in the Caribbean and the Pacific.[212] States bordering the Gulf of Mexico are prone to hurricanes, and most of the world's tornadoes occur in the country, mainly in Tornado Alley areas in the Midwest and South.[213] Overall, the United States has the world's most violent weather, receiving more high-impact extreme weather incidents than any other country in the world.[214]
+
+ The U.S. ecology is megadiverse: about 17,000 species of vascular plants occur in the contiguous United States and Alaska, and more than 1,800 species of flowering plants are found in Hawaii, few of which occur on the mainland.[216] The United States is home to 428 mammal species, 784 bird species, 311 reptile species, and 295 amphibian species,[217] as well as about 91,000 insect species.[218]
+
+ There are 62 national parks and hundreds of other federally managed parks, forests, and wilderness areas.[219] Altogether, the government owns about 28% of the country's land area,[220] mostly in the western states.[221] Most of this land is protected, though some is leased for oil and gas drilling, mining, logging, or cattle ranching, and about 0.86% is used for military purposes.[222][223]
+
+ Environmental issues include debates on oil and nuclear energy, dealing with air and water pollution, the economic costs of protecting wildlife, logging and deforestation,[224][225] and international responses to global warming.[226][227] The most prominent environmental agency is the Environmental Protection Agency (EPA), created by presidential order in 1970.[228] The idea of wilderness has shaped the management of public lands since 1964, with the Wilderness Act.[229] The Endangered Species Act of 1973 is intended to protect threatened and endangered species and their habitats, which are monitored by the United States Fish and Wildlife Service.[230]
+
+ The U.S. Census Bureau officially estimated the country's population to be 328,239,523 as of July 1, 2019.[231] In addition, the Census Bureau provides a continuously updated U.S. Population Clock that approximates the latest population of the 50 states and District of Columbia based on the Bureau's most recent demographic trends.[234] According to the clock, on May 23, 2020, the U.S. population exceeded 329 million residents, with a net gain of one person every 19 seconds, or about 4,547 people per day. The United States is the third most populous nation in the world, after China and India. In 2018 the median age of the United States population was 38.1 years.[235]
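+
+ As a quick check of the clock's arithmetic (an illustrative calculation, not a figure quoted from the source), a net gain of one person every 19 seconds implies roughly the stated daily increase:
+
+ \[ \frac{86{,}400\ \text{seconds/day}}{19\ \text{seconds/person}} \approx 4{,}547\ \text{people/day} \]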
+
+ In 2018, there were almost 90 million immigrants and U.S.-born children of immigrants (second-generation Americans) in the United States, accounting for 28% of the overall U.S. population.[236] The United States has a very diverse population; 37 ancestry groups have more than one million members.[237] German Americans are the largest ethnic group (more than 50 million)—followed by Irish Americans (circa 37 million), Mexican Americans (circa 31 million) and English Americans (circa 28 million).[238][239]
+
+ White Americans (mostly European ancestry) are the largest racial group at 73.1% of the population; African Americans are the nation's largest racial minority and third-largest ancestry group.[237] Asian Americans are the country's second-largest racial minority; the three largest Asian American ethnic groups are Chinese Americans, Filipino Americans, and Indian Americans.[237] The largest American community with European ancestry is German Americans, who constitute more than 14% of the total population.[240] In 2010, the U.S. population included an estimated 5.2 million people with some American Indian or Alaska Native ancestry (2.9 million exclusively of such ancestry) and 1.2 million with some native Hawaiian or Pacific island ancestry (0.5 million exclusively).[241] The census counted more than 19 million people of "Some Other Race" who were "unable to identify with any" of its five official race categories in 2010, more than 18.5 million (97%) of whom are of Hispanic ethnicity.[241]
+
+ In 2017, out of the U.S. foreign-born population, some 45% (20.7 million) were naturalized citizens, 27% (12.3 million) were lawful permanent residents (including many eligible to become citizens), 6% (2.2 million) were temporary lawful residents, and 23% (10.5 million) were unauthorized immigrants.[242] Among current living immigrants to the U.S., the top five countries of birth are Mexico, China, India, the Philippines and El Salvador. For decades until 2017 and 2018, the United States led the world in refugee resettlement, admitting more refugees than the rest of the world combined.[243] From fiscal year 1980 until 2017, 55% of refugees came from Asia, 27% from Europe, 13% from Africa, and 4% from Latin America.[243]
+
+ A 2017 United Nations report projected that the U.S. would be one of nine countries in which world population growth through 2050 would be concentrated.[244] A 2020 U.S. Census Bureau report projected the population of the country could be anywhere between 320 million and 447 million by 2060, depending on the rate of in-migration; in all projected scenarios, a lower fertility rate and increases in life expectancy would result in an aging population.[245] The United States has an annual birth rate of 13 per 1,000, which is five births per 1,000 below the world average.[246] Its population growth rate is positive at 0.7%, higher than that of many developed nations.[247]
+
+ About 82% of Americans live in urban areas (including suburbs);[202] about half of those reside in cities with populations over 50,000.[248] In 2008, 273 incorporated municipalities had populations over 100,000, nine cities had more than one million residents, and four cities had over two million (namely New York, Los Angeles, Chicago, and Houston).[249] Estimates for the year 2018 show that 53 metropolitan areas have populations greater than one million. Many metros in the South, Southwest and West grew significantly between 2010 and 2018. The Dallas and Houston metros increased by more than a million people, while the Washington, D.C., Miami, Atlanta, and Phoenix metros all grew by more than 500,000 people.
+
+ English (specifically, American English) is the de facto national language of the United States. Although there is no official language at the federal level, some laws—such as U.S. naturalization requirements—standardize English. In 2010, about 230 million people, or 80% of the population aged five years and older, spoke only English at home, while about 12% spoke Spanish at home, making it the second most common language. Spanish is also the most widely taught second language.[250][251]
+
+ Both Hawaiian and English are official languages in Hawaii.[252] In addition to English, Alaska recognizes twenty official Native languages,[253][k] and South Dakota recognizes Sioux.[254] While neither has an official language, New Mexico has laws providing for the use of both English and Spanish, as Louisiana does for English and French.[255] Other states, such as California, mandate the publication of Spanish versions of certain government documents including court forms.[256]
+
+ Several insular territories grant official recognition to their native languages, along with English: Samoan[257] is officially recognized by American Samoa and Chamorro[258] is an official language of Guam. Both Carolinian and Chamorro have official recognition in the Northern Mariana Islands.[259]
+ Spanish is an official language of Puerto Rico and is more widely spoken than English there.[260]
+
+ The most widely taught foreign languages in the United States, in terms of enrollment numbers from kindergarten through university undergraduate education, are Spanish (around 7.2 million students), French (1.5 million), and German (500,000). Other commonly taught languages include Latin, Japanese, ASL, Italian, and Chinese.[261][262] 18% of all Americans claim to speak both English and another language.[263]
+
+ Religion in the United States (2017)[266]
+
+ The First Amendment of the U.S. Constitution guarantees the free exercise of religion and forbids Congress from passing laws respecting its establishment.
+
+ In a 2013 survey, 56% of Americans said religion played a "very important role in their lives," a far higher figure than that of any other Western nation.[267] In a 2009 Gallup poll, 42% of Americans said they attended church weekly or almost weekly; the figures ranged from a low of 23% in Vermont to a high of 63% in Mississippi.[268]
+
+ In a 2014 survey, 70.6% of adults in the United States identified themselves as Christians;[269] Protestants accounted for 46.5%, while Roman Catholics, at 20.8%, formed the largest single Christian group.[270] In 2014, 5.9% of the U.S. adult population claimed a non-Christian religion.[271] These include Judaism (1.9%), Islam (0.9%), Hinduism (0.7%), and Buddhism (0.7%).[271] The survey also reported that 22.8% of Americans described themselves as agnostic, atheist or simply having no religion—up from 8.2% in 1990.[270][272][273] There are also Unitarian Universalist, Scientologist, Baha'i, Sikh, Jain, Shinto, Zoroastrian, Confucian, Satanist, Taoist, Druid, Native American, Afro-American, traditional African, Wiccan, Gnostic, humanist and deist communities.[274][275]
+
+ Protestantism is the largest Christian religious grouping in the United States, accounting for almost half of all Americans. Baptists collectively form the largest branch of Protestantism at 15.4%,[276] and the Southern Baptist Convention is the largest individual Protestant denomination at 5.3% of the U.S. population.[276] Apart from Baptists, other Protestant categories include nondenominational Protestants, Methodists, Pentecostals, unspecified Protestants, Lutherans, Presbyterians, Congregationalists, other Reformed, Episcopalians/Anglicans, Quakers, Adventists, Holiness, Christian fundamentalists, Anabaptists, Pietists, and multiple others.[276]
+
+ As with other Western countries, the U.S. is becoming less religious. Irreligion is growing rapidly among Americans under 30.[277] Polls show that overall American confidence in organized religion has been declining since the mid to late 1980s,[278] and that younger Americans, in particular, are becoming increasingly irreligious.[271][279] In a 2012 study, the Protestant share of the U.S. population had dropped to 48%, ending its majority status for the first time.[280][281] Americans with no religion have 1.7 children compared to 2.2 among Christians. The unaffiliated are also less likely to marry: 37% marry, compared to 52% of Christians.[282]
+
+ The Bible Belt is an informal term for a region in the Southern United States in which socially conservative evangelical Protestantism is a significant part of the culture and Christian church attendance across the denominations is generally higher than the nation's average. By contrast, religion plays the least important role in New England and in the Western United States.[268]
+
+ As of 2018, 52% of Americans age 15 and over were married, 6% were widowed, 10% were divorced, and 32% had never been married.[283] Women now work mostly outside the home and receive the majority of bachelor's degrees.[284]
+
+ The U.S. teenage pregnancy rate is 26.5 per 1,000 women. The rate has declined by 57% since 1991.[285] Abortion is legal throughout the country. Abortion rates, currently 241 per 1,000 live births and 15 per 1,000 women aged 15–44, are falling but remain higher than most Western nations.[286] In 2013, the average age at first birth was 26 and 41% of births were to unmarried women.[287]
+
+ The total fertility rate in 2016 was 1.82 births per woman (1,820.5 births per 1,000 women).[288] Adoption in the United States is common and relatively easy from a legal point of view (compared to other Western countries).[289] As of 2001, with more than 127,000 adoptions, the U.S. accounted for nearly half of the total number of adoptions worldwide.[290] Same-sex marriage is legal nationwide, and it is legal for same-sex couples to adopt. Polygamy is illegal throughout the U.S.[291]
+
+ In 2019, the U.S. had the world's highest rate of children living in single-parent households.[292]
+
+ The United States had a life expectancy of 78.6 years at birth in 2017, which was the third year of declines in life expectancy following decades of continuous increase. The recent decline, primarily among the age group 25 to 64, is largely due to sharp increases in the drug overdose and suicide rates; the country has one of the highest suicide rates among wealthy countries.[293][294] Life expectancy was highest among Asians and Hispanics and lowest among blacks.[295][296] According to CDC and Census Bureau data, deaths from suicide, alcohol and drug overdoses hit record highs in 2017.[297]
+
+ Increasing obesity in the United States and health improvements elsewhere contributed to lowering the country's rank in life expectancy from 11th in the world in 1987 to 42nd in 2007, and as of 2017 the country had a lower life expectancy than Japan, Canada, Australia, the UK, and seven countries of western Europe.[298][299] Obesity rates have more than doubled in the last 30 years and are the highest in the industrialized world.[300][301] Approximately one-third of the adult population is obese and an additional third is overweight.[302] Obesity-related type 2 diabetes is considered epidemic by health care professionals.[303]
+
+ In 2010, coronary artery disease, lung cancer, stroke, chronic obstructive pulmonary diseases, and traffic accidents caused the most years of life lost in the U.S. Low back pain, depression, musculoskeletal disorders, neck pain, and anxiety caused the most years lost to disability. The most harmful risk factors were poor diet, tobacco smoking, obesity, high blood pressure, high blood sugar, physical inactivity, and alcohol use. Alzheimer's disease, drug abuse, kidney disease, cancer, and falls caused the most additional years of life lost over their age-adjusted 1990 per-capita rates.[304] U.S. teenage pregnancy and abortion rates are substantially higher than in other Western nations, especially among blacks and Hispanics.[305]
+
+ Health-care coverage in the United States is a combination of public and private efforts and is not universal. In 2017, 12.2% of the population did not carry health insurance.[306] The subject of uninsured and underinsured Americans is a major political issue.[307][308] Federal legislation, passed in early 2010, roughly halved the uninsured share of the population, though the bill and its ultimate effect are issues of controversy.[309][310] The U.S. health-care system far outspends any other nation, measured both in per capita spending and as percentage of GDP.[311] At the same time, the U.S. is a global leader in medical innovation.[312]
+
+ American public education is operated by state and local governments, regulated by the United States Department of Education through restrictions on federal grants. In most states, children are required to attend school from the age of six or seven (generally, kindergarten or first grade) until they turn 18 (generally bringing them through twelfth grade, the end of high school); some states allow students to leave school at 16 or 17.[313]
+
+ About 12% of children are enrolled in parochial or nonsectarian private schools. Just over 2% of children are homeschooled.[314] The U.S. spends more on education per student than any nation in the world, spending more than $11,000 per elementary student in 2010 and more than $12,000 per high school student.[315][needs update] Some 80% of U.S. college students attend public universities.[316]
+
+ Of Americans 25 and older, 84.6% graduated from high school, 52.6% attended some college, 27.2% earned a bachelor's degree, and 9.6% earned graduate degrees.[317] The basic literacy rate is approximately 99%.[202][318] The United Nations assigns the United States an Education Index of 0.97, tying it for 12th in the world.[319]
+
+ The United States has many private and public institutions of higher education. The majority of the world's top universities, as listed by various ranking organizations, are in the U.S.[320][321][322] There are also local community colleges with generally more open admission policies, shorter academic programs, and lower tuition.
+
+ In 2018, U21, a network of research-intensive universities, ranked the United States first in the world for breadth and quality of higher education, and 15th when GDP was a factor.[323] As for public expenditures on higher education, the U.S. trails some other OECD nations but spends more per student than the OECD average, and more than all nations in combined public and private spending.[315][324] As of 2018, student loan debt exceeded $1.5 trillion.[325][326]
+
+ The United States is a federal republic of 50 states, a federal district, five territories and several uninhabited island possessions.[327][328][329] It is the world's oldest surviving federation and a representative democracy "in which majority rule is tempered by minority rights protected by law."[330] For 2018, the U.S. ranked 25th on the Democracy Index.[331] On Transparency International's 2019 Corruption Perceptions Index, its public sector position deteriorated from a score of 76 in 2015 to 69 in 2019.[332]
+
+ In the American federalist system, citizens are usually subject to three levels of government: federal, state, and local. The local government's duties are commonly split between county and municipal governments. In almost all cases, executive and legislative officials are elected by a plurality vote of citizens by district.
+
+ The government is regulated by a system of checks and balances defined by the U.S. Constitution, which serves as the country's supreme legal document.[333] The original text of the Constitution establishes the structure and responsibilities of the federal government and its relationship with the individual states. Article One protects the right to the "great writ" of habeas corpus. The Constitution has been amended 27 times;[334] the first ten amendments, which make up the Bill of Rights, and the Fourteenth Amendment form the central basis of Americans' individual rights. All laws and governmental procedures are subject to judicial review and any law ruled by the courts to be in violation of the Constitution is voided. The principle of judicial review, not explicitly mentioned in the Constitution, was established by the Supreme Court in Marbury v. Madison (1803)[335] in a decision handed down by Chief Justice John Marshall.[336]
+
+ The federal government comprises three branches: the legislative (the bicameral Congress), the executive (the president and the federal agencies), and the judicial (the Supreme Court and the lower federal courts).
+
+ The House of Representatives has 435 voting members, each representing a congressional district for a two-year term. House seats are apportioned among the states by population. Each state then draws single-member districts to conform with the census apportionment. The District of Columbia and the five major U.S. territories each have one member of Congress—these members are not allowed to vote.[341]
+
+ The Senate has 100 members with each state having two senators, elected at-large to six-year terms; one-third of Senate seats are up for election every two years. The District of Columbia and the five major U.S. territories do not have senators.[341] The president serves a four-year term and may be elected to the office no more than twice. The president is not elected by direct vote, but by an indirect electoral college system in which the determining votes are apportioned to the states and the District of Columbia.[342] The Supreme Court, led by the chief justice of the United States, has nine members, who serve for life.[343]
+
+ The state governments are structured in a roughly similar fashion, though Nebraska has a unicameral legislature.[344] The governor (chief executive) of each state is directly elected. Some state judges and cabinet officers are appointed by the governors of the respective states, while others are elected by popular vote.
+
+ The 50 states are the principal administrative divisions in the country. These are subdivided into counties or county equivalents and further divided into municipalities. The District of Columbia is a federal district that contains the capital of the United States, Washington, D.C.[345] The states and the District of Columbia choose the president of the United States. Each state has presidential electors equal to the number of their representatives and senators in Congress; the District of Columbia has three (because of the 23rd Amendment).[346] Territories of the United States such as Puerto Rico do not have presidential electors, and so people in those territories cannot vote for the president.[341]
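+
+ The elector arithmetic above reduces to a simple rule. The minimal sketch below is an illustration, not part of the source; the helper name and the example state are hypothetical, while the totals of 435 House seats, 100 senators, and three D.C. electors follow from the figures given in this section:
+
+ # A state's electors equal its House representatives plus its two senators.
+ def electors(house_seats: int) -> int:
+     return house_seats + 2
+
+ # The District of Columbia receives three electors under the 23rd Amendment.
+ DC_ELECTORS = 3
+
+ # Across all 50 states there are 435 House seats and 100 Senate seats,
+ # so the full Electoral College has 435 + 100 + 3 = 538 members.
+ total_electors = 435 + 100 + DC_ELECTORS
+ assert total_electors == 538
+
+ # Example: a state with 10 House seats would cast 12 electoral votes.
+ print(electors(10))  # -> 12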
+
+ The United States also observes tribal sovereignty of the American Indian nations to a limited degree, as it does with the states' sovereignty. American Indians are U.S. citizens and tribal lands are subject to the jurisdiction of the U.S. Congress and the federal courts. Like the states, they have a great deal of autonomy, but also like the states, tribes are not allowed to make war, engage in their own foreign relations, or print and issue currency.[347]
+
+ Citizenship is granted at birth in all states, the District of Columbia, and all major U.S. territories except American Samoa.[348][349][m]
+
+ The United States has operated under a two-party system for most of its history.[352] For elective offices at most levels, state-administered primary elections choose the major party nominees for subsequent general elections. Since the general election of 1856, the major parties have been the Democratic Party, founded in 1824, and the Republican Party, founded in 1854. Since the Civil War, only one third-party presidential candidate—former president Theodore Roosevelt, running as a Progressive in 1912—has won as much as 20% of the popular vote. The president and vice president are elected by the Electoral College.[353]
+
+ In American political culture, the center-right Republican Party is considered "conservative" and the center-left Democratic Party is considered "liberal."[354][355] The states of the Northeast and West Coast and some of the Great Lakes states, known as "blue states," are relatively liberal. The "red states" of the South and parts of the Great Plains and Rocky Mountains are relatively conservative.
+
+ Republican Donald Trump, the winner of the 2016 presidential election, is serving as the 45th president of the United States.[356] Leadership in the Senate includes Republican vice president Mike Pence, Republican president pro tempore Chuck Grassley, Majority Leader Mitch McConnell, and Minority Leader Chuck Schumer.[357] Leadership in the House includes Speaker of the House Nancy Pelosi, Majority Leader Steny Hoyer, and Minority Leader Kevin McCarthy.[358]
+
+ In the 116th United States Congress, the House of Representatives is controlled by the Democratic Party and the Senate is controlled by the Republican Party, giving the U.S. a split Congress. The Senate consists of 53 Republicans and 45 Democrats with two Independents who caucus with the Democrats; the House consists of 233 Democrats, 196 Republicans, and 1 Libertarian.[359] Of state governors, there are 26 Republicans and 24 Democrats. Among the D.C. mayor and the five territorial governors, there are two Republicans, one Democrat, one New Progressive, and two Independents.[360]
+
+ The United States has an established structure of foreign relations. It is a permanent member of the United Nations Security Council. New York City is home to the United Nations Headquarters. Almost all countries have embassies in Washington, D.C., and many have consulates around the country. Likewise, nearly all nations host American diplomatic missions. However, Iran, North Korea, Bhutan, and the Republic of China (Taiwan) do not have formal diplomatic relations with the United States (although the U.S. still maintains unofficial relations with Bhutan and Taiwan).[361] It is a member of the G7,[362] G20, and OECD.
+
+ The United States has a "Special Relationship" with the United Kingdom[363] and strong ties with India, Canada,[364] Australia,[365] New Zealand,[366] the Philippines,[367] Japan,[368] South Korea,[369] Israel,[370] and several European Union countries, including France, Italy, Germany, Spain and Poland.[371] It works closely with fellow NATO members on military and security issues and with its neighbors through the Organization of American States and free trade agreements such as the trilateral North American Free Trade Agreement with Canada and Mexico. Colombia is traditionally considered by the United States as its most loyal ally in South America.[372][373]
+
+ The U.S. exercises full international defense authority and responsibility for Micronesia, the Marshall Islands and Palau through the Compact of Free Association.[374]
+
+ Taxation in the United States is levied at the federal, state, and local government levels. This includes taxes on income, payroll, property, sales, imports, estates, and gifts, as well as various fees. Taxation in the United States is based on citizenship, not residency.[375] Both non-resident citizens and Green Card holders living abroad are taxed on their income irrespective of where they live or where their income is earned. The United States is one of the few countries in the world to do so.[376]
+
+ In 2010, taxes collected by federal, state and municipal governments amounted to 24.8% of GDP.[377] Based on CBO estimates,[378] under 2013 tax law the top 1% would pay the highest average tax rates since 1979, while other income groups would remain at historic lows.[379] For 2018, the effective tax rate for the wealthiest 400 households was 23%, compared to 24.2% for the bottom half of U.S. households.[380]
+
+ During fiscal year 2012, the federal government spent $3.54 trillion on a budget or cash basis, down $60 billion or 1.7% vs. fiscal year 2011 spending of $3.60 trillion. Major categories of fiscal year 2012 spending included: Medicare & Medicaid (23%), Social Security (22%), Defense Department (19%), non-defense discretionary (17%), other mandatory (13%) and interest (6%).[382]
+
+ The total national debt of the United States was $18.527 trillion (106% of GDP) in 2014.[383][n] The United States has the largest external debt in the world[387] and the 34th largest government debt as a % of GDP in the world.[388]
+
+ The president is the commander-in-chief of the country's armed forces and appoints its leaders, the Secretary of Defense and the Joint Chiefs of Staff. The United States Department of Defense administers the armed forces, including the Army, Marine Corps, Navy, Air Force, and Space Force. The Coast Guard is run by the Department of Homeland Security in peacetime and by the Department of the Navy during times of war. In 2008, the armed forces had 1.4 million personnel on active duty. The Reserves and National Guard brought the total number of troops to 2.3 million. The Department of Defense also employed about 700,000 civilians, not including contractors.[389]
+
+ Military service is voluntary, though conscription may occur in wartime through the Selective Service System.[390] American forces can be rapidly deployed by the Air Force's large fleet of transport aircraft, the Navy's 11 active aircraft carriers, and Marine expeditionary units at sea with the Navy's Atlantic and Pacific fleets. The military operates 865 bases and facilities abroad,[391] and maintains deployments greater than 100 active duty personnel in 25 foreign countries.[392]
+
+ The military budget of the United States in 2011 was more than $700 billion, 41% of global military spending. At 4.7% of GDP, the rate was the second-highest among the top 15 military spenders, after Saudi Arabia.[393] Defense spending plays a major role in science and technology investment, with roughly half of U.S. federal research and development funded by the Department of Defense.[394] Defense's share of the overall U.S. economy has generally declined in recent decades, from Cold War peaks of 14.2% of GDP in 1953 and 69.5% of federal outlays in 1954 to 4.7% of GDP and 18.8% of federal outlays in 2011.[395]
+
+ The country is one of the five recognized nuclear weapons states and possesses the second largest stockpile of nuclear weapons in the world.[396] More than 90% of the world's 14,000 nuclear weapons are owned by Russia and the United States.[397]
+
+ Law enforcement in the United States is primarily the responsibility of local police departments and sheriff's offices, with state police providing broader services. Federal agencies such as the Federal Bureau of Investigation (FBI) and the U.S. Marshals Service have specialized duties, including protecting civil rights, national security and enforcing U.S. federal courts' rulings and federal laws.[398] State courts conduct most criminal trials while federal courts handle certain designated crimes as well as certain appeals from the state criminal courts.
+
+ A cross-sectional analysis of the World Health Organization Mortality Database from 2010 showed that United States "homicide rates were 7.0 times higher than in other high-income countries, driven by a gun homicide rate that was 25.2 times higher."[399] In 2016, the US murder rate was 5.4 per 100,000.[400] Gun ownership rights, guaranteed by the Second Amendment, continue to be the subject of contention.
+
+ The United States has the highest documented incarceration rate and largest prison population in the world.[401] As of 2020, the Prison Policy Initiative reported that there were some 2.3 million people incarcerated.[402] The imprisonment rate for all prisoners sentenced to more than a year in state or federal facilities was 478 per 100,000 in 2013.[403] According to the Federal Bureau of Prisons, the majority of inmates held in federal prisons are convicted of drug offenses.[404] About 9% of prisoners are held in privatized prisons.[402] The practice of privately operated prisons began in the 1980s and has been a subject of contention.[405]
+
+ Capital punishment is sanctioned in the United States for certain federal and military crimes, and at the state level in 30 states.[406][407] No executions took place from 1967 to 1977, owing in part to a U.S. Supreme Court ruling striking down arbitrary imposition of the death penalty. Since the decision there have been more than 1,300 executions, a majority of these taking place in three states: Texas, Virginia, and Oklahoma.[408] Meanwhile, several states have either abolished or struck down death penalty laws. In 2019, the country had the sixth-highest number of executions in the world, following China, Iran, Saudi Arabia, Iraq, and Egypt.[409]
+
+ According to the International Monetary Fund, the U.S. GDP of $16.8 trillion constitutes 24% of the gross world product at market exchange rates and over 19% of the gross world product at purchasing power parity (PPP).[417] The United States is the largest importer of goods and second-largest exporter, though exports per capita are relatively low. In 2010, the total U.S. trade deficit was $635 billion.[418] Canada, China, Mexico, Japan, and Germany are its top trading partners.[419]
+
+ From 1983 to 2008, U.S. real compounded annual GDP growth was 3.3%, compared to a 2.3% weighted average for the rest of the G7.[420] The country ranks ninth in the world in nominal GDP per capita[421] and sixth in GDP per capita at PPP.[417] The U.S. dollar is the world's primary reserve currency.[422]
+
+ In 2009, the private sector was estimated to constitute 86.4% of the economy.[425] While its economy has reached a postindustrial level of development, the United States remains an industrial power.[426] Consumer spending comprised 68% of the U.S. economy in 2015.[427] In August 2010, the American labor force consisted of 154.1 million people, about 50% of the population. With 21.2 million people, government is the leading field of employment. The largest private employment sector is health care and social assistance, with 16.4 million people. The United States has a smaller welfare state and redistributes less income through government action than most European nations.[428]
255
+
256
+ The United States is the only advanced economy that does not guarantee its workers paid vacation[429] and is one of a few countries in the world without paid family leave as a legal right.[430] While federal law does not require sick leave, it is a common benefit for government workers and full-time employees at corporations.[431] 74% of full-time American workers get paid sick leave, according to the Bureau of Labor Statistics, although only 24% of part-time workers get the same benefits.[431] In 2009, the United States had the third-highest workforce productivity per person in the world, behind Luxembourg and Norway. It was fourth in productivity per hour, behind those two countries and the Netherlands.[432]
257
+
258
+
259
+
260
+ The United States has been a leader in technological innovation since the late 19th century and scientific research since the mid-20th century. Methods for producing interchangeable parts were developed by the U.S. War Department at the federal armories during the first half of the 19th century. This technology, along with the establishment of a machine tool industry, enabled the U.S. to manufacture sewing machines, bicycles, and other items at large scale in the late 19th century, in what became known as the American system of manufacturing. Factory electrification in the early 20th century and the introduction of the assembly line and other labor-saving techniques created the system of mass production.[433] In the 21st century, approximately two-thirds of research and development funding comes from the private sector.[434] The United States leads the world in scientific research papers and impact factor.[435][436]
261
+
262
+ In 1876, Alexander Graham Bell was awarded the first U.S. patent for the telephone. Thomas Edison's research laboratory, one of the first of its kind, developed the phonograph, the first long-lasting light bulb, and the first viable movie camera.[437] The latter led to the emergence of the worldwide entertainment industry. In the early 20th century, the automobile companies of Ransom E. Olds and Henry Ford popularized the assembly line. The Wright brothers, in 1903, made the first sustained and controlled heavier-than-air powered flight.[438]
263
+
264
+ The rise of fascism and Nazism in the 1920s and 1930s led many European scientists, including Albert Einstein, Enrico Fermi, and John von Neumann, to immigrate to the United States.[439] During World War II, the Manhattan Project developed nuclear weapons, ushering in the Atomic Age; the subsequent Space Race produced rapid advances in rocketry, materials science, and aeronautics.[440][441]
265
+
266
+ The invention of the transistor in the 1950s, a key active component in practically all modern electronics, led to many technological developments and a significant expansion of the U.S. technology industry.[442] This, in turn, led to the establishment of many new technology companies and regions around the country, such as Silicon Valley in California. Advances by American microprocessor companies such as Advanced Micro Devices (AMD) and Intel, along with computer software and hardware companies including Adobe Systems, Apple Inc., IBM, Microsoft, and Sun Microsystems, created and popularized the personal computer. The ARPANET was developed in the 1960s to meet Defense Department requirements, and became the first of a series of networks which evolved into the Internet.[443]
267
+
268
+ Accounting for 4.24% of the global population, Americans collectively possess 29.4% of the world's total wealth, and Americans make up roughly half of the world's population of millionaires.[444] The Global Food Security Index ranked the U.S. number one for food affordability and overall food security in March 2013.[445] Americans on average have more than twice as much living space per dwelling and per person as European Union residents, and more than every EU nation.[446] For 2017 the United Nations Development Programme ranked the United States 13th among 189 countries in its Human Development Index and 25th among 151 countries in its inequality-adjusted HDI (IHDI).[447]
269
+
270
+ Wealth, like income and taxes, is highly concentrated; the richest 10% of the adult population possess 72% of the country's household wealth, while the bottom half claim only 2%.[448] According to a September 2017 report by the Federal Reserve, the top 1% controlled 38.6% of the country's wealth in 2016.[449] According to a 2018 study by the OECD, the United States has a larger percentage of low-income workers than almost any other developed nation. This is largely because at-risk workers get almost no government support and are further set back by a very weak collective bargaining system.[450] The top one percent of income-earners accounted for 52 percent of the income gains from 2009 to 2015, where income is defined as market income excluding government transfers.[451] In 2018, U.S. income inequality reached the highest level ever recorded by the Census Bureau.[452]
271
+
272
+ After years of stagnation, median household income reached a record high in 2016 following two consecutive years of record growth. However, income inequality remains at record highs, with the top fifth of earners taking home more than half of all income.[454] The rise in the share of total annual income received by the top one percent, which has more than doubled from nine percent in 1976 to 20 percent in 2011, has significantly affected income inequality,[455] leaving the United States with one of the widest income distributions among OECD nations.[456] The extent and relevance of income inequality is a matter of debate.[457][458][459]
273
+
274
+ Between June 2007 and November 2008, the global recession led to falling asset prices around the world. Assets owned by Americans lost about a quarter of their value.[460] Household wealth fell by $14 trillion from its peak in the second quarter of 2007, but has since risen to $14 trillion above its 2006 level.[461] At the end of 2014, household debt amounted to $11.8 trillion,[462] down from $13.8 trillion at the end of 2008.[463]
275
+
276
+ There were about 578,424 sheltered and unsheltered homeless persons in the US in January 2014, with almost two-thirds staying in an emergency shelter or transitional housing program.[464] In 2011, 16.7 million children lived in food-insecure households, about 35% more than 2007 levels, though only 1.1% of U.S. children, or 845,000, saw reduced food intake or disrupted eating patterns at some point during the year, and most cases were not chronic.[465] As of June 2018, 40 million people, roughly 12.7% of the U.S. population, were living in poverty, with 18.5 million of those living in deep poverty (a family income below one-half of the poverty threshold) and over five million living "in 'Third World' conditions." In 2016, 13.3 million children were living in poverty, making up 32.6% of the impoverished population.[466] In 2017, the U.S. state or territory with the lowest poverty rate was New Hampshire (7.6%), and the one with the highest was American Samoa (65%).[467][468][469]
277
+
278
+ Personal transportation is dominated by automobiles, which operate on a network of 4 million miles (6.4 million kilometers) of public roads.[471] The United States has the world's second-largest automobile market,[472] and has the highest rate of per-capita vehicle ownership in the world, with 765 vehicles per 1,000 Americans (1996).[473][needs update] In 2017, there were 255,009,283 non-two-wheel motor vehicles, or about 910 vehicles per 1,000 people.[474]
279
+
280
+ The civil airline industry is entirely privately owned and has been largely deregulated since 1978, while most major airports are publicly owned.[475] The three largest airlines in the world by passengers carried are US-based; American Airlines is number one after its 2013 acquisition by US Airways.[476] Of the world's 50 busiest passenger airports, 16 are in the United States, including the busiest, Hartsfield–Jackson Atlanta International Airport.[477]
281
+
282
+ The United States energy market is about 29,000 terawatt hours per year.[478] In 2005, 40% of this energy came from petroleum, 23% from coal, and 22% from natural gas. The remainder was supplied by nuclear and renewable energy sources.[479]
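+
+ Converting those shares into absolute annual figures (a minimal sketch; the remainder for nuclear and renewables is inferred from the percentages above):
+
+ # U.S. energy market: percentage shares applied to the ~29,000 TWh/year total.
+ total_twh = 29_000
+ shares = {"petroleum": 0.40, "coal": 0.23, "natural gas": 0.22}
+ shares["nuclear and renewables"] = 1 - sum(shares.values())  # stated remainder
+ for source, share in shares.items():
+     print(f"{source}: {share * total_twh:,.0f} TWh")  # e.g. petroleum: 11,600 TWh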
283
+
284
+ Since 2007, the United States' total greenhouse gas emissions have been the second highest by country, exceeded only by China.[480] The United States has historically been the world's largest producer of greenhouse gases, and greenhouse gas emissions per capita remain high.[481]
285
+
286
+ The United States is home to many cultures and a wide variety of ethnic groups, traditions, and values.[483][484] Aside from the Native American, Native Hawaiian, and Native Alaskan populations, nearly all Americans or their ancestors settled or immigrated within the past five centuries.[485] Mainstream American culture is a Western culture largely derived from the traditions of European immigrants with influences from many other sources, such as traditions brought by slaves from Africa.[483][486] More recent immigration from Asia and especially Latin America has added to a cultural mix that has been described as both a homogenizing melting pot, and a heterogeneous salad bowl in which immigrants and their descendants retain distinctive cultural characteristics.[483]
287
+
288
+ Americans have traditionally been characterized by a strong work ethic, competitiveness, and individualism,[487] as well as a unifying belief in an "American creed" emphasizing liberty, equality, private property, democracy, rule of law, and a preference for limited government.[488] Americans are extremely charitable by global standards. According to a 2006 British study, Americans gave 1.67% of GDP to charity, more than any other nation studied.[489][490][491]
289
+
290
+ The American Dream, or the perception that Americans enjoy high social mobility, plays a key role in attracting immigrants.[492] Whether this perception is accurate has been a topic of debate.[493][494][495][496][420][497] While mainstream culture holds that the United States is a classless society,[498] scholars identify significant differences between the country's social classes, affecting socialization, language, and values.[499] While Americans tend to greatly value socioeconomic achievement, being ordinary or average is also generally seen as a positive attribute.[500]
291
+
292
+ In the 18th and early 19th centuries, American art and literature took most of its cues from Europe. Writers such as Washington Irving, Nathaniel Hawthorne, Edgar Allan Poe, and Henry David Thoreau established a distinctive American literary voice by the middle of the 19th century. Mark Twain and poet Walt Whitman were major figures in the century's second half; Emily Dickinson, virtually unknown during her lifetime, is now recognized as an essential American poet.[501] A work seen as capturing fundamental aspects of the national experience and character—such as Herman Melville's Moby-Dick (1851), Twain's The Adventures of Huckleberry Finn (1885), F. Scott Fitzgerald's The Great Gatsby (1925) and Harper Lee's To Kill a Mockingbird (1960)—may be dubbed the "Great American Novel."[502]
293
+
294
+ Twelve U.S. citizens have won the Nobel Prize in Literature, most recently Bob Dylan in 2016. William Faulkner, Ernest Hemingway and John Steinbeck are often named among the most influential writers of the 20th century.[503] Popular literary genres such as the Western and hardboiled crime fiction developed in the United States. The Beat Generation writers opened up new literary approaches, as have postmodernist authors such as John Barth, Thomas Pynchon, and Don DeLillo.[504]
295
+
296
+ The transcendentalists, led by Thoreau and Ralph Waldo Emerson, established the first major American philosophical movement. After the Civil War, Charles Sanders Peirce and then William James and John Dewey were leaders in the development of pragmatism. In the 20th century, the work of W. V. O. Quine and Richard Rorty, and later Noam Chomsky, brought analytic philosophy to the fore of American philosophical academia. John Rawls and Robert Nozick also led a revival of political philosophy.
297
+
298
+ In the visual arts, the Hudson River School was a mid-19th-century movement in the tradition of European naturalism. The 1913 Armory Show in New York City, an exhibition of European modernist art, shocked the public and transformed the U.S. art scene.[505] Georgia O'Keeffe, Marsden Hartley, and others experimented with new, individualistic styles. Major artistic movements such as the abstract expressionism of Jackson Pollock and Willem de Kooning and the pop art of Andy Warhol and Roy Lichtenstein developed largely in the United States. The tide of modernism and then postmodernism has brought fame to American architects such as Frank Lloyd Wright, Philip Johnson, and Frank Gehry.[506] Americans have long been important in the modern artistic medium of photography, with major photographers including Alfred Stieglitz, Edward Steichen, Edward Weston, and Ansel Adams.[507]
299
+
300
+ Mainstream American cuisine is similar to that in other Western countries. Wheat is the primary cereal grain, with about three-quarters of grain products made of wheat flour.[508] Many dishes use indigenous ingredients, such as turkey, venison, potatoes, sweet potatoes, corn, squash, and maple syrup, which were consumed by Native Americans and early European settlers.[509] These homegrown foods are part of a shared national menu on one of America's most popular holidays, Thanksgiving, when some Americans make traditional foods to celebrate the occasion.[510]
301
+
302
+ The American fast food industry, the world's largest,[511] pioneered the drive-through format in the 1940s.[512] Characteristic dishes such as apple pie, fried chicken, pizza, hamburgers, and hot dogs derive from the recipes of various immigrants. French fries, Mexican dishes such as burritos and tacos, and pasta dishes freely adapted from Italian sources are widely consumed.[513] Americans drink three times as much coffee as tea.[514] Marketing by U.S. industries is largely responsible for making orange juice and milk ubiquitous breakfast beverages.[515][516]
303
+
304
+ Although little known at the time, Charles Ives's work of the 1910s established him as the first major U.S. composer in the classical tradition, while experimentalists such as Henry Cowell and John Cage created a distinctive American approach to classical composition. Aaron Copland and George Gershwin developed a new synthesis of popular and classical music.
305
+
306
+ The rhythmic and lyrical styles of African-American music have deeply influenced American music at large, distinguishing it from European and African traditions. Elements from folk idioms such as the blues and what is now known as old-time music were adopted and transformed into popular genres with global audiences. Jazz was developed by innovators such as Louis Armstrong and Duke Ellington early in the 20th century. Country music developed in the 1920s, and rhythm and blues in the 1940s.[517]
307
+
308
+ Elvis Presley and Chuck Berry were among the mid-1950s pioneers of rock and roll. Rock bands such as Metallica, the Eagles, and Aerosmith are among the highest grossing in worldwide sales.[518][519][520] In the 1960s, Bob Dylan emerged from the folk revival to become one of America's most celebrated songwriters and James Brown led the development of funk.
309
+
310
+ More recent American creations include hip hop and house music. American pop stars such as Elvis Presley, Michael Jackson, and Madonna have become global celebrities,[517] as have contemporary musical artists such as Taylor Swift, Britney Spears, Katy Perry, Beyoncé, Jay-Z, Eminem, Kanye West, and Ariana Grande.[521]
311
+
312
+ Hollywood, a northern district of Los Angeles, California, is one of the leaders in motion picture production.[522] The world's first commercial motion picture exhibition was given in New York City in 1894, using Thomas Edison's Kinetoscope.[523] Since the early 20th century, the U.S. film industry has largely been based in and around Hollywood, although in the 21st century an increasing number of films are not made there, and film companies have been subject to the forces of globalization.[524]
313
+
314
+ Director D. W. Griffith, the top American filmmaker during the silent film period, was central to the development of film grammar, and producer/entrepreneur Walt Disney was a leader in both animated film and movie merchandising.[525] Directors such as John Ford redefined the image of the American Old West, and, like others such as John Huston, broadened the possibilities of cinema with location shooting. The industry enjoyed its golden years, in what is commonly referred to as the "Golden Age of Hollywood," from the early sound period until the early 1960s,[526] with screen actors such as John Wayne and Marilyn Monroe becoming iconic figures.[527][528] In the 1970s, "New Hollywood" or the "Hollywood Renaissance"[529] was defined by grittier films influenced by French and Italian realist pictures of the post-war period.[530] In more recent times, directors such as Steven Spielberg, George Lucas and James Cameron have gained renown for their blockbuster films, often characterized by high production costs and earnings.
315
+
316
+ Notable films topping the American Film Institute's AFI 100 list include Orson Welles's Citizen Kane (1941), which is frequently cited as the greatest film of all time,[531][532] Casablanca (1942), The Godfather (1972), Gone with the Wind (1939), Lawrence of Arabia (1962), The Wizard of Oz (1939), The Graduate (1967), On the Waterfront (1954), Schindler's List (1993), Singin' in the Rain (1952), It's a Wonderful Life (1946) and Sunset Boulevard (1950).[533] The Academy Awards, popularly known as the Oscars, have been held annually by the Academy of Motion Picture Arts and Sciences since 1929,[534] and the Golden Globe Awards have been held annually since January 1944.[535]
317
+
318
+ American football is by several measures the most popular spectator sport;[537] the National Football League (NFL) has the highest average attendance of any sports league in the world, and the Super Bowl is watched by tens of millions globally. Baseball has been regarded as the U.S. national sport since the late 19th century, with Major League Baseball (MLB) being the top league. Basketball and ice hockey are the country's next two leading professional team sports, with the top leagues being the National Basketball Association (NBA) and the National Hockey League (NHL). College football and basketball attract large audiences.[538] In soccer, the country hosted the 1994 FIFA World Cup, the men's national team has qualified for ten World Cups, and the women's team has won the FIFA Women's World Cup four times; Major League Soccer is the sport's top league in the United States (featuring 23 American and three Canadian teams). The market for professional sports in the United States is roughly $69 billion, roughly 50% larger than that of all of Europe, the Middle East, and Africa combined.[539]
319
+
320
+ Eight Olympic Games have taken place in the United States. The 1904 Summer Olympics in St. Louis, Missouri were the first ever Olympic Games held outside of Europe.[540] As of 2017, the United States has won 2,522 medals at the Summer Olympic Games, more than any other country, and 305 in the Winter Olympic Games, the second most behind Norway.[541]
321
+ While most major U.S. sports such as baseball and American football have evolved out of European practices, basketball, volleyball, skateboarding, and snowboarding are American inventions, some of which have become popular worldwide. Lacrosse and surfing arose from Native American and Native Hawaiian activities that predate Western contact.[542] The most watched individual sports are golf and auto racing, particularly NASCAR.[543][544]
322
+
323
+ The four major broadcasters in the U.S. are the National Broadcasting Company (NBC), Columbia Broadcasting System (CBS), American Broadcasting Company (ABC), and Fox Broadcasting Company (FOX). The four major broadcast television networks are all commercial entities. Cable television offers hundreds of channels catering to a variety of niches.[545] Americans listen to radio programming, also largely commercial, on average just over two-and-a-half hours a day.[546]
324
+
325
+ In 1998, the number of U.S. commercial radio stations had grown to 4,793 AM stations and 5,662 FM stations. In addition, there are 1,460 public radio stations. Most of these stations are run by universities and public authorities for educational purposes and are financed by public or private funds, subscriptions, and corporate underwriting. Much public-radio broadcasting is supplied by NPR, which was incorporated in February 1970 under the Public Broadcasting Act of 1967; its television counterpart, PBS, was created by the same legislation. As of September 30, 2014, there were 15,433 licensed full-power radio stations in the U.S., according to the Federal Communications Commission (FCC).[547]
326
+
327
+ Well-known newspapers include The Wall Street Journal, The New York Times, and USA Today.[548] Although the cost of publishing has increased over the years, the price of newspapers has generally remained low, forcing newspapers to rely more on advertising revenue and on articles provided by a major wire service, such as the Associated Press or Reuters, for their national and world coverage. With very few exceptions, all the newspapers in the U.S. are privately owned, either by large chains such as Gannett or McClatchy, which own dozens or even hundreds of newspapers; by small chains that own a handful of papers; or, in a situation that is increasingly rare, by individuals or families. Major cities often have "alternative weeklies" to complement the mainstream daily papers, such as New York City's The Village Voice or Los Angeles' LA Weekly. Major cities may also support a local business journal, trade papers relating to local industries, and papers for local ethnic and social groups. Aside from web portals and search engines, the most popular websites are Facebook, YouTube, Wikipedia, Yahoo!, eBay, Amazon, and Twitter.[549]
328
+
329
+ More than 800 publications are produced in Spanish, the second most commonly used language in the United States behind English.[550][551]
330
+
331
+
332
+
en/5866.html.txt ADDED
@@ -0,0 +1,112 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+ Money is any item or verifiable record that is generally accepted as payment for goods and services and repayment of debts, such as taxes, in a particular country or socio-economic context.[1][2][3] The main functions of money are distinguished as: a medium of exchange, a unit of account, a store of value and, sometimes, a standard of deferred payment.[4][5] Any item or verifiable record that fulfils these functions can be considered money.
4
+
5
+ Money is historically an emergent market phenomenon establishing a commodity money, but nearly all contemporary money systems are based on fiat money.[4] Fiat money, like any check or note of debt, is without use value as a physical commodity.[citation needed] It derives its value by being declared by a government to be legal tender; that is, it must be accepted as a form of payment within the boundaries of the country, for "all debts, public and private".[6][better source needed] Counterfeit money can cause good money to lose its value.
6
+
7
+ The money supply of a country consists of currency (banknotes and coins) and, depending on the particular definition used, one or more types of bank money (the balances held in checking accounts, savings accounts, and other types of bank accounts). Bank money, which consists only of records (mostly computerized in modern banking), forms by far the largest part of broad money in developed countries.[7][8][9]
8
+
9
+ The word "money" is believed to originate from a temple of Juno, on Capitoline, one of Rome's seven hills. In the ancient world Juno was often associated with money. The temple of Juno Moneta at Rome was the place where the mint of Ancient Rome was located.[10] The name "Juno" may derive from the Etruscan goddess Uni (which means "the one", "unique", "unit", "union", "united") and "Moneta" either from the Latin word "monere" (remind, warn, or instruct) or the Greek word "moneres" (alone, unique).
10
+
11
+ In the Western world, a prevalent term for coin-money has been specie, stemming from Latin in specie, meaning 'in kind'.[11]
12
+
13
+ The use of barter-like methods may date back to at least 100,000 years ago, though there is no evidence of a society or economy that relied primarily on barter.[12][13] Instead, non-monetary societies operated largely along the principles of gift economy and debt.[14][15] When barter did in fact occur, it was usually between either complete strangers or potential enemies.[16]
14
+
15
+ Many cultures around the world eventually developed the use of commodity money. The Mesopotamian shekel was a unit of weight, and relied on the mass of something like 160 grains of barley.[17] The first usage of the term came from Mesopotamia circa 3000 BC. Societies in the Americas, Asia, Africa and Australia used shell money – often, the shells of the cowry (Cypraea moneta L. or C. annulus L.). According to Herodotus, the Lydians were the first people to introduce the use of gold and silver coins.[18] It is thought by modern scholars that these first stamped coins were minted around 650–600 BC.[19]
16
+
17
+ The system of commodity money eventually evolved into a system of representative money.[citation needed] This occurred because gold and silver merchants or banks would issue receipts to their depositors – redeemable for the commodity money deposited. Eventually, these receipts became generally accepted as a means of payment and were used as money. Paper money or banknotes were first used in China during the Song dynasty. These banknotes, known as "jiaozi", evolved from promissory notes that had been used since the 7th century. However, they did not displace commodity money, and were used alongside coins. In the 13th century, paper money became known in Europe through the accounts of travelers, such as Marco Polo and William of Rubruck.[20] Marco Polo's account of paper money during the Yuan dynasty is the subject of a chapter of his book, The Travels of Marco Polo, titled "How the Great Kaan Causeth the Bark of Trees, Made Into Something Like Paper, to Pass for Money All Over his Country."[21] Banknotes were first issued in Europe by Stockholms Banco in 1661, and were again also used alongside coins. The gold standard, a monetary system where the medium of exchange is paper notes that are convertible into pre-set, fixed quantities of gold, replaced the use of gold coins as currency in the 17th–19th centuries in Europe. These gold standard notes were made legal tender, and redemption into gold coins was discouraged. By the beginning of the 20th century almost all countries had adopted the gold standard, backing their legal tender notes with fixed amounts of gold.
18
+
19
+ After World War II and the Bretton Woods Conference, most countries adopted fiat currencies that were fixed to the U.S. dollar. The U.S. dollar was in turn fixed to gold. In 1971 the U.S. government suspended the convertibility of the U.S. dollar to gold. After this many countries de-pegged their currencies from the U.S. dollar, and most of the world's currencies became unbacked by anything except the governments' fiat of legal tender and the ability to convert the money into goods via payment. According to proponents of modern money theory, fiat money is also backed by taxes. By imposing taxes, states create demand for the currency they issue.[22]
20
+
21
+
22
+
23
+ In Money and the Mechanism of Exchange (1875), William Stanley Jevons famously analyzed money in terms of four functions: a medium of exchange, a common measure of value (or unit of account), a standard of value (or standard of deferred payment), and a store of value. By 1919, Jevons's four functions of money were summarized in the couplet: "Money's a matter of functions four, / A Medium, a Measure, a Standard, a Store."
24
+
25
+ This couplet would later become widely popular in macroeconomics textbooks.[24] Most modern textbooks now list only three functions, that of medium of exchange, unit of account, and store of value, not considering a standard of deferred payment as a distinguished function, but rather subsuming it in the others.[4][25][26]
26
+
27
+ There have been many historical disputes regarding the combination of money's functions, some arguing that they need more separation and that a single unit is insufficient to deal with them all. One of these arguments is that the role of money as a medium of exchange is in conflict with its role as a store of value: its role as a store of value requires holding it without spending, whereas its role as a medium of exchange requires it to circulate.[5] Others argue that storing of value is just deferral of the exchange, but does not diminish the fact that money is a medium of exchange that can be transported both across space and time. The term "financial capital" is a more general and inclusive term for all liquid instruments, whether or not they are a uniformly recognized tender.
28
+
29
+ When money is used to intermediate the exchange of goods and services, it is performing a function as a medium of exchange. It thereby avoids the inefficiencies of a barter system, such as the "coincidence of wants" problem. Money's most important usage is as a method for comparing the values of dissimilar objects.
30
+
31
+ A unit of account (in economics)[27] is a standard numerical monetary unit of measurement of the market value of goods, services, and other transactions. Also known as a "measure" or "standard" of relative worth and deferred payment, a unit of account is a necessary prerequisite for the formulation of commercial agreements that involve debt.
32
+
33
+ Money acts as a standard measure and common denomination of trade. It is thus a basis for quoting and bargaining of prices. It is necessary for developing efficient accounting systems.
34
+
35
+ While standard of deferred payment is distinguished by some texts,[5] particularly older ones, other texts subsume this under other functions.[4][25][26][clarification needed] A "standard of deferred payment" is an accepted way to settle a debt – a unit in which debts are denominated, and the status of money as legal tender, in those jurisdictions which have this concept, states that it may function for the discharge of debts. When debts are denominated in money, the real value of debts may change due to inflation and deflation, and for sovereign and international debts via debasement and devaluation.
36
+
37
+ To act as a store of value, a money must be able to be reliably saved, stored, and retrieved – and be predictably usable as a medium of exchange when it is retrieved. The value of the money must also remain stable over time. Some have argued that inflation, by reducing the value of money, diminishes the ability of the money to function as a store of value.[4]
38
+
39
+ To fulfill its various functions, money must have certain properties:[28]
40
+
41
+ In economics, money is any financial instrument that can fulfill the functions of money (detailed above). These financial instruments together are collectively referred to as the money supply of an economy. In other words, the money supply is the total amount of financial instruments within a specific economy available for purchasing goods or services. Since the money supply consists of various financial instruments (usually currency, demand deposits and various other types of deposits), the amount of money in an economy is measured by adding together these financial instruments, creating a monetary aggregate.
42
+
43
+ Modern monetary theory distinguishes among different ways to measure the stock of money or money supply, reflected in different types of monetary aggregates, using a categorization system that focuses on the liquidity of the financial instrument used as money. The most commonly used monetary aggregates (or types of money) are conventionally designated M1, M2 and M3. These are successively larger aggregate categories: M1 is currency (coins and bills) plus demand deposits (such as checking accounts); M2 is M1 plus savings accounts and time deposits under $100,000; and M3 is M2 plus larger time deposits and similar institutional accounts. M1 includes only the most liquid financial instruments, and M3 relatively illiquid instruments. The precise definition of M1, M2 etc. may be different in different countries.
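+
+ A minimal sketch of how these nested aggregates relate, using made-up component values (only the nesting logic follows the definitions above; the numbers are purely illustrative):
+
+ # Hypothetical components of a money supply, in billions.
+ currency = 900                 # coins and bills in circulation
+ demand_deposits = 1_200        # checking accounts
+ savings_and_small_td = 5_000   # savings accounts and time deposits under $100,000
+ large_td = 1_500               # larger time deposits and institutional accounts
+
+ m1 = currency + demand_deposits     # only the most liquid instruments
+ m2 = m1 + savings_and_small_td      # adds near-money
+ m3 = m2 + large_td                  # adds relatively illiquid instruments
+ print(m1, m2, m3)                   # 2100 7100 8600: each aggregate contains the previous one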
44
+
45
+ Another measure of money, M0, is also used; unlike the other measures, it does not represent actual purchasing power by firms and households in the economy.[citation needed] M0 is base money, or the amount of money actually issued by the central bank of a country. It is measured as currency plus deposits of banks and other institutions at the central bank. M0 is also the only money that can satisfy the reserve requirements of commercial banks.
46
+
47
+ In current economic systems, money is created by two procedures:
48
+
49
+ Legal tender, or narrow money (M0) is the cash money created by a Central Bank by minting coins and printing banknotes.
50
+
51
+ Bank money, or broad money (M1/M2) is the money created by private banks through the recording of loans as deposits of borrowing clients, with partial support indicated by the cash ratio. Currently, bank money is created as electronic money.
52
+
53
+ In most countries, the majority of money is created as M1/M2 by commercial banks making loans. Contrary to some popular misconceptions, banks do not act simply as intermediaries, lending out deposits that savers place with them, and do not depend on central bank money (M0) to create new loans and deposits.[29]
54
+
55
+ "Market liquidity" describes how easily an item can be traded for another item, or into the common currency within an economy. Money is the most liquid asset because it is universally recognised and accepted as the common currency. In this way, money gives consumers the freedom to trade goods and services easily without having to barter.
56
+
57
+ Liquid financial instruments are easily tradable and have low transaction costs. There should be no (or minimal) spread between the prices to buy and sell the instrument being used as money.
58
+
59
+ Many items have been used as commodity money such as naturally scarce precious metals, conch shells, barley, beads etc., as well as many other things that are thought of as having value. Commodity money value comes from the commodity out of which it is made. The commodity itself constitutes the money, and the money is the commodity.[30] Examples of commodities that have been used as mediums of exchange include gold, silver, copper, rice, Wampum, salt, peppercorns, large stones, decorated belts, shells, alcohol, cigarettes, cannabis, candy, etc. These items were sometimes used in a metric of perceived value in conjunction with one another, in various commodity valuation or price system economies. Use of commodity money is similar to barter, but a commodity money provides a simple and automatic unit of account for the commodity which is being used as money. Although some gold coins such as the Krugerrand are considered legal tender, there is no record of their face value on either side of the coin. The rationale for this is that emphasis is laid on their direct link to the prevailing value of their fine gold content.[31]
60
+ American Eagles are imprinted with their gold content and legal tender face value.[32]
61
+
62
+ In 1875, the British economist William Stanley Jevons described the money used at the time as "representative money". Representative money is money that consists of token coins, paper money or other physical tokens such as certificates, that can be reliably exchanged for a fixed quantity of a commodity such as gold or silver. The value of representative money stands in direct and fixed relation to the commodity that backs it, while not itself being composed of that commodity.[33]
63
+
64
+ Fiat money or fiat currency is money whose value is not derived from any intrinsic value or guarantee that it can be converted into a valuable commodity (such as gold). Instead, it has value only by government order (fiat). Usually, the government declares the fiat currency (typically notes and coins from a central bank, such as the Federal Reserve System in the U.S.) to be legal tender, making it unlawful not to accept the fiat currency as a means of repayment for all debts, public and private.[34][35]
65
+
66
+ Some bullion coins such as the Australian Gold Nugget and American Eagle are legal tender, however, they trade based on the market price of the metal content as a commodity, rather than their legal tender face value (which is usually only a small fraction of their bullion value).[32][36]
67
+
68
+ Fiat money, if physically represented in the form of currency (paper or coins), can be accidentally damaged or destroyed. However, fiat money has an advantage over representative or commodity money, in that the same laws that created the money can also define rules for its replacement in case of damage or destruction. For example, the U.S. government will replace mutilated Federal Reserve Notes (U.S. fiat money) if at least half of the physical note can be reconstructed, or if it can be otherwise proven to have been destroyed.[37] By contrast, commodity money which has been lost or destroyed cannot be recovered.
69
+
70
+ These factors led to the shift of the store of value being the metal itself: at first silver, then both silver and gold, and at one point bronze as well. Today copper and other non-precious metals are used for coins. Metals were mined, weighed, and stamped into coins. This was to assure the individual taking the coin that he was getting a certain known weight of precious metal. Coins could be counterfeited, but they also created a new unit of account, which helped lead to banking. Archimedes' principle provided the next link: coins could now be easily tested for their fine weight of metal, and thus the value of a coin could be determined, even if it had been shaved, debased or otherwise tampered with (see Numismatics).
71
+
72
+ In most major economies using coinage, copper, silver and gold formed three tiers of coins. Gold coins were used for large purchases, payment of the military and backing of state activities. Silver coins were used for midsized transactions, and as a unit of account for taxes, dues, contracts and fealty, while copper coins represented the coinage of common transaction. This system had been used in ancient India since the time of the Mahajanapadas. In Europe, this system worked through the medieval period because there was virtually no new gold, silver or copper introduced through mining or conquest.[citation needed] Thus the overall ratios of the three coinages remained roughly equivalent.
73
+
74
+ In premodern China, the need for credit and for circulating a medium that was less of a burden than exchanging thousands of copper coins led to the introduction of paper money, commonly known today as banknotes. This economic phenomenon was a slow and gradual process that took place from the late Tang dynasty (618–907) into the Song dynasty (960–1279). It began as a means for merchants to exchange heavy coinage for receipts of deposit issued as promissory notes from shops of wholesalers, notes that were valid for temporary use in a small regional territory. In the 10th century, the Song dynasty government began circulating these notes amongst the traders in their monopolized salt industry. The Song government granted several shops the sole right to issue banknotes, and in the early 12th century the government finally took over these shops to produce state-issued currency. Yet the banknotes issued were still regionally valid and temporary; it was not until the mid 13th century that a standard and uniform government issue of paper money was made into an acceptable nationwide currency. The already widespread methods of woodblock printing and then Pi Sheng's movable type printing by the 11th century were the impetus for the massive production of paper money in premodern China.
75
+
76
+ At around the same time in the medieval Islamic world, a vigorous monetary economy was created during the 7th–12th centuries on the basis of the expanding levels of circulation of a stable high-value currency (the dinar). Innovations introduced by economists, traders and merchants of the Muslim world include the earliest uses of credit,[38] cheques, savings accounts, transactional accounts, loaning, trusts, exchange rates, the transfer of credit and debt,[39] and banking institutions for loans and deposits.[39][need quotation to verify]
77
+
78
+ In Europe, paper money was first introduced in Sweden in 1661. Sweden was rich in copper, thus, because of copper's low value, extraordinarily big coins (often weighing several kilograms) had to be made. The advantages of paper currency were numerous: it reduced transport of gold and silver, and thus lowered the risks; it made loaning gold or silver at interest easier, since the specie (gold or silver) never left the possession of the lender until someone else redeemed the note; and it allowed for a division of currency into credit and specie backed forms. It enabled the sale of stock in joint stock companies, and the redemption of those shares in paper.
79
+
80
+ However, these advantages came with disadvantages. First, since a note has no intrinsic value, there was nothing to stop issuing authorities from printing more notes than they had specie to back them with. Second, because it increased the money supply, it increased inflationary pressures, a fact observed by David Hume in the 18th century. The result is that paper money would often lead to an inflationary bubble, which could collapse if people began demanding hard money, causing the demand for paper notes to fall to zero. The printing of paper money was also associated with wars, and financing of wars, and therefore regarded as part of maintaining a standing army. For these reasons, paper currency was held in suspicion and hostility in Europe and America. It was also addictive, since the speculative profits of trade and capital creation were quite large. Major nations established mints to print money and mint coins, and branches of their treasury to collect taxes and hold gold and silver stock.
81
+
82
+ At this time both silver and gold were considered legal tender, and accepted by governments for taxes. However, the instability in the ratio between the two grew over the course of the 19th century, with the increase both in the supply of these metals, particularly silver, and in trade. The parallel use of both metals is called bimetallism, and the attempt to create a bimetallic standard, where both gold- and silver-backed currency remained in circulation, occupied the efforts of inflationists. Governments at this point could use currency as an instrument of policy, printing paper currency such as the United States greenback to pay for military expenditures. They could also set the terms at which they would redeem notes for specie, by limiting the amount of purchase, or the minimum amount that could be redeemed.
83
+
84
+ By 1900, most of the industrializing nations were on some form of gold standard, with paper notes and silver coins constituting the circulating medium. Private banks and governments across the world followed Gresham's law: keeping the gold and silver they received as payment but paying out in notes. This did not happen all around the world at the same time, but occurred sporadically, generally in times of war or financial crisis, beginning in the early part of the 20th century and continuing across the world until the late 20th century, when the regime of floating fiat currencies came into force. One of the last countries to break away from the gold standard was the United States in 1971.
85
+
86
+ No country anywhere in the world today has an enforceable gold standard or silver standard currency system.
87
+
88
+ Commercial bank money or demand deposits are claims against financial institutions that can be used for the purchase of goods and services. A demand deposit account is an account from which funds can be withdrawn at any time by check or cash withdrawal without giving the bank or financial institution any prior notice. Banks have the legal obligation to return funds held in demand deposits immediately upon demand (or 'at call'). Demand deposit withdrawals can be performed in person, via checks or bank drafts, using automatic teller machines (ATMs), or through online banking.[40]
89
+
90
+ Commercial bank money is created through fractional-reserve banking, the banking practice where banks keep only a fraction of their deposits in reserve (as cash and other highly liquid assets) and lend out the remainder, while maintaining the simultaneous obligation to redeem all these deposits upon demand.[41][page needed][42] Commercial bank money differs from commodity and fiat money in two ways: firstly it is non-physical, as its existence is only reflected in the account ledgers of banks and other financial institutions, and secondly, there is some element of risk that the claim will not be fulfilled if the financial institution becomes insolvent. The process of fractional-reserve banking has a cumulative effect of money creation by commercial banks, as it expands the money supply (cash and demand deposits) beyond what it would otherwise be. Because of the prevalence of fractional reserve banking, the broad money supply of most countries is a multiple (greater than 1) of the amount of base money created by the country's central bank. That multiple (called the money multiplier) is determined by the reserve requirement or other financial ratio requirements imposed by financial regulators.
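+
+ The scale of this expansion can be illustrated with the textbook "simple money multiplier", which the paragraph above implies but does not spell out. If every bank holds the required reserve fraction R of each deposit and lends out the rest, and every loan is redeposited, an initial deposit D supports total deposits of
+
+ D\left(1 + (1-R) + (1-R)^{2} + \cdots\right) = \frac{D}{R},
+
+ so with a 10% reserve requirement (R = 0.1), each unit of base money can support up to 10 units of deposits. Actual multipliers are lower, since banks hold excess reserves and the public holds some currency outside banks.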
91
+
92
+ The money supply of a country is usually held to be the total amount of currency in circulation plus the total value of checking and savings deposits in the commercial banks in the country. In modern economies, relatively little of the money supply is in physical currency. For example, in December 2010 in the U.S., of the $8,853.4 billion in broad money supply (M2), only $915.7 billion (about 10%) consisted of physical coins and paper money.[43]
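+
+ The "about 10%" follows directly from the quoted figures:
+
+ # Share of U.S. broad money (M2) held as physical currency, December 2010.
+ m2_total = 8853.4   # billions of dollars (from the text)
+ physical = 915.7    # billions of dollars (from the text)
+ print(f"{physical / m2_total:.1%}")  # -> 10.3%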
93
+
94
+ The development of computer technology in the second part of the twentieth century allowed money to be represented digitally. By 1990, in the United States all money transferred between its central bank and commercial banks was in electronic form. By the 2000s most money existed as digital currency in bank databases.[44] In 2012, by number of transactions, 20 to 58 percent of transactions were electronic (depending on the country).[45]
95
+
96
+ Non-national digital currencies were developed in the early 2000s. In particular, Flooz and Beenz had gained momentum before the Dot-com bubble.[citation needed] Not much innovation occurred until the conception of Bitcoin in 2008, which introduced the concept of a cryptocurrency – a decentralised trustless currency.[46]
97
+
98
+ When gold and silver are used as money, the money supply can grow only if the supply of these metals is increased by mining. The rate of increase accelerates during periods of gold rushes and discoveries, such as when Columbus discovered the New World and brought back gold and silver to Spain, or when gold was discovered in California in 1848. This causes inflation, as the value of gold goes down. However, if the rate of gold mining cannot keep up with the growth of the economy, gold becomes relatively more valuable, and prices (denominated in gold) will drop, causing deflation. Deflation was the more typical situation for over a century when gold and paper money backed by gold were used as money in the 18th and 19th centuries.
99
+
100
+ Modern day monetary systems are based on fiat money and are no longer tied to the value of gold. The control of the amount of money in the economy is known as monetary policy. Monetary policy is the process by which a government, central bank, or monetary authority manages the money supply to achieve specific goals. Usually the goal of monetary policy is to accommodate economic growth in an environment of stable prices. For example, it is clearly stated in the Federal Reserve Act that the Board of Governors and the Federal Open Market Committee should seek "to promote effectively the goals of maximum employment, stable prices, and moderate long-term interest rates."[47]
101
+
102
+ A failed monetary policy can have significant detrimental effects on an economy and the society that depends on it. These include hyperinflation, stagflation, recession, high unemployment, shortages of imported goods, inability to export goods, and even total monetary collapse and the adoption of a much less efficient barter economy. This happened in Russia, for instance, after the fall of the Soviet Union.
103
+
104
+ Governments and central banks have taken both regulatory and free market approaches to monetary policy. Some of the tools used to control the money supply include open market operations (buying and selling government securities), setting reserve requirements for banks, and adjusting the interest rate at which the central bank lends to commercial banks.
105
+
106
+ In the US, the Federal Reserve is responsible for controlling the money supply, while in the Euro area the respective institution is the European Central Bank. Other central banks with significant impact on global finances are the Bank of Japan, the People's Bank of China and the Bank of England.
107
+
108
+ For many years much of monetary policy was influenced by an economic theory known as monetarism. Monetarism is an economic theory which argues that management of the money supply should be the primary means of regulating economic activity. The stability of the demand for money prior to the 1980s was a key finding of Milton Friedman and Anna Schwartz[48] supported by the work of David Laidler,[49] and many others. The nature of the demand for money changed during the 1980s owing to technical, institutional, and legal factors[clarification needed] and the influence of monetarism has since decreased.
109
+
110
+ Counterfeit money is imitation currency produced without the legal sanction of the state or government. Producing or using counterfeit money is a form of fraud or forgery. Counterfeiting is almost as old as money itself. Plated copies (known as fourrées) have been found of Lydian coins, which are thought to be among the first Western coins.[50] Before the introduction of paper money, the most prevalent method of counterfeiting involved mixing base metals with pure gold or silver. A form of counterfeiting is the production of documents by legitimate printers in response to fraudulent instructions. During World War II, the Nazis forged British pounds and American dollars. Today some of the finest counterfeit banknotes are called Superdollars because of their high quality and likeness to the real U.S. dollar. There has been significant counterfeiting of Euro banknotes and coins since the launch of the currency in 2002, but considerably less than for the U.S. dollar.[51]
111
+
112
+ Money laundering is the process in which the proceeds of crime are transformed into ostensibly legitimate money or other assets. However, in a number of legal and regulatory systems the term money laundering has become conflated with other forms of financial crime, and sometimes used more generally to include misuse of the financial system (involving things such as securities, digital currencies, credit cards, and traditional currency), including terrorism financing, tax evasion, and evading of international sanctions.
en/5867.html.txt ADDED
@@ -0,0 +1,76 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+ A metric system is a system of measurement that succeeded the decimalised system based on the metre introduced in France in the 1790s. The historical development of these systems culminated in the definition of the International System of Units (SI), under the oversight of an international standards body.
4
+
5
+ The historical evolution of metric systems has resulted in the recognition of several principles. Each of the fundamental dimensions of nature is expressed by a single base unit of measure. The definition of base units has increasingly been realised from natural principles, rather than by copies of physical artefacts. For quantities derived from the fundamental base units of the system, units derived from the base units are used; e.g., the square metre is the derived unit for area, a quantity derived from length. These derived units are coherent, which means that they involve only products of powers of the base units, without empirical factors. For any given quantity whose unit has a special name and symbol, an extended set of smaller and larger units is defined, related by factors of powers of ten. The unit of time should be the second; the unit of length should be either the metre or a decimal multiple of it; and the unit of mass should be the gram or a decimal multiple of it.
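+
+ A small sketch of the decimal-prefix principle described above (the prefix names and factors shown are the standard ones; the metre is used as an example base unit):
+
+ # Decimal prefixes as powers of ten, applied to the metre.
+ prefixes = {"milli": -3, "centi": -2, "kilo": 3}
+ for name, exp in prefixes.items():
+     print(f"1 {name}metre = 10^{exp} m = {10.0 ** exp} m")
+ # 1 millimetre = 10^-3 m = 0.001 m, ..., 1 kilometre = 10^3 m = 1000.0 m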
6
+
7
+ Metric systems have evolved since the 1790s, as science and technology have evolved, in providing a single universal measuring system. Before and in addition to the SI, some other examples of metric systems are the following: the MKS system of units and the MKSA systems, which are the direct forerunners of the SI; the centimetre–gram–second (CGS) system and its subtypes, the CGS electrostatic (cgs-esu) system, the CGS electromagnetic (cgs-emu) system, and their still-popular blend, the Gaussian system; the metre–tonne–second (MTS) system; and the gravitational metric systems, which can be based on either the metre or the centimetre, and either the gram(-force) or the kilogram(-force).
8
+
9
+ The French revolution (1789–99) provided an opportunity for the French to reform their unwieldy and archaic system of many local weights and measures. Charles Maurice de Talleyrand championed a new system based on natural units, proposing to the French National Assembly in 1790 that such a system be developed. Talleyrand had ambitions that a new natural and standardised system would be embraced worldwide, and was keen to involve other countries in its development. Great Britain ignored invitations to co-operate, so the French Academy of Sciences decided in 1791 to go it alone and set up a commission for the purpose. The commission decided that the standard of length should be based on the size of the Earth. It called the new unit the 'metre' and defined its length as one ten-millionth of the length of a quadrant on the Earth's surface from the equator to the north pole. In 1799, after the length of that quadrant had been surveyed, the new system was launched in France.[1]:145–149
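+
+ In other words, the original definition fixed the length of the Earth quadrant at exactly ten million metres:
+
+ 1\,\text{m} = \frac{Q}{10^{7}} \quad\Rightarrow\quad Q = 10^{7}\,\text{m} = 10\,000\,\text{km},
+
+ which puts the full meridian circumference of the Earth at roughly 4Q, or about 40,000 km.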
10
+
11
+ The units of the metric system, originally taken from observable features of nature, are now defined by seven physical constants being given exact numerical values in terms of the units. In the modern form of the International System of Units (SI), the seven base units are: metre for length, kilogram for mass, second for time, ampere for electric current, kelvin for temperature, candela for luminous intensity and mole for amount of substance. These, together with their derived units, can measure any physical quantity. Derived units may have their own unit name, such as the watt (J/s) and lux (cd/m2), or may just be expressed as combinations of base units, such as velocity (m/s) and acceleration (m/s2).[2]
12
+
13
+ The metric system was designed to have properties that make it easy to use and widely applicable, including units based on the natural world, decimal ratios, prefixes for multiples and sub-multiples, and a structure of base and derived units. It is also a coherent system, which means that its units do not introduce conversion factors not already present in equations relating quantities. It has a property called rationalisation that eliminates certain constants of proportionality in equations of physics.
14
+
15
+ The metric system is extensible, and new derived units are defined as needed in fields such as radiology and chemistry. For example, the katal, a derived unit for catalytic activity equivalent to one mole per second (1 mol/s), was added in 1999.
+
+ Although the metric system has changed and developed since its inception, its basic concepts have hardly changed. Designed for transnational use, it consisted of a basic set of units of measurement, now known as base units. Derived units were built up from the base units using logical rather than empirical relationships while multiples and submultiples of both base and derived units were decimal-based and identified by a standard set of prefixes.
+
+ The base units used in a measurement system must be realisable. Each of the definitions of the base units in the SI is accompanied by a defined mise en pratique [practical realisation] that describes in detail at least one way in which the base unit can be measured.[4] Where possible, definitions of the base units were developed so that any laboratory equipped with proper instruments would be able to realise a standard without reliance on an artefact held by another country. In practice, such realisation is done under the auspices of a mutual acceptance arrangement.[5]
+
+ In the SI, the standard metre is defined as exactly 1/299,792,458 of the distance that light travels in a second. The realisation of the metre depends in turn on precise realisation of the second. There are both astronomical observation methods and laboratory measurement methods that are used to realise the standard metre. Because the speed of light is now exactly defined in terms of the metre, more precise measurement of the speed of light does not result in a more accurate figure for its velocity in standard units, but rather a more accurate definition of the metre. The accuracy of the measured speed of light is considered to be within 1 m/s, and the realisation of the metre is within about 3 parts in 1,000,000,000 (a proportion of 3×10−9:1).
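+
+ The arithmetic behind this definition can be sketched in a few lines of Python (an illustration only; the uncertainty figure is the one quoted above, not an independent result):
+
+ # The speed of light is exact by definition, so once the second is
+ # realised, the metre follows: 1 m is the distance light travels in
+ # 1/299,792,458 of a second.
+ c = 299_792_458            # speed of light in m/s, exact by definition
+ metre = c / 299_792_458    # distance travelled in 1/299,792,458 s -> 1.0
+
+ # A realisation uncertainty of about 3 parts in 10^9 corresponds to
+ # roughly 3 nanometres on a one-metre standard.
+ print(metre, 3e-9 * metre)  # 1.0 3e-09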
+
+ The kilogram was originally defined as the mass of a man-made artefact of platinum-iridium held in a laboratory in France, until the new definition was introduced in May 2019. Replicas made in 1879 at the time of the artefact's fabrication and distributed to signatories of the Metre Convention serve as de facto standards of mass in those countries. Additional replicas have been fabricated since as additional countries have joined the convention. The replicas were subject to periodic validation by comparison to the original, called the IPK. It became apparent that either the IPK or the replicas or both were deteriorating, and they are no longer comparable: they had diverged by 50 μg since fabrication. In effect, the accuracy of the kilogram was no better than 5 parts in a hundred million, or a proportion of 5×10−8:1. The accepted redefinition of SI base units replaced the IPK with an exact definition of the Planck constant, which defines the kilogram in terms of the second and metre.
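+
+ The quoted accuracy figure follows from simple arithmetic, sketched here:
+
+ drift_g = 50e-6               # observed divergence of the replicas, in grams
+ kilogram_g = 1000.0           # mass of the IPK, in grams
+ print(drift_g / kilogram_g)   # 5e-08: 5 parts in a hundred million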
+
+ The metric system base units were originally adopted because they represented fundamental orthogonal dimensions of measurement corresponding to how we perceive nature: a spatial dimension, a time dimension, one for inertia, and later, a more subtle one for the dimension of an "invisible substance" known as electricity or more generally, electromagnetism. One and only one unit in each of these dimensions was defined, unlike older systems where multiple perceptual quantities with the same dimension were prevalent, like inches, feet and yards or ounces, pounds and tons. Units for other quantities like area and volume, which are also spatial dimensional quantities, were derived from the fundamental ones by logical relationships, so that the unit of area, for example, was the unit of length squared.
+
+ Many derived units were already in use before and during the time the metric system evolved, because they represented convenient abstractions of whatever base units were defined for the system, especially in the sciences. So analogous units were scaled in terms of the units of the newly established metric system, and their names adopted into the system. Many of these were associated with electromagnetism. Other perceptual units, like those for volume, which were not defined in terms of base units, were incorporated into the system with definitions in terms of the metric base units, so that the system remained simple. The number of units grew, but the system retained a uniform structure.
+
+ Some customary systems of weights and measures had duodecimal ratios, which meant quantities were conveniently divisible by 2, 3, 4, and 6. But it was difficult to do arithmetic with things like 1⁄4 pound or 1⁄3 foot. There was no system of notation for successive fractions: for example, 1⁄3 of 1⁄3 of a foot was not an inch or any other unit. But the system of counting in decimal ratios did have notation, and the system had the algebraic property of multiplicative closure: a fraction of a fraction, or a multiple of a fraction was a quantity in the system, like 1⁄10 of 1⁄10 which is 1⁄100. So a decimal radix became the ratio between unit sizes of the metric system.
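+
+ The closure argument can be restated with Python's fractions module (a sketch):
+
+ from fractions import Fraction
+
+ # Decimal ratios are closed under multiplication:
+ tenth = Fraction(1, 10)
+ print(tenth * tenth)      # 1/100 -- another power-of-ten ratio
+
+ # A duodecimal-style chain escapes the system: 1/3 of 1/3 of a foot
+ # is 1/9 of a foot, which corresponded to no customary unit.
+ third = Fraction(1, 3)
+ print(third * third)      # 1/9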
+
+ In the metric system, multiples and submultiples of units follow a decimal pattern.[Note 1]
+
+ A common set of decimal-based prefixes that have the effect of multiplication or division by an integer power of ten can be applied to units that are themselves too large or too small for practical use. The concept of using consistent classical (Latin or Greek) names for the prefixes was first proposed in a report by the French Revolutionary Commission on Weights and Measures in May 1793.[3]:89–96 The prefix kilo, for example, is used to multiply the unit by 1000, and the prefix milli is used to indicate a one-thousandth part of the unit. Thus the kilogram and kilometre are a thousand grams and metres respectively, and a milligram and millimetre are one thousandth of a gram and metre respectively. These relations can be written symbolically as:[6]
+
+ 1 mg = 0.001 g; 1 km = 1000 m
+
+ In the early days, multipliers that were positive powers of ten were given Greek-derived prefixes such as kilo- and mega-, and those that were negative powers of ten were given Latin-derived prefixes such as centi- and milli-. However, 1935 extensions to the prefix system did not follow this convention: the prefixes nano- and micro-, for example, have Greek roots.[1]:222–223 During the 19th century the prefix myria-, derived from the Greek word μύριοι (mýrioi), was used as a multiplier for 10,000.[7]
+
+ When applying prefixes to derived units of area and volume that are expressed in terms of units of length squared or cubed, the square and cube operators are applied to the unit of length including the prefix, as illustrated below.[6]
+
+ 1 km2 = (1000 m)2 = 1,000,000 m2; 1 cm3 = (0.01 m)3 = 0.000001 m3
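+
+ A minimal Python sketch of this rule (the function names are hypothetical):
+
+ PREFIX = {"kilo": 1e3, "centi": 1e-2, "milli": 1e-3}
+
+ def area_in_m2(value, prefix):
+     """An area given in (prefix-metre)^2, converted to square metres."""
+     return value * PREFIX[prefix] ** 2
+
+ def volume_in_m3(value, prefix):
+     """A volume given in (prefix-metre)^3, converted to cubic metres."""
+     return value * PREFIX[prefix] ** 3
+
+ print(area_in_m2(1, "kilo"))     # 1 km^2 = 1,000,000 m^2
+ print(volume_in_m3(1, "centi"))  # 1 cm^3 -> ~1e-06 m^3 (floating point)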
+
+ Prefixes are not usually used to indicate multiples of a second greater than 1; the non-SI units of minute, hour and day are used instead. On the other hand, prefixes are used for multiples of the non-SI unit of volume, the litre (l, L), such as millilitres (ml).[6]
+
+ Each variant of the metric system has a degree of coherence—the derived units are directly related to the base units without the need for intermediate conversion factors.[8] For example, in a coherent system the units of force, energy and power are chosen so that the equations
+
+ force = mass × acceleration; energy = force × distance; power = energy / time
+
+ hold without the introduction of unit conversion factors. Once a set of coherent units has been defined, other relationships in physics that use those units will automatically be true. Therefore, Einstein's mass–energy equation, E = mc2, does not require extraneous constants when expressed in coherent units.[9]
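+
+ A numerical sketch of coherence in Python (the specific values are arbitrary; the point is that every conversion factor is exactly 1):
+
+ mass = 2.0                    # kg
+ acceleration = 3.0            # m/s^2
+ force = mass * acceleration   # newtons -- factor is exactly 1
+ energy = force * 5.0          # joules, for a 5 m displacement
+ power = energy / 10.0         # watts, spread over 10 s
+ print(force, energy, power)   # 6.0 30.0 3.0
+
+ # In a non-coherent system (e.g. with force in kilogram-force), the
+ # first line would need an empirical factor such as 1/9.80665.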
+
+ The CGS system had two units of energy, the erg that was related to mechanics and the calorie that was related to thermal energy; so only one of them (the erg) could bear a coherent relationship to the base units. Coherence was a design aim of SI, which resulted in only one unit of energy being defined – the joule.[10]
+
+ Maxwell's equations of electromagnetism contained a factor of 4π relating to steradians, representative of the fact that electric charges and magnetic fields may be considered to emanate from a point and propagate equally in all directions, i.e. spherically. This factor appeared awkwardly in many equations of physics dealing with electromagnetism, even where no spherical geometry was involved; rationalised systems of units were designed to remove it from such equations.
+
+ A number of different metric systems have been developed, all using the Mètre des Archives and Kilogramme des Archives (or their descendants) as their base units, but differing in the definitions of the various derived units.
+
+ In 1832, Gauss used the astronomical second as a base unit in defining the gravitation of the earth; together with the gram and the millimetre, this became the first system of mechanical units.
+
+ The centimetre–gram–second system of units (CGS) was the first coherent metric system, having been developed in the 1860s and promoted by Maxwell and Thomson. In 1874, this system was formally promoted by the British Association for the Advancement of Science (BAAS).[11] The system's characteristics are that density is expressed in g/cm3, force expressed in dynes and mechanical energy in ergs. Thermal energy was defined in calories, one calorie being the energy required to raise the temperature of one gram of water from 15.5 °C to 16.5 °C. The meeting also recognised two sets of units for electrical and magnetic properties – the electrostatic set of units and the electromagnetic set of units.[12]
+
+ Several systems of electrical units were defined following the discovery of Ohm's law in 1827.
+
+ The CGS units of electricity were cumbersome to work with. This was remedied at the 1893 International Electrical Congress held in Chicago by defining the "international" ampere and ohm using definitions based on the metre, kilogram and second.[13]
+
+ During the same period in which the CGS system was being extended to include electromagnetism, other systems were developed, distinguished by their choice of coherent base unit, including the Practical System of Electric Units, or QES (quad–eleventhgram–second) system.[14]:268[15]:17 Here, the base units are the quad, equal to 10^7 m (approximately a quadrant of the earth's circumference), the eleventhgram, equal to 10^−11 g, and the second. These were chosen so that the corresponding electrical units of potential difference, current and resistance had a convenient magnitude.
+
+ In 1901, Giovanni Giorgi showed that by adding an electrical unit as a fourth base unit, the various anomalies in electromagnetic systems could be resolved. The metre–kilogram–second–coulomb (MKSC) and metre–kilogram–second–ampere (MKSA) systems are examples of such systems.[16]
+
+ The International System of Units (Système international d'unités or SI) is the current international standard metric system and is also the system most widely used around the world. It is an extension of Giorgi's MKSA system – its base units are the metre, kilogram, second, ampere, kelvin, candela and mole.[10]
+ The MKS (metre–kilogram–second) system came into existence in 1889, when artefacts for the metre and kilogram were fabricated according to the Metre Convention. Early in the 20th century, an unspecified electrical unit was added, and the system was called MKSX. When it became apparent that the unit would be the ampere, the system was referred to as the MKSA system, and was the direct predecessor of the SI.
+
+ The metre–tonne–second system of units (MTS) was based on the metre, tonne and second – the unit of force was the sthène and the unit of pressure was the pièze. It was invented in France for industrial use and from 1933 to 1955 was used both in France and in the Soviet Union.[17][18]
+
+ Gravitational metric systems use the kilogram-force (kilopond) as a base unit of force, with mass measured in a unit known as the hyl, Technische Masseneinheit (TME), mug or metric slug.[19] Although the CGPM passed a resolution in 1901 defining the standard value of acceleration due to gravity to be 980.665 cm/s2, gravitational units are not part of the International System of Units (SI).[20]
+
+ The International System of Units is the modern metric system. It is based on the metre–kilogram–second–ampere (MKSA) system of units from early in the 20th century. It also includes numerous coherent derived units for common quantities like power (watt) and luminous flux (lumen). Electrical units were taken from the International system then in use. Other units like those for energy (joule) were modelled on those from the older CGS system, but scaled to be coherent with MKSA units. Two additional base units were introduced: the degree Kelvin, whose increment equals that of the degree Celsius, for thermodynamic temperature, and the candela, roughly equivalent to the international candle unit of illumination. Later, another base unit, the mole, a unit of amount of substance equivalent to Avogadro's number of specified particles, was added along with several other derived units.
+
+ The system was promulgated by the General Conference on Weights and Measures (French: Conférence générale des poids et mesures – CGPM) in 1960. At that time, the metre was redefined in terms of the wavelength of a spectral line of the krypton-86[Note 2] atom, and the standard metre artefact from 1889 was retired.
+
+ Today, the International system of units consists of 7 base units and innumerable coherent derived units including 22 with special names. The last new derived unit, the katal for catalytic activity, was added in 1999. Some of the base units are now realised in terms of invariant constants of physics. As a consequence, the speed of light has now become an exactly defined constant, and defines the metre as 1⁄299,792,458 of the distance light travels in a second. Until 2019, the kilogram was defined by a man-made artefact of deteriorating platinum-iridium. The range of decimal prefixes has been extended to those for 1024, yotta, and 10−24, yocto, which are unfamiliar because nothing in our everyday lives is that big or that small.
+
+ The International System of Units has been adopted as the official system of weights and measures by all nations in the world except for Myanmar, Liberia, and the United States, while the United States is the only industrialised country where the metric system is not the predominant system of units.[21]
en/5868.html.txt ADDED
@@ -0,0 +1,219 @@
+
+
+
+
+ The universe (Latin: universus) is all of space and time[a] and their contents,[10] including planets, stars, galaxies, and all other forms of matter and energy. While the spatial size of the entire universe is unknown,[3] it is possible to measure the size of the observable universe, which is currently estimated to be 93 billion light-years in diameter. In various multiverse hypotheses, a universe is one of many causally disconnected[11] constituent parts of a larger multiverse, which itself comprises all of space and time and its contents;[12] as a consequence, ‘the universe’ and ‘the multiverse’ are synonymous in such theories.
+
+ The earliest cosmological models of the universe were developed by ancient Greek and Indian philosophers and were geocentric, placing Earth at the center.[13][14] Over the centuries, more precise astronomical observations led Nicolaus Copernicus to develop the heliocentric model with the Sun at the center of the Solar System. In developing the law of universal gravitation, Isaac Newton built upon Copernicus' work as well as Johannes Kepler's laws of planetary motion and observations by Tycho Brahe.
+
+ Further observational improvements led to the realization that the Sun is one of hundreds of billions of stars in the Milky Way, which is one of at least two trillion galaxies in the universe. Many of the stars in our galaxy have planets. At the largest scale, galaxies are distributed uniformly and the same in all directions, meaning that the universe has neither an edge nor a center. At smaller scales, galaxies are distributed in clusters and superclusters which form immense filaments and voids in space, creating a vast foam-like structure.[15] Discoveries in the early 20th century have suggested that the universe had a beginning and that space has been expanding since then,[16] and is currently still expanding at an increasing rate.[17]
+
+ The Big Bang theory is the prevailing cosmological description of the development of the universe. According to this theory, space and time emerged together 13.799±0.021 billion years ago[2] and the energy and matter initially present have become less dense as the universe expanded. After an initial accelerated expansion called the inflationary epoch at around 10−32 seconds, and the separation of the four known fundamental forces, the universe gradually cooled and continued to expand, allowing the first subatomic particles and simple atoms to form. Dark matter gradually gathered, forming a foam-like structure of filaments and voids under the influence of gravity. Giant clouds of hydrogen and helium were gradually drawn to the places where dark matter was most dense, forming the first galaxies, stars, and everything else seen today. It is possible to see objects that are now further away than 13.799 billion light-years because space itself has expanded, and it is still expanding today. This means that objects which are now up to 46.5 billion light-years away can still be seen in their distant past, because in the past, when their light was emitted, they were much closer to Earth.
+
+ From studying the movement of galaxies, it has been discovered that the universe contains much more matter than is accounted for by visible objects: stars, galaxies, nebulas and interstellar gas. This unseen matter is known as dark matter[18] (dark means that there is a wide range of strong indirect evidence that it exists, but we have not yet detected it directly). The ΛCDM model is the most widely accepted model of our universe. It suggests that about 69.2%±1.2% [2015] of the mass and energy in the universe is a cosmological constant (or, in extensions to ΛCDM, other forms of dark energy, such as a scalar field) which is responsible for the current expansion of space, and about 25.8%±1.1% [2015] is dark matter.[19] Ordinary ('baryonic') matter is therefore only 4.84%±0.1% [2015] of the physical universe.[19] Stars, planets, and visible gas clouds only form about 6% of ordinary matter, or about 0.29% of the entire universe.[20]
+
+ There are many competing hypotheses about the ultimate fate of the universe and about what, if anything, preceded the Big Bang, while other physicists and philosophers refuse to speculate, doubting that information about prior states will ever be accessible. Some physicists have suggested various multiverse hypotheses, in which our universe might be one among many universes that likewise exist.[3][21][22]
+
+ The physical universe is defined as all of space and time[a] (collectively referred to as spacetime) and their contents.[10] Such contents comprise all of energy in its various forms, including electromagnetic radiation and matter, and therefore planets, moons, stars, galaxies, and the contents of intergalactic space.[23][24][25] The universe also includes the physical laws that influence energy and matter, such as conservation laws, classical mechanics, and relativity.[26]
+
+ The universe is often defined as "the totality of existence", or everything that exists, everything that has existed, and everything that will exist.[26] In fact, some philosophers and scientists support the inclusion of ideas and abstract concepts—such as mathematics and logic—in the definition of the universe.[28][29][30] The word universe may also refer to concepts such as the cosmos, the world, and nature.[31][32]
+
+ The word universe derives from the Old French word univers, which in turn derives from the Latin word universum.[33] The Latin word was used by Cicero and later Latin authors in many of the same senses as the modern English word is used.[34]
+
+ A term for "universe" among the ancient Greek philosophers from Pythagoras onwards was τὸ πᾶν, tò pân ("the all"), defined as all matter and all space, and τὸ ὅλον, tò hólon ("all things"), which did not necessarily include the void.[35][36] Another synonym was ὁ κόσμος, ho kósmos (meaning the world, the cosmos).[37] Synonyms are also found in Latin authors (totum, mundus, natura)[38] and survive in modern languages, e.g., the German words Das All, Weltall, and Natur for universe. The same synonyms are found in English, such as everything (as in the theory of everything), the cosmos (as in cosmology), the world (as in the many-worlds interpretation), and nature (as in natural laws or natural philosophy).[39]
+
+ The prevailing model for the evolution of the universe is the Big Bang theory.[40][41] The Big Bang model states that the earliest state of the universe was an extremely hot and dense one, and that the universe subsequently expanded and cooled. The model is based on general relativity and on simplifying assumptions such as homogeneity and isotropy of space. A version of the model with a cosmological constant (Lambda) and cold dark matter, known as the Lambda-CDM model, is the simplest model that provides a reasonably good account of various observations about the universe. The Big Bang model accounts for observations such as the correlation of distance and redshift of galaxies, the ratio of the number of hydrogen to helium atoms, and the microwave radiation background.
+
+ The initial hot, dense state is called the Planck epoch, a brief period extending from time zero to one Planck time unit of approximately 10−43 seconds. During the Planck epoch, all types of matter and all types of energy were concentrated into a dense state, and gravity—currently the weakest by far of the four known forces—is believed to have been as strong as the other fundamental forces, and all the forces may have been unified. Since the Planck epoch, space has been expanding to its present scale, with a very short but intense period of cosmic inflation believed to have occurred within the first 10−32 seconds.[42] This was a kind of expansion different from those we can see around us today. Objects in space did not physically move; instead the metric that defines space itself changed. Although objects in spacetime cannot move faster than the speed of light, this limitation does not apply to the metric governing spacetime itself. This initial period of inflation is believed to explain why space appears to be very flat, and why the observable universe is much larger than the distance light could have travelled since the start of the universe.
+
+ Within the first fraction of a second of the universe's existence, the four fundamental forces had separated. As the universe continued to cool down from its inconceivably hot state, various types of subatomic particles were able to form in short periods of time known as the quark epoch, the hadron epoch, and the lepton epoch. Together, these epochs encompassed less than 10 seconds of time following the Big Bang. These elementary particles associated stably into ever larger combinations, including stable protons and neutrons, which then formed more complex atomic nuclei through nuclear fusion. This process, known as Big Bang nucleosynthesis, only lasted for about 17 minutes and ended about 20 minutes after the Big Bang, so only the fastest and simplest reactions occurred. About 25% of the protons and all the neutrons in the universe, by mass, were converted to helium, with small amounts of deuterium (a form of hydrogen) and traces of lithium. Any other element was only formed in very tiny quantities. The other 75% of the protons remained unaffected, as hydrogen nuclei.
+
+ After nucleosynthesis ended, the universe entered a period known as the photon epoch. During this period, the universe was still far too hot for matter to form neutral atoms, so it contained a hot, dense, foggy plasma of negatively charged electrons, neutral neutrinos and positive nuclei. After about 377,000 years, the universe had cooled enough that electrons and nuclei could form the first stable atoms. This is known as recombination for historical reasons; in fact electrons and nuclei were combining for the first time. Unlike plasma, neutral atoms are transparent to many wavelengths of light, so for the first time the universe also became transparent. The photons released ("decoupled") when these atoms formed can still be seen today; they form the cosmic microwave background (CMB).
+
+ As the universe expands, the energy density of electromagnetic radiation decreases more quickly than does that of matter because the energy of a photon decreases with its wavelength. At around 47,000 years, the energy density of matter became larger than that of photons and neutrinos, and began to dominate the large scale behavior of the universe. This marked the end of the radiation-dominated era and the start of the matter-dominated era.
+
+ In the earliest stages of the universe, tiny fluctuations within the universe's density led to concentrations of dark matter gradually forming. Ordinary matter, attracted to these by gravity, formed large gas clouds and eventually, stars and galaxies, where the dark matter was most dense, and voids where it was least dense. After around 100–300 million years,[citation needed] the first stars formed, known as Population III stars. These were probably very massive, luminous, non-metallic and short-lived. They were responsible for the gradual reionization of the universe between about 200–500 million years and 1 billion years after the Big Bang, and also for seeding the universe with elements heavier than helium, through stellar nucleosynthesis.[43] The universe also contains a mysterious energy—possibly a scalar field—called dark energy, the density of which does not change over time. After about 9.8 billion years, the universe had expanded sufficiently so that the density of matter was less than the density of dark energy, marking the beginning of the present dark-energy-dominated era.[44] In this era, the expansion of the universe is accelerating due to dark energy.
+
+ Of the four fundamental interactions, gravitation is dominant at astronomical length scales. Gravity's effects are cumulative; by contrast, the effects of positive and negative charges tend to cancel one another, making electromagnetism relatively insignificant on astronomical length scales. The remaining two interactions, the weak and strong nuclear forces, decline very rapidly with distance; their effects are confined mainly to sub-atomic length scales.
+
+ The universe appears to have much more matter than antimatter, an asymmetry possibly related to CP violation.[45] This imbalance between matter and antimatter is partially responsible for the existence of all matter today, since matter and antimatter, if equally produced at the Big Bang, would have completely annihilated each other and left only photons as a result of their interaction.[46][47] The universe also appears to have neither net momentum nor angular momentum, which follows accepted physical laws if the universe is finite. These laws are Gauss's law and the non-divergence of the stress-energy-momentum pseudotensor.[48]
+
+ This diagram shows Earth's location in the universe on increasingly larger scales. The images, labeled along their left edge, increase in size from left to right, then from top to bottom.
+
+ The size of the universe is somewhat difficult to define. According to the general theory of relativity, far regions of space may never interact with ours even in the lifetime of the universe due to the finite speed of light and the ongoing expansion of space. For example, radio messages sent from Earth may never reach some regions of space, even if the universe were to exist forever: space may expand faster than light can traverse it.[49]
+
+ Distant regions of space are assumed to exist and to be part of reality as much as we are, even though we can never interact with them. The spatial region that we can affect and be affected by is the observable universe. The observable universe depends on the location of the observer. By traveling, an observer can come into contact with a greater region of spacetime than an observer who remains still. Nevertheless, even the most rapid traveler will not be able to interact with all of space. Typically, the observable universe is taken to mean the portion of the universe that is observable from our vantage point in the Milky Way.
+
+ The proper distance—the distance as would be measured at a specific time, including the present—between Earth and the edge of the observable universe is 46 billion light-years[50] (14 billion parsecs),[51] making the diameter of the observable universe about 93 billion light-years (28 billion parsecs).[50] The distance the light from the edge of the observable universe has travelled is very close to the age of the universe times the speed of light, 13.8 billion light-years (4.2×10^9 pc), but this does not represent the distance at any given time because the edge of the observable universe and the Earth have since moved further apart.[52] For comparison, the diameter of a typical galaxy is 30,000 light-years (9,198 parsecs), and the typical distance between two neighboring galaxies is 3 million light-years (919.8 kiloparsecs).[53] As an example, the Milky Way is roughly 100,000–180,000 light-years in diameter,[54][55] and the nearest sister galaxy to the Milky Way, the Andromeda Galaxy, is located roughly 2.5 million light-years away.[56]
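+
+ The parsec figures quoted above can be checked against the light-year figures with the standard conversion, sketched here in Python (1 pc ≈ 3.26156 ly is a textbook value, not taken from this article's references):
+
+ LY_PER_PC = 3.26156
+
+ def ly_to_pc(light_years):
+     """Convert a distance in light-years to parsecs."""
+     return light_years / LY_PER_PC
+
+ print(ly_to_pc(93e9))    # ~2.85e10 -> about 28 billion parsecs
+ print(ly_to_pc(2.5e6))   # Andromeda: ~7.7e5 pc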
+
+ Because we cannot observe space beyond the edge of the observable universe, it is unknown whether the size of the universe in its totality is finite or infinite.[3][57][58] Estimates suggest that the whole universe, if finite, must be more than 250 times larger than the observable universe.[59] Some disputed[60] estimates for the total size of the universe, if finite, reach as high as {\displaystyle 10^{10^{10^{122}}}} megaparsecs, as implied by a suggested resolution of the No-Boundary Proposal.[61][b]
+
+ Astronomers calculate the age of the universe by assuming that the Lambda-CDM model accurately describes the evolution of the Universe from a very uniform, hot, dense primordial state to its present state and measuring the cosmological parameters which constitute the model.[citation needed] This model is well understood theoretically and supported by recent high-precision astronomical observations such as WMAP and Planck.[citation needed] Commonly, the set of observations fitted includes the cosmic microwave background anisotropy, the brightness/redshift relation for Type Ia supernovae, and large-scale galaxy clustering including the baryon acoustic oscillation feature.[citation needed] Other observations, such as the Hubble constant, the abundance of galaxy clusters, weak gravitational lensing and globular cluster ages, are generally consistent with these, providing a check of the model, but are less accurately measured at present.[citation needed] Assuming that the Lambda-CDM model is correct, the measurements of the parameters using a variety of techniques by numerous experiments yield a best value of the age of the universe as of 2015 of 13.799 ± 0.021 billion years.[2]
+
+ Over time, the universe and its contents have evolved; for example, the relative population of quasars and galaxies has changed[62] and space itself has expanded. Due to this expansion, scientists on Earth can observe the light from a galaxy 30 billion light-years away even though that light has traveled for only 13 billion years; the very space between them has expanded. This expansion is consistent with the observation that the light from distant galaxies has been redshifted; the photons emitted have been stretched to longer wavelengths and lower frequency during their journey. Analyses of Type Ia supernovae indicate that the spatial expansion is accelerating.[63][64]
+
+ The more matter there is in the universe, the stronger the mutual gravitational pull of the matter. If the universe were too dense then it would re-collapse into a gravitational singularity. However, if the universe contained too little matter then the self-gravity would be too weak for astronomical structures, like galaxies or planets, to form. Since the Big Bang, the universe has expanded monotonically. Perhaps unsurprisingly, our universe has just the right mass-energy density, equivalent to about 5 protons per cubic metre, which has allowed it to expand for the last 13.8 billion years, giving time to form the universe as observed today.[65]
+
+ There are dynamical forces acting on the particles in the universe which affect the expansion rate. Before 1998, it was expected that the expansion rate would be decreasing as time went on due to the influence of gravitational interactions in the universe; and thus there is an additional observable quantity in the universe called the deceleration parameter, which most cosmologists expected to be positive and related to the matter density of the universe. In 1998, the deceleration parameter was measured by two different groups to be negative, approximately −0.55, which technically implies that the second derivative of the cosmic scale factor {\displaystyle {\ddot {a}}} has been positive in the last 5–6 billion years.[17][66] This acceleration does not, however, imply that the Hubble parameter is currently increasing; see deceleration parameter for details.
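+
+ For clarity, the standard definition being relied on here (supplied for reference, not quoted from the cited source) is the deceleration parameter q, built from the scale factor a(t):
+
+ {\displaystyle q=-{\frac {{\ddot {a}}a}{{\dot {a}}^{2}}}}
+
+ A negative measured value of q therefore directly implies a positive second derivative of the scale factor, even while the Hubble parameter, the ratio of the first derivative to the scale factor itself, continues to decrease.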
+
+ Spacetimes are the arenas in which all physical events take place. The basic elements of spacetimes are events. In any given spacetime, an event is defined as a unique position at a unique time. A spacetime is the union of all events (in the same way that a line is the union of all of its points), formally organized into a manifold.[67]
+
+ The universe appears to be a smooth spacetime continuum consisting of three spatial dimensions and one temporal (time) dimension (an event in the spacetime of the physical universe can therefore be identified by a set of four coordinates: (x, y, z, t) ). On the average, space is observed to be very nearly flat (with a curvature close to zero), meaning that Euclidean geometry is empirically true with high accuracy throughout most of the Universe.[68] Spacetime also appears to have a simply connected topology, in analogy with a sphere, at least on the length-scale of the observable universe. However, present observations cannot exclude the possibilities that the universe has more dimensions (which is postulated by theories such as the string theory) and that its spacetime may have a multiply connected global topology, in analogy with the cylindrical or toroidal topologies of two-dimensional spaces.[69][70] The spacetime of the universe is usually interpreted from a Euclidean perspective, with space as consisting of three dimensions, and time as consisting of one dimension, the "fourth dimension".[71] By combining space and time into a single manifold called Minkowski space, physicists have simplified a large number of physical theories, as well as described in a more uniform way the workings of the universe at both the supergalactic and subatomic levels.
+
+ Spacetime events are not absolutely defined spatially and temporally but rather are known to be relative to the motion of an observer. Minkowski space approximates the universe without gravity; the pseudo-Riemannian manifolds of general relativity describe spacetime with matter and gravity.
+
+ General relativity describes how spacetime is curved and bent by mass and energy (gravity). The topology or geometry of the universe includes both local geometry in the observable universe and global geometry. Cosmologists often work with a given space-like slice of spacetime called the comoving coordinates. The section of spacetime which can be observed is the backward light cone, which delimits the cosmological horizon. The cosmological horizon (also called the particle horizon or the light horizon) is the maximum distance from which particles can have traveled to the observer in the age of the universe. This horizon represents the boundary between the observable and the unobservable regions of the universe.[72][73] The existence, properties, and significance of a cosmological horizon depend on the particular cosmological model.
+
+ An important parameter determining the future evolution of the universe is the density parameter, Omega (Ω), defined as the average matter density of the universe divided by a critical value of that density. This selects one of three possible geometries depending on whether Ω is equal to, less than, or greater than 1. These are called, respectively, the flat, open and closed universes.[74]
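+
+ In standard notation (the textbook definition, supplied here for clarity rather than quoted from the cited source), the density parameter compares the mean density ρ with the critical density ρc determined by the Hubble parameter H and the gravitational constant G:
+
+ {\displaystyle \Omega ={\frac {\rho }{\rho _{c}}},\qquad \rho _{c}={\frac {3H^{2}}{8\pi G}}}
+
+ Ω = 1 then corresponds to the flat geometry, Ω < 1 to the open geometry, and Ω > 1 to the closed geometry.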
+
+ Observations, including the Cosmic Background Explorer (COBE), Wilkinson Microwave Anisotropy Probe (WMAP), and Planck maps of the CMB, suggest that the universe is infinite in extent with a finite age, as described by the Friedmann–Lemaître–Robertson–Walker (FLRW) models.[75][69][76][77] These FLRW models thus support inflationary models and the standard model of cosmology, describing a flat, homogeneous universe presently dominated by dark matter and dark energy.[78][79]
+
+ The universe may be fine-tuned; the Fine-tuned universe hypothesis is the proposition that the conditions that allow the existence of observable life in the universe can only occur when certain universal fundamental physical constants lie within a very narrow range of values, so that if any of several fundamental constants were only slightly different, the universe would have been unlikely to be conducive to the establishment and development of matter, astronomical structures, elemental diversity, or life as it is understood.[80] The proposition is discussed among philosophers, scientists, theologians, and proponents of creationism.
+
+ The universe is composed almost completely of dark energy, dark matter, and ordinary matter. Other contents are electromagnetic radiation (estimated to constitute from 0.005% to close to 0.01% of the total mass-energy of the universe) and antimatter.[81][82][83]
+
+ The proportions of all types of matter and energy have changed over the history of the universe.[84] The total amount of electromagnetic radiation generated within the universe has decreased by 1/2 in the past 2 billion years.[85][86] Today, ordinary matter, which includes atoms, stars, galaxies, and life, accounts for only 4.9% of the contents of the Universe.[8] The present overall density of this type of matter is very low, roughly 4.5 × 10−31 grams per cubic centimetre, corresponding to a density of the order of only one proton for every four cubic metres of volume.[6] The nature of both dark energy and dark matter is unknown. Dark matter, a mysterious form of matter that has not yet been identified, accounts for 26.8% of the cosmic contents. Dark energy, which is the energy of empty space and is causing the expansion of the universe to accelerate, accounts for the remaining 68.3% of the contents.[8][87][88]
+
+ Matter, dark matter, and dark energy are distributed homogeneously throughout the universe over length scales longer than 300 million light-years or so.[89] However, over shorter length-scales, matter tends to clump hierarchically; many atoms are condensed into stars, most stars into galaxies, most galaxies into clusters, superclusters and, finally, large-scale galactic filaments. The observable universe contains more than 2 trillion (1012) galaxies[90] and, overall, as many as an estimated 1×1024 stars[91][92] (more stars than all the grains of sand on planet Earth).[93] Typical galaxies range from dwarfs with as few as ten million[94] (107) stars up to giants with one trillion[95] (1012) stars. Between the larger structures are voids, which are typically 10–150 Mpc (33 million–490 million ly) in diameter. The Milky Way is in the Local Group of galaxies, which in turn is in the Laniakea Supercluster.[96] This supercluster spans over 500 million light-years, while the Local Group spans over 10 million light-years.[97] The Universe also has vast regions of relative emptiness; the largest known void measures 1.8 billion ly (550 Mpc) across.[98]
+
+ The observable universe is isotropic on scales significantly larger than superclusters, meaning that the statistical properties of the universe are the same in all directions as observed from Earth. The universe is bathed in highly isotropic microwave radiation that corresponds to a thermal equilibrium blackbody spectrum of roughly 2.72548 kelvins.[7] The hypothesis that the large-scale universe is homogeneous and isotropic is known as the cosmological principle.[100] A universe that is both homogeneous and isotropic looks the same from all vantage points[101] and has no center.[102]
+
+ An explanation for why the expansion of the universe is accelerating remains elusive. It is often attributed to "dark energy", an unknown form of energy that is hypothesized to permeate space.[103] On a mass–energy equivalence basis, the density of dark energy (~ 7 × 10−30 g/cm3) is much less than the density of ordinary matter or dark matter within galaxies. However, in the present dark-energy era, it dominates the mass–energy of the universe because it is uniform across space.[104][105]
+
+ Two proposed forms for dark energy are the cosmological constant, a constant energy density filling space homogeneously,[106] and scalar fields such as quintessence or moduli, dynamic quantities whose energy density can vary in time and space. Contributions from scalar fields that are constant in space are usually also included in the cosmological constant. The cosmological constant can be formulated to be equivalent to vacuum energy. Scalar fields having only a slight amount of spatial inhomogeneity would be difficult to distinguish from a cosmological constant.
+
+ Dark matter is a hypothetical kind of matter that is invisible to the entire electromagnetic spectrum, but which accounts for most of the matter in the universe. The existence and properties of dark matter are inferred from its gravitational effects on visible matter, radiation, and the large-scale structure of the universe. Other than neutrinos, a form of hot dark matter, dark matter has not been detected directly, making it one of the greatest mysteries in modern astrophysics. Dark matter neither emits nor absorbs light or any other electromagnetic radiation at any significant level. Dark matter is estimated to constitute 26.8% of the total mass–energy and 84.5% of the total matter in the universe.[87][107]
+
+ The remaining 4.9% of the mass–energy of the universe is ordinary matter, that is, atoms, ions, electrons and the objects they form. This matter includes stars, which produce nearly all of the light we see from galaxies, as well as interstellar gas in the interstellar and intergalactic media, planets, and all the objects from everyday life that we can bump into, touch or squeeze.[108] As a matter of fact, the great majority of ordinary matter in the universe is unseen, since visible stars and gas inside galaxies and clusters account for less than 10 per cent of the ordinary matter contribution to the mass-energy density of the universe.[109]
+
+ Ordinary matter commonly exists in four states (or phases): solid, liquid, gas, and plasma. However, advances in experimental techniques have revealed other previously theoretical phases, such as Bose–Einstein condensates and fermionic condensates.
+
+ Ordinary matter is composed of two types of elementary particles: quarks and leptons.[110] For example, the proton is formed of two up quarks and one down quark; the neutron is formed of two down quarks and one up quark; and the electron is a kind of lepton. An atom consists of an atomic nucleus, made up of protons and neutrons, and electrons that orbit the nucleus. Because most of the mass of an atom is concentrated in its nucleus, which is made up of baryons, astronomers often use the term baryonic matter to describe ordinary matter, although a small fraction of this "baryonic matter" is electrons.
+
+ Soon after the Big Bang, primordial protons and neutrons formed from the quark–gluon plasma of the early universe as it cooled below two trillion degrees. A few minutes later, in a process known as Big Bang nucleosynthesis, nuclei formed from the primordial protons and neutrons. This nucleosynthesis formed lighter elements, those with small atomic numbers up to lithium and beryllium, but the abundance of heavier elements dropped off sharply with increasing atomic number. Some boron may have been formed at this time, but the next heavier element, carbon, was not formed in significant amounts. Big Bang nucleosynthesis shut down after about 20 minutes due to the rapid drop in temperature and density of the expanding universe. Subsequent formation of heavier elements resulted from stellar nucleosynthesis and supernova nucleosynthesis.[111]
+
+ Ordinary matter and the forces that act on matter can be described in terms of elementary particles.[112] These particles are sometimes described as being fundamental, since they have an unknown substructure, and it is unknown whether or not they are composed of smaller and even more fundamental particles.[113][114] Of central importance is the Standard Model, a theory that is concerned with electromagnetic interactions and the weak and strong nuclear interactions.[115] The Standard Model is supported by the experimental confirmation of the existence of particles that compose matter: quarks and leptons, and their corresponding "antimatter" duals, as well as the force particles that mediate interactions: the photon, the W and Z bosons, and the gluon.[113] The Standard Model predicted the existence of the recently discovered Higgs boson, a particle that is a manifestation of a field within the universe that can endow particles with mass.[116][117] Because of its success in explaining a wide variety of experimental results, the Standard Model is sometimes regarded as a "theory of almost everything".[115] The Standard Model does not, however, accommodate gravity. A true force-particle "theory of everything" has not been attained.[118]
+
+ A hadron is a composite particle made of quarks held together by the strong force. Hadrons are categorized into two families: baryons (such as protons and neutrons) made of three quarks, and mesons (such as pions) made of one quark and one antiquark. Of the hadrons, protons are stable, and neutrons bound within atomic nuclei are stable. Other hadrons are unstable under ordinary conditions and are thus insignificant constituents of the modern universe. From approximately 10−6 seconds after the Big Bang, during a period known as the hadron epoch, the temperature of the universe had fallen sufficiently to allow quarks to bind together into hadrons, and the mass of the universe was dominated by hadrons. Initially the temperature was high enough to allow the formation of hadron/anti-hadron pairs, which kept matter and antimatter in thermal equilibrium. However, as the temperature of the universe continued to fall, hadron/anti-hadron pairs were no longer produced. Most of the hadrons and anti-hadrons were then eliminated in particle-antiparticle annihilation reactions, leaving a small residual of hadrons by the time the universe was about one second old.[119]:244–66
+
+ A lepton is an elementary, half-integer spin particle that does not undergo strong interactions but is subject to the Pauli exclusion principle; no two leptons of the same species can be in exactly the same state at the same time.[120] Two main classes of leptons exist: charged leptons (also known as the electron-like leptons), and neutral leptons (better known as neutrinos). Electrons are stable and the most common charged lepton in the universe, whereas muons and taus are unstable particles that quickly decay after being produced in high energy collisions, such as those involving cosmic rays or carried out in particle accelerators.[121][122] Charged leptons can combine with other particles to form various composite particles such as atoms and positronium. The electron governs nearly all of chemistry, as it is found in atoms and is directly tied to all chemical properties. Neutrinos rarely interact with anything, and are consequently rarely observed. Neutrinos stream throughout the universe but rarely interact with normal matter.[123]
+
+ The lepton epoch was the period in the evolution of the early universe in which the leptons dominated the mass of the universe. It started roughly 1 second after the Big Bang, after the majority of hadrons and anti-hadrons annihilated each other at the end of the hadron epoch. During the lepton epoch the temperature of the universe was still high enough to create lepton/anti-lepton pairs, so leptons and anti-leptons were in thermal equilibrium. Approximately 10 seconds after the Big Bang, the temperature of the universe had fallen to the point where lepton/anti-lepton pairs were no longer created.[124] Most leptons and anti-leptons were then eliminated in annihilation reactions, leaving a small residue of leptons. The mass of the universe was then dominated by photons as it entered the following photon epoch.[125][126]
+
+ A photon is the quantum of light and all other forms of electromagnetic radiation. It is the force carrier for the electromagnetic force, even when static via virtual photons. The effects of this force are easily observable at the microscopic and at the macroscopic level because the photon has zero rest mass; this allows long distance interactions. Like all elementary particles, photons are currently best explained by quantum mechanics and exhibit wave–particle duality, exhibiting properties of waves and of particles.
+
+ The photon epoch started after most leptons and anti-leptons were annihilated at the end of the lepton epoch, about 10 seconds after the Big Bang. Atomic nuclei were created in the process of nucleosynthesis which occurred during the first few minutes of the photon epoch. For the remainder of the photon epoch the universe contained a hot dense plasma of nuclei, electrons and photons. About 380,000 years after the Big Bang, the temperature of the Universe fell to the point where nuclei could combine with electrons to create neutral atoms. As a result, photons no longer interacted frequently with matter and the universe became transparent. The highly redshifted photons from this period form the cosmic microwave background. Tiny variations in temperature and density detectable in the CMB were the early "seeds" from which all subsequent structure formation took place.[119]:244–66
+
+ General relativity is the geometric theory of gravitation published by Albert Einstein in 1915 and the current description of gravitation in modern physics. It is the basis of current cosmological models of the universe. General relativity generalizes special relativity and Newton's law of universal gravitation, providing a unified description of gravity as a geometric property of space and time, or spacetime. In particular, the curvature of spacetime is directly related to the energy and momentum of whatever matter and radiation are present. The relation is specified by the Einstein field equations, a system of partial differential equations. In general relativity, the distribution of matter and energy determines the geometry of spacetime, which in turn describes the acceleration of matter. Therefore, solutions of the Einstein field equations describe the evolution of the universe. Combined with measurements of the amount, type, and distribution of matter in the universe, the equations of general relativity describe the evolution of the universe over time.[127]
+
+ With the assumption of the cosmological principle that the universe is homogeneous and isotropic everywhere, a specific solution of the field equations that describes the universe is the metric tensor called the Friedmann–Lemaître–Robertson–Walker metric,
+
+ {\displaystyle ds^{2}=-c^{2}dt^{2}+R(t)^{2}\left({\frac {dr^{2}}{1-kr^{2}}}+r^{2}d\theta ^{2}+r^{2}\sin ^{2}\theta \,d\phi ^{2}\right)}
+
+ where (r, θ, φ) correspond to a spherical coordinate system. This metric has only two undetermined parameters. An overall dimensionless length scale factor R describes the size scale of the universe as a function of time; an increase in R is the expansion of the universe.[128] A curvature index k describes the geometry. The index k is defined so that it can take only one of three values: 0, corresponding to flat Euclidean geometry; 1, corresponding to a space of positive curvature; or −1, corresponding to a space of negative curvature.[129] The value of R as a function of time t depends upon k and the cosmological constant Λ.[127] The cosmological constant represents the energy density of the vacuum of space and could be related to dark energy.[88] The equation describing how R varies with time is known as the Friedmann equation after its inventor, Alexander Friedmann.[130]
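+
+ The Friedmann equation mentioned at the end of this paragraph can be written, in the article's notation, as follows (the standard form, supplied for clarity with mass–energy density ρ and gravitational constant G, rather than quoted from the cited source):
+
+ {\displaystyle \left({\frac {\dot {R}}{R}}\right)^{2}={\frac {8\pi G\rho }{3}}-{\frac {kc^{2}}{R^{2}}}+{\frac {\Lambda c^{2}}{3}}}
+
+ The three values of k listed above enter through the middle term, and the cosmological constant Λ through the last.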
+
+ The solutions for R(t) depend on k and Λ, but some qualitative features of such solutions are general. First and most importantly, the length scale R of the universe can remain constant only if the universe is perfectly isotropic with positive curvature (k=1) and has one precise value of density everywhere, as first noted by Albert Einstein.[127] However, this equilibrium is unstable: because the universe is known to be inhomogeneous on smaller scales, R must change over time. When R changes, all the spatial distances in the universe change in tandem; there is an overall expansion or contraction of space itself. This accounts for the observation that galaxies appear to be flying apart; the space between them is stretching. The stretching of space also accounts for the apparent paradox that two galaxies can be 40 billion light-years apart, although they started from the same point 13.8 billion years ago[131] and never moved faster than the speed of light.
+
+ Second, all solutions suggest that there was a gravitational singularity in the past, when R went to zero and matter and energy were infinitely dense. It may seem that this conclusion is uncertain because it is based on the questionable assumptions of perfect homogeneity and isotropy (the cosmological principle) and that only the gravitational interaction is significant. However, the Penrose–Hawking singularity theorems show that a singularity should exist for very general conditions. Hence, according to Einstein's field equations, R grew rapidly from an unimaginably hot, dense state that existed immediately following this singularity (when R had a small, finite value); this is the essence of the Big Bang model of the universe. Understanding the singularity of the Big Bang likely requires a quantum theory of gravity, which has not yet been formulated.[132]
+
+ Third, the curvature index k determines the sign of the mean spatial curvature of spacetime[129] averaged over sufficiently large length scales (greater than about a billion light-years). If k=1, the curvature is positive and the universe has a finite volume.[133] A universe with positive curvature is often visualized as a three-dimensional sphere embedded in a four-dimensional space. Conversely, if k is zero or negative, the universe has an infinite volume.[133] It may seem counter-intuitive that an infinite and yet infinitely dense universe could be created in a single instant at the Big Bang when R=0, but exactly that is predicted mathematically when k does not equal 1. By analogy, an infinite plane has zero curvature but infinite area, whereas an infinite cylinder is finite in one direction and a torus is finite in both. A toroidal universe could behave like a normal universe with periodic boundary conditions.
+
+ The ultimate fate of the universe is still unknown, because it depends critically on the curvature index k and the cosmological constant Λ. If the universe were sufficiently dense, k would equal +1, meaning that its average curvature throughout is positive and the universe will eventually recollapse in a Big Crunch,[134] possibly starting a new universe in a Big Bounce. Conversely, if the universe were insufficiently dense, k would equal 0 or −1 and the universe would expand forever, cooling off and eventually reaching the Big Freeze and the heat death of the universe.[127] Modern data suggests that the rate of expansion of the universe is not decreasing, as originally expected, but increasing; if this continues indefinitely, the universe may eventually reach a Big Rip. Observationally, the universe appears to be flat (k = 0), with an overall density that is very close to the critical value between recollapse and eternal expansion.[135]
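+
+ The dependence of the fate of the expansion on density can be illustrated numerically. The Python sketch below is an added toy model, not from the article: it integrates the matter-only deceleration equation with Λ = 0 in units where the present Hubble rate H0 = 1, so time is measured in Hubble times. A density parameter omega above 1 plays the role of k = +1 and recollapses, while omega ≤ 1 corresponds to k = 0 or −1 and expands forever:
+
+ def scale_factor_history(omega, t_end=20.0, dt=1e-4):
+     # a(now) = 1 and H0 = 1 fix the initial expansion rate a_dot = 1.
+     a, a_dot, t, a_max = 1.0, 1.0, 0.0, 1.0
+     while t < t_end and a > 1e-3:
+         a_ddot = -0.5 * omega / a**2  # matter-only deceleration
+         a_dot += a_ddot * dt
+         a += a_dot * dt
+         a_max = max(a_max, a)
+         t += dt
+     return a, a_max
+
+ for omega in (0.3, 1.0, 3.0):
+     a_final, a_max = scale_factor_history(omega)
+     fate = "recollapses" if a_final < a_max else "still expanding"
+     print(f"omega = {omega}: {fate} (a_max = {a_max:.2f}) after 20 Hubble times")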
+
+ Some speculative theories have proposed that our universe is but one of a set of disconnected universes, collectively denoted as the multiverse, challenging or enhancing more limited definitions of the universe.[21][136] Scientific multiverse models are distinct from concepts such as alternate planes of consciousness and simulated reality.
+
+ Max Tegmark developed a four-part classification scheme for the different types of multiverses that scientists have suggested in response to various physics problems. An example of such multiverses is the one resulting from the chaotic inflation model of the early universe.[137] Another is the multiverse resulting from the many-worlds interpretation of quantum mechanics. In this interpretation, parallel worlds are generated in a manner similar to quantum superposition and decoherence, with all states of the wave functions being realized in separate worlds. Effectively, in the many-worlds interpretation the multiverse evolves as a universal wavefunction. If the Big Bang that created our multiverse created an ensemble of multiverses, the wave function of the ensemble would be entangled in this sense.[138]
+
+ The least controversial, but still highly disputed, category of multiverse in Tegmark's scheme is Level I. The multiverses of this level are composed of distant spacetime events "in our own universe". Tegmark and others[139] have argued that, if space is infinite, or sufficiently large and uniform, identical instances of the history of Earth's entire Hubble volume occur every so often, simply by chance. Tegmark calculated that our nearest so-called doppelgänger is 10^(10^115) metres away from us (a double exponential function larger than a googolplex).[140][141] However, the arguments used are speculative in nature.[142] Additionally, it would be impossible to scientifically verify the existence of an identical Hubble volume.
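+
+ The comparison with a googolplex (10 to the power 10^100) can be made precise with a line of exponent arithmetic:
+
+ \[ 10^{10^{115}} = 10^{10^{100}\cdot 10^{15}} = \left(10^{10^{100}}\right)^{10^{15}}, \]
+
+ so the estimated distance in metres is not merely larger than a googolplex, but a googolplex raised to the power of a quadrillion.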
+
+ It is possible to conceive of disconnected spacetimes, each existing but unable to interact with one another.[140][143] An easily visualized metaphor of this concept is a group of separate soap bubbles, in which observers living on one soap bubble cannot interact with those on other soap bubbles, even in principle.[144] According to one common terminology, each "soap bubble" of spacetime is denoted as a universe, whereas our particular spacetime is denoted as the universe,[21] just as we call our moon the Moon. The entire collection of these separate spacetimes is denoted as the multiverse.[21] With this terminology, different universes are not causally connected to each other.[21] In principle, the other unconnected universes may have different dimensionalities and topologies of spacetime, different forms of matter and energy, and different physical laws and physical constants, although such possibilities are purely speculative.[21] Others consider each of several bubbles created as part of chaotic inflation to be separate universes, though in this model these universes all share a causal origin.[21]
+
+ Historically, there have been many ideas of the cosmos (cosmologies) and its origin (cosmogonies). Theories of an impersonal universe governed by physical laws were first proposed by the Greeks and Indians.[14] Ancient Chinese philosophy encompassed the notion of the universe including both all of space and all of time.[145] Over the centuries, improvements in astronomical observations and theories of motion and gravitation led to ever more accurate descriptions of the universe. The modern era of cosmology began with Albert Einstein's 1915 general theory of relativity, which made it possible to quantitatively predict the origin, evolution, and conclusion of the universe as a whole. Most modern, accepted theories of cosmology are based on general relativity and, more specifically, the predicted Big Bang.[146]
+
+ Many cultures have stories describing the origin of the world and universe. Cultures generally regard these stories as having some truth. There are, however, many differing beliefs about how these stories apply among those who believe in a supernatural origin, ranging from a god directly creating the universe as it is now to a god merely setting the "wheels in motion" (for example via mechanisms such as the Big Bang and evolution).[147]
+
+ Ethnologists and anthropologists who study myths have developed various classification schemes for the various themes that appear in creation stories.[148][149] For example, in one type of story, the world is born from a world egg; such stories include the Finnish epic poem Kalevala, the Chinese story of Pangu or the Indian Brahmanda Purana. In related stories, the universe is created by a single entity emanating or producing something by him- or herself, as in the Tibetan Buddhism concept of Adi-Buddha, the ancient Greek story of Gaia (Mother Earth), the Aztec goddess Coatlicue myth, the ancient Egyptian god Atum story, and the Judeo-Christian Genesis creation narrative in which the Abrahamic God created the universe. In another type of story, the universe is created from the union of male and female deities, as in the Maori story of Rangi and Papa. In other stories, the universe is created by crafting it from pre-existing materials, such as the corpse of a dead god—as from Tiamat in the Babylonian epic Enuma Elish or from the giant Ymir in Norse mythology—or from chaotic materials, as in Izanagi and Izanami in Japanese mythology. In other stories, the universe emanates from fundamental principles, such as Brahman and Prakrti, the creation myth of the Serers,[150] or the yin and yang of the Tao.
+
+ The pre-Socratic Greek philosophers and Indian philosophers developed some of the earliest philosophical concepts of the universe.[14][151] The earliest Greek philosophers noted that appearances can be deceiving, and sought to understand the underlying reality behind the appearances. In particular, they noted the ability of matter to change forms (e.g., ice to water to steam) and several philosophers proposed that all the physical materials in the world are different forms of a single primordial material, or arche. The first to do so was Thales, who proposed this material to be water. Thales' student, Anaximander, proposed that everything came from the limitless apeiron. Anaximenes proposed the primordial material to be air on account of its perceived attractive and repulsive qualities that cause the arche to condense or dissociate into different forms. Anaxagoras proposed the principle of Nous (Mind), while Heraclitus proposed fire (and spoke of logos). Empedocles proposed the elements to be earth, water, air and fire. His four-element model became very popular. Like Pythagoras, Plato believed that all things were composed of number, with Empedocles' elements taking the form of the Platonic solids. Leucippus and his student Democritus, and later philosophers—most notably Epicurus—proposed that the universe is composed of indivisible atoms moving through a void (vacuum), although Aristotle did not believe that to be feasible because air, like water, offers resistance to motion. Air will immediately rush in to fill a void, and moreover, without resistance, it would do so indefinitely fast.[14]
+
+ Although Heraclitus argued for eternal change, his contemporary Parmenides made the radical suggestion that all change is an illusion, that the true underlying reality is eternally unchanging and of a single nature. Parmenides denoted this reality as τὸ ἕν (The One). Parmenides' idea seemed implausible to many Greeks, but his student Zeno of Elea challenged them with several famous paradoxes. Aristotle responded to these paradoxes by developing the notion of a potential countable infinity, as well as the infinitely divisible continuum. Unlike the eternal and unchanging cycles of time, he believed that the world is bounded by the celestial spheres and that cumulative stellar magnitude is only finitely multiplicative.
+
+ The Indian philosopher Kanada, founder of the Vaisheshika school, developed a notion of atomism and proposed that light and heat were varieties of the same substance.[152] In the 5th century AD, the Buddhist atomist philosopher Dignāga proposed atoms to be point-sized, durationless, and made of energy. These atomists denied the existence of substantial matter and proposed that movement consisted of momentary flashes of a stream of energy.[153]
+
+ The notion of temporal finitism was inspired by the doctrine of creation shared by the three Abrahamic religions: Judaism, Christianity and Islam. The Christian philosopher John Philoponus presented philosophical arguments against the ancient Greek notion of an infinite past and future. Philoponus' arguments against an infinite past were used by the early Muslim philosopher Al-Kindi (Alkindus), the Jewish philosopher Saadia Gaon (Saadia ben Joseph), and the Muslim theologian Al-Ghazali (Algazel).[154]
+
+ Astronomical models of the universe were proposed soon after astronomy began with the Babylonian astronomers, who viewed the universe as a flat disk floating in the ocean; this view formed the premise for early Greek maps like those of Anaximander and Hecataeus of Miletus.
+
+ Later Greek philosophers, observing the motions of the heavenly bodies, were concerned with developing models of the universe based more profoundly on empirical evidence. The first coherent model was proposed by Eudoxus of Cnidus. According to Aristotle's physical interpretation of the model, celestial spheres eternally rotate with uniform motion around a stationary Earth. Normal matter is entirely contained within the terrestrial sphere.
+
+ De Mundo (composed before 250 BC or between 350 and 200 BC) stated: "Five elements, situated in spheres in five regions, the less being in each case surrounded by the greater—namely, earth surrounded by water, water by air, air by fire, and fire by ether—make up the whole universe".[155]
+
+ This model was also refined by Callippus and, after concentric spheres were abandoned, it was brought into nearly perfect agreement with astronomical observations by Ptolemy. The success of such a model is largely due to the mathematical fact that any sufficiently well-behaved periodic function (such as the position of a planet) can be decomposed into a set of circular functions (the Fourier modes). Other Greek scientists, such as the Pythagorean philosopher Philolaus, postulated (according to Stobaeus' account) that at the center of the universe was a "central fire" around which the Earth, Sun, Moon and Planets revolved in uniform circular motion.[156]
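+
+ The remark about circular functions is the key to why stacking epicycles worked so well, and it is easy to demonstrate. The Python sketch below is an added illustration, not from the article; the square-wave target and the term counts are arbitrary choices. It approximates a periodic function by a growing sum of sines, the modern counterpart of adding epicycles:
+
+ import math
+
+ def square_wave(t):
+     return 1.0 if math.sin(t) >= 0 else -1.0
+
+ def fourier_partial_sum(t, n_terms):
+     # Fourier series of the square wave: (4/pi) * sum of sin((2k+1)t)/(2k+1)
+     return (4 / math.pi) * sum(
+         math.sin((2 * k + 1) * t) / (2 * k + 1) for k in range(n_terms)
+     )
+
+ for n in (1, 3, 10, 50):
+     # Worst error on a smooth stretch, staying away from the jumps at 0 and pi.
+     err = max(abs(square_wave(t / 100) - fourier_partial_sum(t / 100, n))
+               for t in range(40, 280))
+     print(f"{n:3d} circular terms: worst error {err:.3f}")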
+
+ The Greek astronomer Aristarchus of Samos was the first known individual to propose a heliocentric model of the universe. Though the original text has been lost, a reference in Archimedes' book The Sand Reckoner describes Aristarchus's heliocentric model. Archimedes wrote:
+
+ You, King Gelon, are aware the universe is the name given by most astronomers to the sphere the center of which is the center of the Earth, while its radius is equal to the straight line between the center of the Sun and the center of the Earth. This is the common account as you have heard from astronomers. But Aristarchus has brought out a book consisting of certain hypotheses, wherein it appears, as a consequence of the assumptions made, that the universe is many times greater than the universe just mentioned. His hypotheses are that the fixed stars and the Sun remain unmoved, that the Earth revolves about the Sun on the circumference of a circle, the Sun lying in the middle of the orbit, and that the sphere of fixed stars, situated about the same center as the Sun, is so great that the circle in which he supposes the Earth to revolve bears such a proportion to the distance of the fixed stars as the center of the sphere bears to its surface
+
+ Aristarchus thus believed the stars to be very far away, and saw this as the reason why stellar parallax had not been observed, that is, the stars had not been observed to move relative to each other as the Earth moved around the Sun. The stars are in fact much farther away than the distance that was generally assumed in ancient times, which is why stellar parallax is only detectable with precision instruments. The geocentric model, consistent with planetary parallax, was assumed to be an explanation for the unobservability of the parallel phenomenon, stellar parallax. The rejection of the heliocentric view was apparently quite strong, as the following passage from Plutarch suggests (On the Apparent Face in the Orb of the Moon):
+
+ Cleanthes [a contemporary of Aristarchus and head of the Stoics] thought it was the duty of the Greeks to indict Aristarchus of Samos on the charge of impiety for putting in motion the Hearth of the Universe [i.e. the Earth], ... supposing the heaven to remain at rest and the Earth to revolve in an oblique circle, while it rotates, at the same time, about its own axis
+
+ The only other astronomer from antiquity known by name who supported Aristarchus's heliocentric model was Seleucus of Seleucia, a Hellenistic astronomer who lived a century after Aristarchus.[157][158][159] According to Plutarch, Seleucus was the first to prove the heliocentric system through reasoning, but it is not known what arguments he used. Seleucus' arguments for a heliocentric cosmology were probably related to the phenomenon of tides.[160] According to Strabo (1.1.9), Seleucus was the first to state that the tides are due to the attraction of the Moon, and that the height of the tides depends on the Moon's position relative to the Sun.[161] Alternatively, he may have proved heliocentricity by determining the constants of a geometric model for it, and by developing methods to compute planetary positions using this model, much as Nicolaus Copernicus later did in the 16th century.[162] During the Middle Ages, heliocentric models were also proposed by the Indian astronomer Aryabhata,[163] and by the Persian astronomers Albumasar[164] and Al-Sijzi.[165]
+
+ The Aristotelian model was accepted in the Western world for roughly two millennia, until Copernicus revived Aristarchus's perspective that the astronomical data could be explained more plausibly if the Earth rotated on its axis and if the Sun were placed at the center of the universe.
+
+ In the center rests the Sun. For who would place this lamp of a very beautiful temple in another or better place than this wherefrom it can illuminate everything at the same time?
+
+ As noted by Copernicus himself, the notion that the Earth rotates is very old, dating at least to Philolaus (c. 450 BC), Heraclides Ponticus (c. 350 BC) and Ecphantus the Pythagorean. Roughly a century before Copernicus, the Christian scholar Nicholas of Cusa also proposed that the Earth rotates on its axis in his book, On Learned Ignorance (1440).[166] Al-Sijzi[167] also proposed that the Earth rotates on its axis. Empirical evidence for the Earth's rotation on its axis, using the phenomenon of comets, was given by Tusi (1201–1274) and Ali Qushji (1403–1474).[168]
+
+ This cosmology was accepted by Isaac Newton, Christiaan Huygens and later scientists.[169] Edmund Halley (1720)[170] and Jean-Philippe de Chéseaux (1744)[171] noted independently that the assumption of an infinite space filled uniformly with stars would lead to the prediction that the nighttime sky would be as bright as the Sun itself; this became known as Olbers' paradox in the 19th century.[172] Newton believed that an infinite space uniformly filled with matter would cause infinite forces and instabilities causing the matter to be crushed inwards under its own gravity.[169] This instability was clarified in 1902 by the Jeans instability criterion.[173] One solution to these paradoxes is the Charlier Universe, in which the matter is arranged hierarchically (systems of orbiting bodies that are themselves orbiting in a larger system, ad infinitum) in a fractal way such that the universe has a negligibly small overall density; such a cosmological model had also been proposed earlier in 1761 by Johann Heinrich Lambert.[53][174] A significant astronomical advance of the 18th century was the realization by Thomas Wright, Immanuel Kant and others that some nebulae might be distant "island universes" of stars.[170]
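+
+ The reasoning behind Halley's and de Chéseaux's observation can be sketched quantitatively (a standard argument; n is the number density of stars and L the luminosity of each, symbols introduced here rather than taken from the text). A thin shell of radius r and thickness dr contains n·4πr²·dr stars, each delivering a flux L/(4πr²), so every shell contributes the same amount and the total received flux diverges in an infinite, uniform, eternal universe:
+
+ \[ F = \int_0^{\infty} \frac{L}{4\pi r^2}\, n\, 4\pi r^2 \, dr = \int_0^{\infty} n L \, dr \to \infty. \]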
+
+ In 1919, when the Hooker Telescope was completed, the prevailing view still was that the universe consisted entirely of the Milky Way Galaxy. Using the Hooker Telescope, Edwin Hubble identified Cepheid variables in several spiral nebulae and in 1922–1923 proved conclusively that the Andromeda Nebula and Triangulum, among others, were entire galaxies outside our own, thus proving that the universe consists of a multitude of galaxies.[175]
+
+ The modern era of physical cosmology began in 1917, when Albert Einstein first applied his general theory of relativity to model the structure and dynamics of the universe.[176]
+
+ Footnotes
+
+ Citations
+
+ "Two systems of Hindu thought propound physical theories suggestively similar to those of Greece. Kanada, founder of the Vaisheshika philosophy, held that the world is composed of atoms as many in kind as the various elements. The Jains more nearly approximated to Democritus by teaching that all atoms were of the same kind, producing different effects by diverse modes of combinations. Kanada believed light and heat to be varieties of the same substance; Udayana taught that all heat comes from the Sun; and Vachaspati, like Newton, interpreted light as composed of minute particles emitted by substances and striking the eye."
+
+ "The Buddhists denied the existence of substantial matter altogether. Movement consists for them of moments, it is a staccato movement, momentary flashes of a stream of energy... "Everything is evanescent",... says the Buddhist, because there is no stuff... Both systems [Sānkhya, and later Indian Buddhism] share in common a tendency to push the analysis of existence up to its minutest, last elements which are imagined as absolute qualities, or things possessing only one unique quality. They are called "qualities" (guna-dharma) in both systems in the sense of absolute qualities, a kind of atomic, or intra-atomic, energies of which the empirical things are composed. Both systems, therefore, agree in denying the objective reality of the categories of Substance and Quality,... and of the relation of Inference uniting them. There is in Sānkhya philosophy no separate existence of qualities. What we call quality is but a particular manifestation of a subtle entity. To every new unit of quality corresponds a subtle quantum of matter which is called guna, "quality", but represents a subtle substantive entity. The same applies to early Buddhism where all qualities are substantive... or, more precisely, dynamic entities, although they are also called dharmas ('qualities')."
en/5869.html.txt ADDED
@@ -0,0 +1,106 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
+
+
+ A university (Latin: universitas, 'a whole') is an institution of higher (or tertiary) education and research, which awards academic degrees in various academic disciplines. Universities typically provide undergraduate education and postgraduate education.
+
+ The word university is derived from the Latin universitas magistrorum et scholarium, which roughly means "community of teachers and scholars".[1] The modern university system has roots in the European medieval university, which was created in Italy and evolved from cathedral schools for the clergy during the High Middle Ages.[2]
+
+ The original Latin word universitas refers in general to "a number of persons associated into one body, a society, company, community, guild, corporation, etc".[3] At the time of the emergence of urban town life and medieval guilds, specialized "associations of students and teachers with collective legal rights usually guaranteed by charters issued by princes, prelates, or the towns in which they were located" came to be denominated by this general term. Like other guilds, they were self-regulating and determined the qualifications of their members.[4]
+
+ In modern usage the word has come to mean "An institution of higher education offering tuition in mainly non-vocational subjects and typically having the power to confer degrees,"[5] with the earlier emphasis on its corporate organization considered as applying historically to Medieval universities.[6]
+
+ The original Latin word referred to degree-awarding institutions of learning in Western and Central Europe, where this form of legal organisation was prevalent and from where the institution spread around the world.
+
+ An important idea in the definition of a university is the notion of academic freedom. The first documentary evidence of this comes from early in the life of the University of Bologna, which adopted an academic charter, the Constitutio Habita,[7] in 1158 or 1155,[8] which guaranteed the right of a traveling scholar to unhindered passage in the interests of education. Today this is claimed as the origin of "academic freedom".[9] This is now widely recognised internationally - on 18 September 1988, 430 university rectors signed the Magna Charta Universitatum,[10] marking the 900th anniversary of Bologna's foundation. The number of universities signing the Magna Charta Universitatum continues to grow, drawing from all parts of the world.
+
+ According to Encyclopædia Britannica, the earliest universities were founded in Asia and Africa, predating the first European medieval universities.[11] Scholars occasionally call the University of Al Quaraouiyine (name given in 1963), founded in Morocco by Fatima al-Fihri in 859, a university,[12][13][14][15] although Jacques Verger writes that this is done out of scholarly convenience.[16] Several scholars consider that al-Qarawiyyin was founded[17][18] and run[19][20][21][22][23] as a madrasa until after World War II. They date the transformation of the madrasa of al-Qarawiyyin into a university to its modern reorganization in 1963.[24][25][19] In the wake of these reforms, al-Qarawiyyin was officially renamed "University of Al Quaraouiyine" two years later.[24]
+
+ Some scholars, including Makdisi, have argued that early medieval universities were influenced by the madrasas in Al-Andalus, the Emirate of Sicily, and the Middle East during the Crusades.[26][27][28] Norman Daniel, however, views this argument as overstated.[29] Roy Lowe and Yoshihito Yasuhara have recently drawn on the well-documented influences of scholarship from the Islamic world on the universities of Western Europe to call for a reconsideration of the development of higher education, turning away from a concern with local institutional structures to a broader consideration within a global context.[30]
+
+ The university is generally regarded as a formal institution that has its origin in the Medieval Christian tradition.[31][32] European higher education took place for hundreds of years in cathedral schools or monastic schools (scholae monasticae), in which monks and nuns taught classes; evidence of these immediate forerunners of the later university at many places dates back to the 6th century.[33] The earliest universities were developed under the aegis of the Latin Church by papal bull as studia generalia and perhaps from cathedral schools. It is possible, however, that the development of cathedral schools into universities was quite rare, with the University of Paris being an exception.[34] Later they were also founded by kings (University of Naples Federico II, Charles University in Prague, Jagiellonian University in Kraków) or municipal administrations (University of Cologne, University of Erfurt). In the early medieval period, most new universities were founded from pre-existing schools, usually when these schools were deemed to have become primarily sites of higher education. Many historians state that universities and cathedral schools were a continuation of the interest in learning promoted by the residence of a religious community.[35] Pope Gregory VII was instrumental in promoting and regulating the concept of the modern university, as his 1079 Papal Decree ordered the regulated establishment of cathedral schools that transformed themselves into the first European universities.[36]
+
+ The first universities in Europe with a form of corporate/guild structure were the University of Bologna (1088), the University of Paris (c.1150, later associated with the Sorbonne), and the University of Oxford (1167).
+
+ The University of Bologna began as a law school teaching the ius gentium or Roman law of peoples which was in demand across Europe for those defending the right of incipient nations against empire and church. Bologna's special claim to Alma Mater Studiorum[clarification needed] is based on its autonomy, its awarding of degrees, and other structural arrangements, making it the oldest continuously operating institution[8] independent of kings, emperors or any kind of direct religious authority.[37][38]
+
+ The conventional date of 1088, or 1087 according to some,[39] records when Irnerius commenced teaching Emperor Justinian's 6th-century codification of Roman law, the Corpus Iuris Civilis, recently discovered at Pisa. Lay students arrived in the city from many lands, entering into a contract to gain this knowledge and organising themselves into 'Nationes', divided between that of the Cismontanes and that of the Ultramontanes. The students "had all the power … and dominated the masters".[40][41]
+
+ In Europe, young men proceeded to university when they had completed their study of the trivium–the preparatory arts of grammar, rhetoric and dialectic or logic–and the quadrivium: arithmetic, geometry, music, and astronomy.
+
+ All over Europe rulers and city governments began to create universities to satisfy a European thirst for knowledge, and the belief that society would benefit from the scholarly expertise generated from these institutions. Princes and leaders of city governments perceived the potential benefits of having a scholarly expertise develop with the ability to address difficult problems and achieve desired ends. The emergence of humanism was essential to this understanding of the possible utility of universities as well as the revival of interest in knowledge gained from ancient Greek texts.[42]
+
+ The rediscovery of Aristotle's works–more than 3000 pages of them would eventually be translated–fuelled a spirit of inquiry into natural processes that had already begun to emerge in the 12th century. Some scholars believe that these works represented one of the most important document discoveries in Western intellectual history.[43] Richard Dales, for instance, calls the discovery of Aristotle's works "a turning point in the history of Western thought."[44] After Aristotle re-emerged, a community of scholars, primarily communicating in Latin, accelerated the process and practice of attempting to reconcile the thoughts of Greek antiquity, and especially ideas related to understanding the natural world, with those of the church. The efforts of this "scholasticism" were focused on applying Aristotelian logic and thoughts about natural processes to biblical passages and attempting to prove the viability of those passages through reason. This became the primary mission of lecturers, and the expectation of students.
+
+ The university culture developed differently in northern Europe than it did in the south, although the northern (primarily Germany, France and Great Britain) and southern universities (primarily Italy) did have many elements in common. Latin was the language of the university, used for all texts, lectures, disputations and examinations. Professors lectured on the books of Aristotle for logic, natural philosophy, and metaphysics; while Hippocrates, Galen, and Avicenna were used for medicine. Outside of these commonalities, great differences separated north and south, primarily in subject matter. Italian universities focused on law and medicine, while the northern universities focused on the arts and theology. There were distinct differences in the quality of instruction in these areas which were congruent with their focus, so scholars would travel north or south based on their interests and means. There was also a difference in the types of degrees awarded at these universities. English, French and German universities usually awarded bachelor's degrees, with the exception of degrees in theology, for which the doctorate was more common. Italian universities awarded primarily doctorates. The distinction can be attributed to the intent of the degree holder after graduation – in the north the focus tended to be on acquiring teaching positions, while in the south students often went on to professional positions.[45] The structure of northern universities tended to be modeled after the system of faculty governance developed at the University of Paris. Southern universities tended to be patterned after the student-controlled model begun at the University of Bologna.[46] Among the southern universities, a further distinction has been noted between those of northern Italy, which followed the pattern of Bologna as a "self-regulating, independent corporation of scholars" and those of southern Italy and Iberia, which were "founded by royal and imperial charter to serve the needs of government."[47]
+
+ During the Early Modern period (approximately late 15th century to 1800), the universities of Europe would see a tremendous amount of growth, productivity and innovative research. At the end of the Middle Ages, about 400 years after the first European university was founded, there were twenty-nine universities spread throughout Europe. In the 15th century, twenty-eight new ones were created, with another eighteen added between 1500 and 1625.[48] This pace continued until by the end of the 18th century there were approximately 143 universities in Europe, with the highest concentrations in the German Empire (34), Italian countries (26), France (25), and Spain (23) – this was close to a 500% increase over the number of universities toward the end of the Middle Ages. This number does not include the numerous universities that disappeared, or institutions that merged with other universities during this time.[49] The identification of a university was not necessarily obvious during the Early Modern period, as the term is applied to a burgeoning number of institutions. In fact, the term "university" was not always used to designate a higher education institution. In Mediterranean countries, the term studium generale was still often used, while "Academy" was common in Northern European countries.[50]
+
+ The propagation of universities was not necessarily a steady progression, as the 17th century was rife with events that adversely affected university expansion. Many wars, and especially the Thirty Years' War, disrupted the university landscape throughout Europe at different times. War, plague, famine, regicide, and changes in religious power and structure often adversely affected the societies that provided support for universities. Internal strife within the universities themselves, such as student brawling and absentee professors, acted to destabilize these institutions as well. Universities were also reluctant to give up older curricula, and the continued reliance on the works of Aristotle defied contemporary advancements in science and the arts.[51] This era was also affected by the rise of the nation-state. As universities increasingly came under state control, or formed under the auspices of the state, the faculty governance model (begun by the University of Paris) became more and more prominent. Although the older student-controlled universities still existed, they slowly started to move toward this structural organization. Control of universities still tended to be independent, although university leadership was increasingly appointed by the state.[52]
+
+ Although the structural model provided by the University of Paris, where student members are controlled by faculty "masters", provided a standard for universities, the application of this model took at least three different forms. There were universities that had a system of faculties whose teaching addressed a very specific curriculum; this model tended to train specialists. There was a collegiate or tutorial model based on the system at University of Oxford where teaching and organization was decentralized and knowledge was more of a generalist nature. There were also universities that combined these models, using the collegiate model but having a centralized organization.[53]
+
+ Early Modern universities initially continued the curriculum and research of the Middle Ages: natural philosophy, logic, medicine, theology, mathematics, astronomy, astrology, law, grammar and rhetoric. Aristotle was prevalent throughout the curriculum, while medicine also depended on Galen and Arabic scholarship. The importance of humanism for changing this state of affairs cannot be overstated.[54] Once humanist professors joined the university faculty, they began to transform the study of grammar and rhetoric through the studia humanitatis. Humanist professors focused on the ability of students to write and speak with distinction, to translate and interpret classical texts, and to live honorable lives.[55] Other scholars within the university were affected by the humanist approaches to learning and their linguistic expertise in relation to ancient texts, as well as the ideology that advocated the ultimate importance of those texts.[56] Professors of medicine such as Niccolò Leoniceno, Thomas Linacre and William Cop were often trained in and taught from a humanist perspective as well as translated important ancient medical texts. The critical mindset imparted by humanism was imperative for changes in universities and scholarship. For instance, Andreas Vesalius was educated in a humanist fashion before producing a translation of Galen, whose ideas he verified through his own dissections. In law, Andreas Alciatus infused the Corpus Juris with a humanist perspective, while Jacques Cujas's humanist writings were paramount to his reputation as a jurist. Philipp Melanchthon cited the works of Erasmus as a highly influential guide for connecting theology back to original texts, which was important for the reform at Protestant universities.[57] Galileo Galilei, who taught at the Universities of Pisa and Padua, and Martin Luther, who taught at the University of Wittenberg (as did Melanchthon), also had humanist training. The task of the humanists was to slowly permeate the university; to increase the humanist presence in professorships and chairs, syllabi and textbooks so that published works would demonstrate the humanistic ideal of science and scholarship.[58]
+
+ Although the initial focus of the humanist scholars in the university was the discovery, exposition and insertion of ancient texts and languages into the university, and the ideas of those texts into society generally, their influence was ultimately quite progressive. The emergence of classical texts brought new ideas and led to a more creative university climate (as the notable list of scholars above attests). A focus on knowledge coming from self, from the human, has a direct implication for new forms of scholarship and instruction, and was the foundation for what is commonly known as the humanities. This disposition toward knowledge manifested in not simply the translation and propagation of ancient texts, but also their adaptation and expansion. For instance, Vesalius was instrumental in advocating the use of Galen, but he also invigorated this text with experimentation, disagreements and further research.[59] The propagation of these texts, especially within the universities, was greatly aided by the emergence of the printing press and the beginning of the use of the vernacular, which allowed for the printing of relatively large texts at reasonable prices.[60]
+
+ Examining the influence of humanism on scholars in medicine, mathematics, astronomy and physics may suggest that humanism and universities were a strong impetus for the scientific revolution. Although the connection between humanism and the scientific discovery may very well have begun within the confines of the university, the connection has been commonly perceived as having been severed by the changing nature of science during the Scientific Revolution. Historians such as Richard S. Westfall have argued that the overt traditionalism of universities inhibited attempts to re-conceptualize nature and knowledge and caused an indelible tension between universities and scientists.[61] This resistance to changes in science may have been a significant factor in driving many scientists away from the university and toward private benefactors, usually in princely courts, and associations with newly forming scientific societies.[62]
+
+ Other historians find incongruity in the proposition that the very place where the vast number of the scholars that influenced the scientific revolution received their education should also be the place that inhibits their research and the advancement of science. In fact, more than 80% of the European scientists between 1450–1650 included in the Dictionary of Scientific Biography were university trained, of which approximately 45% held university posts.[63] It was the case that the academic foundations remaining from the Middle Ages were stable, and they did provide for an environment that fostered considerable growth and development. There was considerable reluctance on the part of universities to relinquish the symmetry and comprehensiveness provided by the Aristotelian system, which was effective as a coherent system for understanding and interpreting the world. However, university professors still utilized some autonomy, at least in the sciences, to choose epistemological foundations and methods. For instance, Melanchthon and his disciples at University of Wittenberg were instrumental for integrating Copernican mathematical constructs into astronomical debate and instruction.[64] Another example was the short-lived but fairly rapid adoption of Cartesian epistemology and methodology in European universities, and the debates surrounding that adoption, which led to more mechanistic approaches to scientific problems as well as demonstrated an openness to change. There are many examples which belie the commonly perceived intransigence of universities.[65] Although universities may have been slow to accept new sciences and methodologies as they emerged, when they did accept new ideas it helped to convey legitimacy and respectability, and supported the scientific changes through providing a stable environment for instruction and material resources.[66]
+
+ Regardless of the way the tension between universities, individual scientists, and the scientific revolution itself is perceived, there was a discernible impact on the way that university education was constructed. Aristotelian epistemology provided a coherent framework not simply for knowledge and knowledge construction, but also for the training of scholars within the higher education setting. The creation of new scientific constructs during the scientific revolution, and the epistemological challenges that were inherent within this creation, initiated the idea of both the autonomy of science and the hierarchy of the disciplines. Instead of entering higher education to become a "general scholar" immersed in becoming proficient in the entire curriculum, there emerged a type of scholar that put science first and viewed it as a vocation in itself. The divergence between those focused on science and those still entrenched in the idea of a general scholar exacerbated the epistemological tensions that were already beginning to emerge.[67]
+
+ The epistemological tensions between scientists and universities were also heightened by the economic realities of research during this time, as individual scientists, associations and universities were vying for limited resources. There was also competition from the formation of new colleges funded by private benefactors and designed to provide free education to the public, or established by local governments to provide a knowledge hungry populace with an alternative to traditional universities.[68] Even when universities supported new scientific endeavors, and the university provided foundational training and authority for the research and conclusions, they could not compete with the resources available through private benefactors.[69]
+
+ By the end of the early modern period, the structure and orientation of higher education had changed in ways that are eminently recognizable for the modern context. Aristotle was no longer a force providing the epistemological and methodological focus for universities and a more mechanistic orientation was emerging. The hierarchical place of theological knowledge had for the most part been displaced and the humanities had become a fixture, and a new openness was beginning to take hold in the construction and dissemination of knowledge that were to become imperative for the formation of the modern state.
+
+ By the 18th century, universities published their own research journals and by the 19th century, the German and the French university models had arisen. The German, or Humboldtian model, was conceived by Wilhelm von Humboldt and based on Friedrich Schleiermacher's liberal ideas pertaining to the importance of freedom, seminars, and laboratories in universities.[citation needed] The French university model involved strict discipline and control over every aspect of the university.
+
+ Until the 19th century, religion played a significant role in university curriculum; however, the role of religion in research universities decreased in the 19th century, and by the end of the 19th century, the German university model had spread around the world. Universities concentrated on science in the 19th and 20th centuries and became increasingly accessible to the masses. In the United States, the Johns Hopkins University was the first to adopt the (German) research university model; this model was subsequently adopted by most other American universities. In Britain, the move from Industrial Revolution to modernity saw the arrival of new civic universities with an emphasis on science and engineering, a movement initiated in 1960 by Sir Keith Murray (chairman of the University Grants Committee) and Sir Samuel Curran, with the formation of the University of Strathclyde.[72] The British also established universities worldwide, and higher education became available to the masses not only in Europe but worldwide.
+
+ In 1963, the Robbins Report on universities in the United Kingdom concluded that such institutions should have four main "objectives essential to any properly balanced system: instruction in skills; the promotion of the general powers of the mind so as to produce not mere specialists but rather cultivated men and women; to maintain research in balance with teaching, since teaching should not be separated from the advancement of learning and the search for truth; and to transmit a common culture and common standards of citizenship."[73]
+
+ In the early 21st century, concerns were raised over the increasing managerialisation and standardisation of universities worldwide. Neo-liberal management models have in this sense been critiqued for creating "corporate universities (where) power is transferred from faculty to managers, economic justifications dominate, and the familiar 'bottom line' eclipses pedagogical or intellectual concerns".[74] Academics' understanding of time, pedagogical pleasure, vocation, and collegiality have been cited as possible ways of alleviating such problems.[75]
+
+ A national university is generally a university created or run by a national state, but which at the same time is an autonomous institution functioning as a completely independent body inside the same state. Some national universities are closely associated with national cultural, religious or political aspirations, for instance the National University of Ireland, which formed partly from the Catholic University of Ireland which was created almost immediately and specifically in answer to the non-denominational universities which had been set up in Ireland in 1850. In the years leading up to the Easter Rising, and in no small part as a result of the Gaelic Romantic revivalists, the NUI collected a large amount of information on the Irish language and Irish culture.[citation needed] Reforms in Argentina were the result of the University Revolution of 1918 and its subsequent reforms, which incorporated values that sought a more equal and laic[further explanation needed] higher education system.
+
+ Universities created by bilateral or multilateral treaties between states are intergovernmental. An example is the Academy of European Law, which offers training in European law to lawyers, judges, barristers, solicitors, in-house counsel and academics. EUCLID (Pôle Universitaire Euclide, Euclid University) is chartered as a university and umbrella organisation dedicated to sustainable development in signatory countries, and the United Nations University engages in efforts to resolve the pressing global problems that are of concern to the United Nations, its peoples and member states. The European University Institute, a post-graduate university specialised in the social sciences, is officially an intergovernmental organisation, set up by the member states of the European Union.
+
+ Although each institution is organized differently, nearly all universities have a board of trustees; a president, chancellor, or rector; at least one vice president, vice-chancellor, or vice-rector; and deans of various divisions. Universities are generally divided into a number of academic departments, schools or faculties. Public university systems are governed by government-run higher education boards[citation needed]. They review financial requests and budget proposals and then allocate funds for each university in the system. They also approve new programs of instruction and cancel or make changes in existing programs. In addition, they plan for the further coordinated growth and development of the various institutions of higher education in the state or country. However, many public universities in the world have a considerable degree of financial, research and pedagogical autonomy. Private universities are privately funded and generally have broader independence from state policies. However, they may have less independence from business corporations depending on the source of their finances.
+
+ The funding and organization of universities varies widely between different countries around the world. In some countries universities are predominantly funded by the state, while in others funding may come from donors or from fees which students attending the university must pay. In some countries the vast majority of students attend university in their local town, while in other countries universities attract students from all over the world, and may provide university accommodation for their students.[76]
+
+ The definition of a university varies widely, even within some countries. Where there is clarification, it is usually set by a government agency. For example:
+
+ In Australia, the Tertiary Education Quality and Standards Agency (TEQSA) is the independent national regulator of the higher education sector. Students' rights within university are also protected by the Education Services for Overseas Students Act (ESOS).
+
+ In the United States there is no nationally standardized definition for the term university, although the term has traditionally been used to designate research institutions and was once reserved for doctorate-granting research institutions. Some states, such as Massachusetts, will only grant a school "university status" if it grants at least two doctoral degrees.[77]
+
+ In the United Kingdom, the Privy Council is responsible for approving the use of the word university in the name of an institution, under the terms of the Further and Higher Education Act 1992.[78]
+
+ In India, a new designation deemed universities has been created for institutions of higher education that are not universities, but work at a very high standard in a specific area of study ("An Institution of Higher Education, other than universities, working at a very high standard in specific area of study, can be declared by the Central Government on the advice of the University Grants Commission as an Institution 'Deemed-to-be-university'"). Institutions that are 'deemed-to-be-university' enjoy the academic status and the privileges of a university.[79] Through this provision many schools that are commercial in nature and have been established just to exploit the demand for higher education have sprung up.[80]
+
+ In Canada, college generally refers to a two-year, non-degree-granting institution, while university connotes a four-year, degree-granting institution. Universities may be sub-classified (as in the Maclean's rankings) into large research universities with many PhD-granting programs and medical schools (for example, McGill University); "comprehensive" universities that have some PhDs but are not geared toward research (such as Waterloo); and smaller, primarily undergraduate universities (such as St. Francis Xavier).
+
+ In Germany, universities are institutions of higher education which have the power to confer bachelor, master and PhD degrees. They are explicitly recognised as such by law and cannot be founded without government approval. The term Universität (i.e. the German term for university) is protected by law and any use without official approval is a criminal offense. Most of them are public institutions, though a few private universities exist. Such universities are always research universities. Apart from these universities, Germany has other institutions of higher education (Hochschule, Fachhochschule). Fachhochschule means a higher education institution which is similar to the former polytechnics in the British education system; the English term used for these German institutions is usually 'university of applied sciences'. They can confer master's degrees but no PhDs. They are similar to the model of teaching universities with less research and the research undertaken being highly practical. Hochschule can refer to various kinds of institutions, often specialised in a certain field (e.g. music, fine arts, business). They might or might not have the power to award PhD degrees, depending on the respective government legislation. If they award PhD degrees, their rank is considered equivalent to that of universities proper (Universität), if not, their rank is equivalent to universities of applied sciences.
+
+ Colloquially, the term university may be used to describe a phase in one's life: "When I was at university..." (in the United States and Ireland, college is often used instead: "When I was in college..."). In Ireland, Australia, New Zealand, Canada, the United Kingdom, Nigeria, the Netherlands, Italy, Spain and the German-speaking countries, university is often contracted to uni. In Ghana, New Zealand, Bangladesh and in South Africa it is sometimes called "varsity" (although this has become uncommon in New Zealand in recent years). "Varsity" was also common usage in the UK in the 19th century.[citation needed] "Varsity" is still in common usage in Scotland.[citation needed]
+
+ In many countries, students are required to pay tuition fees.
+ Many students look to get 'student grants' to cover the cost of university. In 2016, the average outstanding student loan balance per borrower in the United States was US$30,000.[81] In many U.S. states, costs are anticipated to rise for students as a result of decreased state funding given to public universities.[82]
+
+ There are several major exceptions concerning tuition fees. In many European countries, it is possible to study without tuition fees. Public universities in Nordic countries were entirely without tuition fees until around 2005. Denmark, Sweden and Finland then moved to put in place tuition fees for foreign students. Citizens of EU and EEA member states and citizens from Switzerland remain exempted from tuition fees, and the amounts of public grants awarded to promising foreign students were increased to offset some of the impact.[83] The situation in Germany is similar; public universities usually do not charge tuition fees apart from a small administrative fee. For degrees at a postgraduate professional level, tuition fees are sometimes levied. Private universities, however, almost always charge tuition fees.
+
+ The Quaraouiyine Mosque, founded in 859, is the most famous mosque of Morocco and attracted continuous investment by Muslim rulers.
+
+ As for the nature of its curriculum, it was typical of other major madrasahs such as al-Azhar and Al Quaraouiyine, though many of the texts used at the institution came from Muslim Spain...Al Quaraouiyine began its life as a small mosque constructed in 859 C.E. by means of an endowment bequeathed by a wealthy woman of much piety, Fatima bint Muhammed al-Fahri.
+
+ The Adjustments of Original Institutions of the Higher Learning: the Madrasah. Significantly, the institutional adjustments of the madrasahs affected both the structure and the content of these institutions. In terms of structure, the adjustments were twofold: the reorganization of the available original madaris and the creation of new institutions. This resulted in two different types of Islamic teaching institutions in al-Maghrib. The first type was derived from the fusion of old madaris with new universities. For example, Morocco transformed Al-Qarawiyin (859 A.D.) into a university under the supervision of the ministry of education in 1963.
+
+ Higher education has always been an integral part of Morocco, going back to the ninth century when the Karaouine Mosque was established. The madrasa, known today as Al Qayrawaniyan University, became part of the state university system in 1947.
+
+ Madrasa, in modern usage, the name of an institution of learning where the Islamic sciences are taught, i.e. a college for higher studies, as opposed to an elementary school of traditional type (kuttab); in mediaeval usage, essentially a college of law in which the other Islamic sciences, including literary and philosophical ones, were ancillary subjects only.
+
+ A madrasa is a college of Islamic law. The madrasa was an educational institution in which Islamic law (fiqh) was taught according to one or more Sunni rites: Maliki, Shafi'i, Hanafi, or Hanbali. It was supported by an endowment or charitable trust (waqf) that provided for at least one chair for one professor of law, income for other faculty or staff, scholarships for students, and funds for the maintenance of the building. Madrasas contained lodgings for the professor and some of his students. Subjects other than law were frequently taught in madrasas, and even Sufi seances were held in them, but there could be no madrasa without law as technically the major subject.
103
+
104
+ In studying an institution which is foreign and remote in point of time, as is the case of the medieval madrasa, one runs the double risk of attributing to it characteristics borrowed from one's own institutions and one's own times. Thus gratuitous transfers may be made from one culture to the other, and the time factor may be ignored or dismissed as being without significance. One cannot therefore be too careful in attempting a comparative study of these two institutions: the madrasa and the university. But in spite of the pitfalls inherent in such a study, albeit sketchy, the results which may be obtained are well worth the risks involved. In any case, one cannot avoid making comparisons when certain unwarranted statements have already been made and seem to be currently accepted without question. The most unwarranted of these statements is the one which makes of the "madrasa" a "university".
105
+
106
+ al-qarawiyin is the oldest university in Morocco. It was founded as a mosque in Fès in the middle of the ninth century. It has been a destination for students and scholars of Islamic sciences and Arabic studies throughout the history of Morocco. There were also other religious schools like the madras of ibn yusuf and other schools in the sus. This system of basic education called al-ta'lim al-aSil was funded by the sultans of Morocco and many famous traditional families. After independence, al-qarawiyin maintained its reputation, but it seemed important to transform it into a university that would prepare graduates for a modern country while maintaining an emphasis on Islamic studies. Hence, al-qarawiyin university was founded in February 1963 and, while the dean's residence was kept in Fès, the new university initially had four colleges located in major regions of the country known for their religious influences and madrasas. These colleges were kuliyat al-shari's in Fès, kuliyat uSul al-din in Tétouan, kuliyat al-lugha al-'arabiya in Marrakech (all founded in 1963), and kuliyat al-shari'a in Ait Melloul near Agadir, which was founded in 1979.
en/587.html.txt ADDED
@@ -0,0 +1,263 @@
1
+
2
+
3
+
4
+
5
+ [Infobox summary: Soviet victory.[1] Axis: Army Group B; Army Group Don.[Note 2] Soviet: Stalingrad Front; Don Front;[Note 3] Southwestern Front.[Note 4]]
26
+
27
+ In the Battle of Stalingrad (23 August 1942 – 2 February 1943),[18][19][20][21] Germany and its allies fought the Soviet Union for control of the city of Stalingrad (now Volgograd) in Southern Russia. Marked by fierce close-quarters combat and direct assaults on civilians in air raids, it is one of the bloodiest battles in the history of warfare, with an estimated 2 million total casualties.[22] After their defeat at Stalingrad, the German High Command had to withdraw considerable military forces from the Western Front to replace their losses.[1]
28
+
29
+ The German offensive to capture Stalingrad began in August 1942, using the 6th Army and elements of the 4th Panzer Army. The attack was supported by intense Luftwaffe bombing that reduced much of the city to rubble. The fighting degenerated into house-to-house combat, as both sides poured reinforcements into the city. By mid-November, the Germans had pushed the Soviet defenders back at great cost into narrow zones along the west bank of the Volga River.
30
+
31
+ On 19 November, the Red Army launched Operation Uranus, a two-pronged attack targeting the weaker Romanian and Hungarian armies protecting the 6th Army's flanks.[23] The Axis forces on the flanks were overrun and the 6th Army was cut off and surrounded in the Stalingrad area. Adolf Hitler was determined to hold the city at all costs and barred the 6th Army from attempting a breakout; instead, attempts were made to supply it by air and to break the encirclement from the outside. Heavy fighting continued for another two months. At the beginning of February 1943, the Axis forces in Stalingrad, having exhausted their ammunition and food, surrendered[24]:932 after five months, one week and three days of fighting.
32
+
33
+ By the spring of 1942, despite the failure of Operation Barbarossa to decisively defeat the Soviet Union in a single campaign, the Wehrmacht had captured vast expanses of territory, including Ukraine, Belarus, and the Baltic republics. Elsewhere, the war had been progressing well: the U-boat offensive in the Atlantic had been very successful and Erwin Rommel had just captured Tobruk.[25]:522 In the east, they had stabilised their front in a line running from Leningrad in the north to Rostov in the south. There were a number of salients, but these were not particularly threatening. Hitler was confident that he could master the Red Army after the winter of 1942, because even though Army Group Centre (Heeresgruppe Mitte) had suffered heavy losses west of Moscow the previous winter, 65% of its infantry had not been engaged and had been rested and re-equipped. Neither Army Group North nor Army Group South had been particularly hard pressed over the winter.[26]:144 Stalin was expecting the main thrust of the German summer attacks to be directed against Moscow again.[1]:498
34
+
35
+ With the initial operations being very successful, the Germans decided that their summer campaign in 1942 would be directed at the southern parts of the Soviet Union. The initial objectives in the region around Stalingrad were the destruction of the industrial capacity of the city and the deployment of forces to block the Volga River. The river was a key route from the Caucasus and the Caspian Sea to central Russia. Its capture would disrupt commercial river traffic. The Germans cut the pipeline from the oilfields when they captured Rostov on 23 July. The capture of Stalingrad would make the delivery of Lend Lease supplies via the Persian Corridor much more difficult.[24]:909[27][28]:88
36
+
37
+ On 23 July 1942, Hitler personally rewrote the operational objectives for the 1942 campaign, greatly expanding them to include the occupation of the city of Stalingrad. Both sides began to attach propaganda value to the city, based on it bearing the name of the leader of the Soviet Union. Hitler proclaimed that after Stalingrad's capture, its male citizens were to be killed and all women and children were to be deported because its population was "thoroughly communistic" and "especially dangerous".[29] It was assumed that the fall of the city would also firmly secure the northern and western flanks of the German armies as they advanced on Baku, with the aim of securing these strategic petroleum resources for Germany.[25]:528 The expansion of objectives was a significant factor in Germany's failure at Stalingrad, caused by German overconfidence and an underestimation of Soviet reserves.[30]
38
+
39
+ The Soviets realised that they were pressed for time and resources. They ordered that anyone strong enough to hold a rifle be sent to fight.[31]:94
40
+
41
+ If I do not get the oil of Maikop and Grozny then I must finish [liquidieren; "kill off", "liquidate"] this war.
42
+
43
+ Army Group South was selected for a sprint forward through the southern Russian steppes into the Caucasus to capture the vital Soviet oil fields there. The planned summer offensive, code-named Fall Blau (Case Blue), was to include the German 6th, 17th, 4th Panzer and 1st Panzer Armies. Army Group South had overrun the Ukrainian Soviet Socialist Republic in 1941. Poised in Eastern Ukraine, it was to spearhead the offensive.[32]
44
+
45
+ Hitler intervened, however, ordering the Army Group to split in two. Army Group South (A), under the command of Wilhelm List, was to continue advancing south towards the Caucasus as planned with the 17th Army and First Panzer Army. Army Group South (B), including Friedrich Paulus's 6th Army and Hermann Hoth's 4th Panzer Army, was to move east towards the Volga and Stalingrad. Army Group B was commanded by General Maximilian von Weichs.[24]:915
46
+
47
+ The start of Case Blue had been planned for late May 1942. However, a number of German and Romanian units that were to take part in Blau were besieging Sevastopol on the Crimean Peninsula. Delays in ending the siege pushed back the start date for Blau several times, and the city did not fall until early July.
48
+
49
+ Operation Fridericus I, launched by the Germans against the "Isium bulge", pinched off the Soviet salient in the Second Battle of Kharkov and resulted in the envelopment of a large Soviet force between 17 May and 29 May. Similarly, Operation Wilhelm attacked Voltshansk on 13 June, and Operation Fridericus attacked Kupiansk on 22 June.[33]
50
+
51
+ Blau finally opened as Army Group South began its attack into southern Russia on 28 June 1942. The German offensive started well. Soviet forces offered little resistance in the vast empty steppes and started streaming eastward. Several attempts to re-establish a defensive line failed when German units outflanked them. Two major pockets were formed and destroyed: the first, northeast of Kharkov, on 2 July, and a second, around Millerovo, Rostov Oblast, a week later. Meanwhile, the Hungarian 2nd Army and the German 4th Panzer Army had launched an assault on Voronezh, capturing the city on 5 July.
52
+
53
+ The initial advance of the 6th Army was so successful that Hitler intervened and ordered the 4th Panzer Army to join Army Group South (A) to the south. A massive traffic jam resulted when the 4th Panzer and the 1st Panzer both required the few roads in the area. Both armies were stopped dead while they attempted to clear the resulting mess of thousands of vehicles. The delay was long, and it is thought that it cost the advance at least one week. With the advance now slowed, Hitler changed his mind and reassigned the 4th Panzer Army back to the attack on Stalingrad.
54
+
55
+ By the end of July, the Germans had pushed the Soviets across the Don River. At this point, the Don and Volga Rivers are only 65 km (40 mi) apart, and the Germans left their main supply depots west of the Don, which had important implications later in the course of the battle. The Germans began using the armies of their Italian, Hungarian and Romanian allies to guard their left (northern) flank. Occasionally Italian actions were mentioned in official German communiques.[34][35][36][37] Italian forces were generally held in little regard by the Germans, and were accused of having low morale: in reality, the Italian divisions fought comparatively well, with the 3rd Mountain Infantry Division Ravenna and 5th Infantry Division Cosseria proving to have good morale, according to a German liaison officer,[38] and were forced to retreat only after a massive armoured attack in which German reinforcements had failed to arrive in time, according to a German historian.[39] Indeed the Italians distinguished themselves in numerous battles, such as the Battle of Nikolayevka.
56
+
57
+ On 25 July the Germans faced stiff resistance with a Soviet bridgehead west of Kalach. "We had had to pay a high cost in men and material ... left on the Kalach battlefield were numerous burnt-out or shot-up German tanks."[33]:33–34, 39–40
58
+
59
+ The Germans formed bridgeheads across the Don on 20 August, with the 295th and 76th Infantry Divisions enabling the XIVth Panzer Corps "to thrust to the Volga north of Stalingrad." The German 6th Army was only a few dozen kilometres from Stalingrad. The 4th Panzer Army, ordered south on 13 July to block the Soviet retreat "weakened by the 17th Army and the 1st Panzer Army", had turned northwards to help take the city from the south.[33]:28, 30, 40, 48, 57
60
+
61
+ To the south, Army Group A was pushing far into the Caucasus, but their advance slowed as supply lines grew overextended. The two German army groups were not positioned to support one another due to the great distances involved.
62
+
63
+ After German intentions became clear in July 1942, Stalin appointed General Andrey Yeryomenko as commander of the Southeastern Front on 1 August 1942. Yeryomenko and Commissar Nikita Khrushchev were tasked with planning the defence of Stalingrad.[40]:25, 48 The eastern border of Stalingrad was the wide River Volga, and over the river, additional Soviet units were deployed. These units became the newly formed 62nd Army, which Yeryomenko placed under the command of Lieutenant General Vasiliy Chuikov on 11 September 1942.[33]:80 The situation was extremely dire. When asked how he interpreted his task, Chuikov responded, "We will defend the city or die in the attempt."[41]:127 The 62nd Army's mission was to defend Stalingrad at all costs. Chuikov's generalship during the battle earned him one of his two Hero of the Soviet Union awards.
64
+
65
+ During the defence of Stalingrad, the Red Army deployed five armies (28th, 51st, 57th, 62nd and 64th Armies) in and around the city and an additional nine armies in the encirclement counter-offensive.[41]:435–438 The nine armies amassed for the counter-offensive were the 24th Army, 65th Army, 66th Army and 16th Air Army from the north as part of the Don Front offensive, and the 1st Guards Army, 5th Tank Army, 21st Army, 2nd Air Army and 17th Air Army from the south as part of the Southwestern Front.
66
+
67
+ David Glantz indicated[42] that four hard-fought battles – collectively known as the Kotluban Operations – north of Stalingrad, where the Soviets made their greatest stand, decided Germany's fate before the Nazis ever set foot in the city itself, and were a turning point in the war. Beginning in late August, continuing in September and into October, the Soviets committed between two and four armies in hastily coordinated and poorly controlled attacks against the Germans' northern flank. The actions resulted in more than 200,000 Soviet Army casualties but did slow the German assault.
68
+
69
+ On 23 August the 6th Army reached the outskirts of Stalingrad in pursuit of the 62nd and 64th Armies, which had fallen back into the city. Kleist later said after the war:[43]
70
+
71
+ The capture of Stalingrad was subsidiary to the main aim. It was only of importance as a convenient place, in the bottleneck between Don and the Volga, where we could block an attack on our flank by Russian forces coming from the east. At the start, Stalingrad was no more than a name on the map to us.[43]
72
+
73
+ The Soviets had enough warning of the German advance to ship grain, cattle, and railway cars across the Volga and out of harm's way, but most civilian residents were not evacuated. This "harvest victory" left the city short of food even before the German attack began. Before the Heer reached the city itself, the Luftwaffe had rendered the River Volga, vital for bringing supplies into the city, unusable to Soviet shipping. Between 25 and 31 July, 32 Soviet ships were sunk, with another nine crippled.[44]:69
74
+
75
+ The battle began with the heavy bombing of the city by Generaloberst Wolfram von Richthofen's Luftflotte 4, which in the summer and autumn of 1942 was the single most powerful air formation in the world. Some 1,000 tons of bombs were dropped in 48 hours, more than in London at the height of the Blitz.[2]:122 Stalin refused to evacuate the civilian population from the city, so when the bombing began, 400,000 civilians were trapped within the city boundaries. The exact number of civilians killed during the course of the battle is unknown but was most likely very high. Around 40,000 civilians were moved to Germany as slave workers, some fled the city during the battle and a small number were evacuated by the Soviets. In February 1943, only between 10,000 and 60,000 civilians were still alive in Stalingrad. Much of the city was quickly turned to rubble, although some factories continued production while workers joined in the fighting. The Stalingrad Tractor Factory continued to turn out T-34 tanks literally until German troops burst into the plant. The 369th (Croatian) Reinforced Infantry Regiment was the only non-German unit[45] selected by the Wehrmacht to enter the city of Stalingrad during assault operations. It fought as part of the 100th Jäger Division.
76
+
77
+ Stalin rushed all available troops to the east bank of the Volga, some from as far away as Siberia. All the regular ferries were quickly destroyed by the Luftwaffe, which then targeted troop barges being towed slowly across the river by tugs.[40] It has been said that Stalin prevented civilians from leaving the city in the belief that their presence would encourage greater resistance from the city's defenders.[41]:106 Civilians, including women and children, were put to work building trenchworks and protective fortifications. A massive German air raid on 23 August caused a firestorm, killing hundreds and turning Stalingrad into a vast landscape of rubble and burnt ruins. Ninety percent of the living space in the Voroshilovskiy area was destroyed. Between 23 and 26 August, Soviet reports indicate 955 people were killed and another 1,181 wounded as a result of the bombing.[2]:73 Casualties of 40,000 were greatly exaggerated,[5]:188–89 and after 25 August the Soviets did not record any civilian and military casualties as a result of air raids.[Note 6]
78
+
79
+ Vasily Chuikov
80
+
81
+ The Soviet Air Force, the Voyenno-Vozdushnye Sily (VVS), was swept aside by the Luftwaffe. The VVS bases in the immediate area lost 201 aircraft between 23 and 31 August, and despite meagre reinforcements of some 100 aircraft in August, it was left with just 192 serviceable aircraft, 57 of which were fighters.[2]:74 The Soviets continued to pour aerial reinforcements into the Stalingrad area in late September, but continued to suffer appalling losses; the Luftwaffe had complete control of the skies.
82
+
83
+ The burden of the initial defence of the city fell on the 1077th Anti-Aircraft Regiment,[41]:106 a unit made up mainly of young female volunteers who had no training for engaging ground targets. Despite this, and with no support available from other units, the AA gunners stayed at their posts and took on the advancing panzers. The German 16th Panzer Division reportedly had to fight the 1077th's gunners "shot for shot" until all 37 anti-aircraft guns were destroyed or overrun. The 16th Panzer was shocked to find that, due to Soviet manpower shortages, it had been fighting female soldiers.[41]:108[47] In the early stages of the battle, the NKVD organised poorly armed "Workers' militias" similar to those that had defended the city twenty-four years earlier, composed of civilians not directly involved in war production for immediate use in the battle. The civilians were often sent into battle without rifles.[41]:109 Staff and students from the local technical university formed a "tank destroyer" unit. They assembled tanks from leftover parts at the tractor factory. These tanks, unpainted and lacking gun-sights, were driven directly from the factory floor to the front line. They could only be aimed at point-blank range through the bore of their gun barrels.[41]:110
84
+
85
+ By the end of August, Army Group South (B) had finally reached the Volga, north of Stalingrad. Another advance to the river south of the city followed, while the Soviets abandoned their Rossoshka position for the inner defensive ring west of Stalingrad. The wings of the 6th Army and the 4th Panzer Army met near Jablotchni along the Zaritza on 2 September.[33]:65 By 1 September, the Soviets could only reinforce and supply their forces in Stalingrad by perilous crossings of the Volga under constant bombardment by artillery and aircraft.
86
+
87
+ On 5 September, the Soviet 24th and 66th Armies organized a massive attack against XIV Panzer Corps. The Luftwaffe helped repel the offensive by heavily attacking Soviet artillery positions and defensive lines. The Soviets were forced to withdraw at midday after only a few hours. Of the 120 tanks the Soviets had committed, 30 were lost to air attack.[2]:75
88
+
89
+ Soviet operations were constantly hampered by the Luftwaffe. On 18 September, the Soviet 1st Guards and 24th Army launched an offensive against VIII Army Corps at Kotluban. VIII. Fliegerkorps dispatched wave after wave of Stuka dive-bombers to prevent a breakthrough. The offensive was repelled. The Stukas claimed 41 of the 106 Soviet tanks knocked out that morning, while escorting Bf 109s destroyed 77 Soviet aircraft.[2]:80
90
+ Amid the debris of the wrecked city, the Soviet 62nd and 64th Armies, which included the Soviet 13th Guards Rifle Division, anchored their defence lines with strong-points in houses and factories.
91
+
92
+ Fighting within the ruined city was fierce and desperate. Lieutenant General Alexander Rodimtsev was in charge of the 13th Guards Rifle Division, and received one of the two Hero of the Soviet Union awards given during the battle for his actions. Stalin's Order No. 227 of 27 July 1942 decreed that all commanders who ordered unauthorised retreats would be subject to a military tribunal.[48] Deserters and presumed malingerers were captured or executed after the fighting.[49] During the battle the 62nd Army had the most arrests and executions: 203 in all, of which 49 were executed, while 139 were sent to penal companies and battalions.[50][51][52][53] The Germans pushing forward into Stalingrad suffered heavy casualties.
93
+
94
+ By 12 September, at the time of their retreat into the city, the Soviet 62nd Army had been reduced to 90 tanks, 700 mortars and just 20,000 personnel.[41] The remaining tanks were used as immobile strong-points within the city. The initial German attack on 14 September attempted to take the city in a rush. The 51st Army Corps' 295th Infantry Division went after the Mamayev Kurgan hill, the 71st attacked the central rail station and toward the central landing stage on the Volga, while the 48th Panzer Corps attacked south of the Tsaritsa River. Rodimtsev's 13th Guards Rifle Division had been hurried up to cross the river and join the defenders inside the city.[54]
95
+
96
+ Though initially successful, the German attacks stalled in the face of Soviet reinforcements brought in from across the Volga. The Soviet 13th Guards Rifle Division, assigned to counterattack at the Mamayev Kurgan and at Railway Station No. 1, suffered particularly heavy losses. Over 30 percent of its soldiers were killed in the first 24 hours, and just 320 out of the original 10,000 survived the entire battle. Both objectives were retaken, but only temporarily. The railway station changed hands 14 times in six hours. By the following evening, the 13th Guards Rifle Division had ceased to exist.
97
+
98
+ Combat raged for three days at the giant grain elevator in the south of the city. About fifty Red Army defenders, cut off from resupply, held the position for five days and fought off ten different assaults before running out of ammunition and water. Only forty dead Soviet fighters were found, though the Germans had thought there were many more due to the intensity of resistance. The Soviets burned large amounts of grain during their retreat in order to deny the enemy food. Paulus chose the grain elevator and silos as the symbol of Stalingrad for a patch he was having designed to commemorate the battle after a German victory.
99
+
100
+ German military doctrine was based on the principle of combined-arms teams and close cooperation between tanks, infantry, engineers, artillery and ground-attack aircraft. Some Soviet commanders adopted the tactic of always keeping their front-line positions as close to the Germans as physically possible; Chuikov called this "hugging" the Germans. This slowed the German advance and reduced the effectiveness of the German advantage in supporting fire.[55]
101
+
102
+ The Red Army gradually adopted a strategy of holding all remaining ground in the city for as long as possible. Thus, they converted multi-floored apartment blocks, factories, warehouses, street corner residences and office buildings into a series of well-defended strong-points held by small 5–10-man units.[55] Manpower in the city was constantly refreshed by bringing additional troops over the Volga. When a position was lost, an immediate attempt was usually made to re-take it with fresh forces.
103
+
104
+ Bitter fighting raged for every ruin, street, factory, house, basement, and staircase. Even the sewers were the sites of firefights. The Germans called this unseen urban warfare Rattenkrieg ("Rat War"),[56] and bitterly joked about capturing the kitchen but still fighting for the living room and the bedroom. Buildings had to be cleared room by room through the bombed-out debris of residential areas, office blocks, basements and apartment high-rises. Some of the taller buildings, blasted into roofless shells by earlier German aerial bombardment, saw floor-by-floor, close-quarters combat, with the Germans and Soviets on alternate levels, firing at each other through holes in the floors.[55] Fighting on and around Mamayev Kurgan, a prominent hill above the city, was particularly merciless; indeed, the position changed hands many times.[33]:67–68[40]:?[57]
105
+
106
+ In another part of the city, a Soviet platoon under the command of Sergeant Yakov Pavlov fortified a four-story building that overlooked a square 300 meters from the river bank, later called Pavlov's House. The soldiers surrounded it with minefields, set up machine-gun positions at the windows and breached the walls in the basement for better communications.[41] The soldiers found about ten Soviet civilians hiding in the basement. They were not relieved, and not significantly reinforced, for two months. The building was labelled Festung ("Fortress") on German maps. Sgt. Pavlov was awarded the Hero of the Soviet Union for his actions.
107
+
108
+ The Germans made slow but steady progress through the city. Positions were taken individually, but the Germans were never able to capture the key crossing points along the river bank. By 27 September, the Germans occupied the southern portion of the city, but the Soviets held the centre and northern part. Most importantly, the Soviets controlled the ferries to their supplies on the east bank of the Volga.[33]:68
109
+
110
+ The Germans used aircraft, tanks and heavy artillery to clear the city with varying degrees of success. Toward the end of the battle, the gigantic railroad gun nicknamed Dora was brought into the area. The Soviets built up a large number of artillery batteries on the east bank of the Volga. This artillery was able to bombard the German positions or at least provide counter-battery fire.
111
+
112
+ Snipers on both sides used the ruins to inflict casualties. The most famous Soviet sniper in Stalingrad was Vasily Zaytsev,[58] with 225 confirmed kills during the battle. Targets were often soldiers bringing up food or water to forward positions. Artillery spotters were an especially prized target for snipers.
113
+
114
+ A significant historical debate concerns the degree of terror in the Red Army. The British historian Antony Beevor noted the "sinister" message from the Stalingrad Front's Political Department on 8 October 1942 that: "The defeatist mood is almost eliminated and the number of treasonous incidents is getting lower" as an example of the sort of coercion Red Army soldiers experienced under the Special Detachments (later to be renamed SMERSH).[59]:154–68 On the other hand, Beevor noted the often extraordinary bravery of the Soviet soldiers in a battle that was only comparable to Verdun, and argued that terror alone cannot explain such self-sacrifice.[41]:154–68 Richard Overy addresses the question of just how important the Red Army's coercive methods were to the Soviet war effort compared with other motivational factors such as hatred for the enemy. He argues that, though it is "easy to argue that from the summer of 1942 the Soviet army fought because it was forced to fight," to concentrate solely on coercion is nonetheless to "distort our view of the Soviet war effort."[60] After conducting hundreds of interviews with Soviet veterans on the subject of terror on the Eastern Front – and specifically about Order No. 227 ("Not a step back!") at Stalingrad – Catherine Merridale notes that, seemingly paradoxically, "their response was frequently relief."[61] Infantryman Lev Lvovich's explanation, for example, is typical for these interviews; as he recalls, "[i]t was a necessary and important step. We all knew where we stood after we had heard it. And we all – it's true – felt better. Yes, we felt better."[62]
115
+
116
+ Many women fought on the Soviet side, or were under fire. As General Chuikov acknowledged, "Remembering the defence of Stalingrad, I can't overlook the very important question ... about the role of women in war, in the rear, but also at the front. Equally with men they bore all the burdens of combat life and together with us men, they went all the way to Berlin."[63] At the beginning of the battle there were 75,000 women and girls from the Stalingrad area who had finished military or medical training, and all of whom were to serve in the battle.[64] Women staffed a great many of the anti-aircraft batteries that fought not only the Luftwaffe but German tanks.[65] Soviet nurses not only treated wounded personnel under fire but were involved in the highly dangerous work of bringing wounded soldiers back to the hospitals under enemy fire.[66] Many of the Soviet wireless and telephone operators were women who often suffered heavy casualties when their command posts came under fire.[67] Though women were not usually trained as infantry, many Soviet women fought as machine gunners, mortar operators, and scouts.[68] Women were also snipers at Stalingrad.[69] Three air regiments at Stalingrad were entirely female.[68] At least three women won the title Hero of the Soviet Union while driving tanks at Stalingrad.[70]
117
+
118
+ For both Stalin and Hitler, Stalingrad became a matter of prestige far beyond its strategic significance.[71] The Soviet command moved units from the Red Army strategic reserve in the Moscow area to the lower Volga, and transferred aircraft from the entire country to the Stalingrad region.
119
+
120
+ The strain on both military commanders was immense: Paulus developed an uncontrollable tic in his eye, which eventually afflicted the left side of his face, while Chuikov experienced an outbreak of eczema that required him to have his hands completely bandaged. Troops on both sides faced the constant strain of close-range combat.[72]
121
+
122
+ After 27 September, much of the fighting in the city shifted north to the industrial district. Having slowly advanced over 10 days against strong Soviet resistance, the 51st Army Corps was finally in front of the three giant factories of Stalingrad: the Red October Steel Factory, the Barrikady Arms Factory and Stalingrad Tractor Factory. It took a few more days for them to prepare for the most savage offensive of all, which was unleashed on 14 October with a concentration of gunfire never seen before.[73]
123
+ Exceptionally intense shelling and bombing paved the way for the first German assault groups. The main attack (led by the 14th Panzer and 305th Infantry Divisions) attacked towards the tractor factory, while another assault led by the 24th Panzer Division hit to the south of the giant plant.[74]
124
+
125
+ The German onslaught crushed the 37th Guards Rifle Division of Major General Viktor Zholudev and in the afternoon the forward assault group reached the tractor factory before arriving at the Volga River, splitting the 62nd Army into two.[75] In response to the German breakthrough to the Volga, the front headquarters committed three battalions from the 300th Rifle Division and the 45th Rifle Division of Colonel Vasily Sokolov, a substantial force of over 2,000 men, to the fighting at the Red October Factory.[76]
126
+
127
+ Fighting raged inside the Barrikady Factory until the end of October.[77] The Soviet-controlled area shrank down to a few strips of land along the western bank of the Volga, and in November the fighting concentrated around what Soviet newspapers referred to as "Lyudnikov's Island", a small patch of ground behind the Barrikady Factory where the remnants of Colonel Ivan Lyudnikov's 138th Rifle Division resisted the ferocious assaults thrown at them by the Germans and became a symbol of the stout Soviet defence of Stalingrad.[78]
128
+
129
+ From 5 to 12 September, Luftflotte 4 conducted 7,507 sorties (938 per day). From 16 to 25 September, it carried out 9,746 missions (975 per day).[5]:195 Determined to crush Soviet resistance, Luftflotte 4's Stukawaffe flew 900 individual sorties against Soviet positions at the Stalingrad Tractor Factory on 5 October. Several Soviet regiments were wiped out; the entire staff of the Soviet 339th Infantry Regiment was killed the following morning during an air raid.[2]:83
130
+
131
+ The Luftwaffe retained air superiority into November, and Soviet daytime aerial resistance was nonexistent. However, the combination of constant air support operations on the German side and the Soviet surrender of the daytime skies began to affect the strategic balance in the air. From 28 June to 20 September, Luftflotte 4's original strength of 1,600 aircraft, of which 1,155 were operational, fell to 950, of which only 550 were operational. The fleet's total strength decreased by 40 percent. Daily sorties decreased from 1,343 per day to 975 per day. Soviet offensives in the central and northern portions of the Eastern Front tied down Luftwaffe reserves and newly built aircraft, reducing Luftflotte 4's percentage of Eastern Front aircraft from 60 percent on 28 June to 38 percent by 20 September. The Kampfwaffe (bomber force) was the hardest hit, having only 232 out of an original force of 480 left.[5]:195 The VVS remained qualitatively inferior, but by the time of the Soviet counter-offensive, the VVS had reached numerical superiority.
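+ As a quick check of the 40 percent figure above (a sketch using only the strength numbers already quoted; the source states the percentage, not the calculation):
+
+ $\frac{1600 - 950}{1600} \approx 0.41$, i.e. roughly the 40 percent decrease stated.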
132
+
133
+ In mid-October, after receiving reinforcements from the Caucasus theatre, the Luftwaffe intensified its efforts against remaining Red Army positions holding the west bank. Luftflotte 4 flew 1,250 sorties on 14 October and its Stukas dropped 550 tonnes of bombs, while German infantry surrounded the three factories.[79] Stukageschwader 1, 2, and 77 had largely silenced Soviet artillery on the eastern bank of the Volga before turning their attention to the shipping that was once again trying to reinforce the narrowing Soviet pockets of resistance. The 62nd Army had been cut in two and, due to intensive air attack on its supply ferries, was receiving much less material support. With the Soviets forced into a 1-kilometre (1,000-yard) strip of land on the western bank of the Volga, over 1,208 Stuka missions were flown in an effort to eliminate them.[2]:84
134
+
135
+ The Soviet bomber force, the Aviatsiya Dal'nego Deystviya (Long Range Aviation; ADD), having taken crippling losses over the past 18 months, was restricted to flying at night. The Soviets flew 11,317 night sorties over Stalingrad and the Don-bend sector between 17 July and 19 November. These raids caused little damage and were of nuisance value only.[2]:82[80]:265
136
+
137
+ On 8 November, substantial units from Luftflotte 4 were withdrawn to combat the Allied landings in North Africa. The German air arm found itself spread thinly across Europe, struggling to maintain its strength in the other southern sectors of the Soviet-German front.[Note 7]
138
+
139
+ As historian Chris Bellamy notes, the Germans paid a high strategic price for the aircraft sent into Stalingrad: the Luftwaffe was forced to divert much of its air strength away from the oil-rich Caucasus, which had been Hitler's original grand-strategic objective.[81]
140
+
141
+ The Royal Romanian Air Force was also involved in the Axis air operations at Stalingrad. Starting 23 October 1942, Romanian pilots flew a total of 4,000 sorties, during which they destroyed 61 Soviet aircraft. The Romanian Air Force lost 79 aircraft, most of them captured on the ground along with their airfields.[82]
142
+
143
+ After three months of slow advance, the Germans finally reached the river banks, capturing 90% of the ruined city and splitting the remaining Soviet forces into two narrow pockets. Ice floes on the Volga now prevented boats and tugs from supplying the Soviet defenders. Nevertheless, the fighting continued, especially on the slopes of Mamayev Kurgan and inside the factory area in the northern part of the city.[83] From 21 August to 20 November, the German 6th Army lost 60,548 men, including 12,782 killed, 45,545 wounded and 2,221 missing.[84]
144
+
145
+ Recognising that German troops were ill-prepared for offensive operations during the winter of 1942, and that most of them were redeployed elsewhere on the southern sector of the Eastern Front, the Stavka decided to conduct a number of offensive operations between 19 November 1942 and 2 February 1943. These operations opened the Winter Campaign of 1942–1943 (19 November 1942 – 3 March 1943), which involved some fifteen Armies operating on several fronts. According to Zhukov, "German operational blunders were aggravated by poor intelligence: they failed to spot preparations for the major counter-offensive near Stalingrad where there were 10 field, 1 tank and 4 air armies."[28]
146
+
147
+ During the siege, the German and allied Italian, Hungarian, and Romanian armies protecting Army Group B's north and south flanks had pressed their headquarters for support. The Hungarian 2nd Army was given the task of defending a 200 km (120 mi) section of the front north of Stalingrad between the Italian Army and Voronezh. This resulted in a very thin line, with some sectors where 1–2 km (0.62–1.24 mi) stretches were being defended by a single platoon. These forces were also lacking in effective anti-tank weapons. Zhukov states, "Compared with the Germans, the troops of the satellites were not so well armed, less experienced and less efficient, even in defence."[28]:95–96, 119, 122, 124
148
+
149
+ Because of the total focus on the city, the Axis forces had neglected for months to consolidate their positions along the natural defensive line of the Don River. The Soviet forces were allowed to retain bridgeheads on the right bank from which offensive operations could be quickly launched. These bridgeheads in retrospect presented a serious threat to Army Group B.[24]:915
150
+
151
+ Similarly, on the southern flank of the Stalingrad sector the front southwest of Kotelnikovo was held only by the Romanian 4th Army. Beyond that army, a single German division, the 16th Motorised Infantry, covered 400 km. Paulus had requested permission to "withdraw the 6th Army behind the Don," but was rejected. According to Paulus' comments to Adam, "There is still the order whereby no commander of an army group or an army has the right to relinquish a village, even a trench, without Hitler's consent."[33]:87–91, 95, 129
152
+
153
+ In autumn, the Soviet generals Georgy Zhukov and Aleksandr Vasilevsky, responsible for strategic planning in the Stalingrad area, concentrated forces in the steppes to the north and south of the city. The northern flank was defended by Hungarian and Romanian units, often in open positions on the steppes. The natural line of defence, the Don River, had never been properly established by the German side. The armies in the area were also poorly equipped in terms of anti-tank weapons. The plan was to punch through the overstretched and weakly defended German flanks and surround the German forces in the Stalingrad region.
154
+
155
+ During the preparations for the attack, Marshal Zhukov personally visited the front and noticing the poor organisation, insisted on a one-week delay in the start date of the planned attack.[41]:117 The operation was code-named "Uranus" and launched in conjunction with Operation Mars, which was directed at Army Group Center. The plan was similar to the one Zhukov had used to achieve victory at Khalkhin Gol three years before, where he had sprung a double envelopment and destroyed the 23rd Division of the Japanese army.[85]
156
+
157
+ On 19 November 1942, the Red Army launched Operation Uranus. The attacking Soviet units under the command of Gen. Nikolay Vatutin consisted of three complete armies, the 1st Guards Army, 5th Tank Army and 21st Army, including a total of 18 infantry divisions, eight tank brigades, two motorised brigades, six cavalry divisions and one anti-tank brigade. The preparations for the attack could be heard by the Romanians, who continued to push for reinforcements, only to be refused again. Thinly spread, deployed in exposed positions, outnumbered and poorly equipped, the Romanian 3rd Army, which held the northern flank of the German 6th Army, was overrun.
158
+
159
+ Behind the front lines, no preparations had been made to defend key points in the rear, such as Kalach. The response by the Wehrmacht was both chaotic and indecisive. Poor weather prevented effective air action against the Soviet offensive. Army Group B was in disarray and faced strong Soviet pressure across all its fronts. Hence it was ineffective in relieving the 6th Army.
160
+
161
+ On 20 November, a second Soviet offensive (two armies) was launched to the south of Stalingrad against points held by the Romanian 4th Army Corps. The Romanian forces, made up primarily of infantry, were overrun by large numbers of tanks. The Soviet forces raced west and met on 23 November at the town of Kalach, sealing the ring around Stalingrad.[24]:926 The link-up of the Soviet forces, not filmed at the time, was later re-enacted for a propaganda film which was shown worldwide.[citation needed].
162
+
163
+ The surrounded Axis personnel comprised 265,000 Germans, Romanians, Italians,[86][page needed] and Croatians. In addition, the German 6th Army included between 40,000 and 65,000 Hilfswillige (Hiwi), or "volunteer auxiliaries",[87][88] a term used for personnel recruited amongst Soviet POWs and civilians from areas under occupation. Hiwis often proved to be reliable Axis personnel in rear areas and were used for supporting roles, but as their numbers increased, they also served in some front-line units.[88] German personnel in the pocket numbered about 210,000, according to strength breakdowns of the 20 field divisions (average size 9,000) and 100 battalion-sized units of the Sixth Army on 19 November 1942. Inside the pocket (German: Kessel, literally "cauldron"), there were also around 10,000 Soviet civilians and several thousand Soviet soldiers the Germans had taken captive during the battle. Not all of the 6th Army was trapped: 50,000 soldiers were brushed aside outside the pocket. These belonged mostly to the other two divisions of the 6th Army between the Italian and Romanian Armies: the 62nd and 298th Infantry Divisions. Of the 210,000 Germans, 10,000 remained to fight on, 105,000 surrendered, 35,000 left by air and the remaining 60,000 died.
164
+
165
+ Even with the desperate situation of the Sixth Army, Army Group A continued their invasion of the Caucasus further south from 19 November until 19 December. By 19 December the German army was in full retreat out of the Caucasus, while using the Sixth Army to tie down the Soviet forces. Hence Army Group A was never used to help relieve the Sixth Army.
166
+
167
+ Army Group Don was formed under Field Marshal von Manstein. Under his command were the twenty German and two Romanian divisions encircled at Stalingrad, Adam's battle groups formed along the Chir River and on the Don bridgehead, plus the remains of the Romanian 3rd Army.[33]:107, 113
168
+
169
+ The Red Army units immediately formed two defensive fronts: a circumvallation facing inward and a contravallation facing outward. Field Marshal Erich von Manstein advised Hitler not to order the 6th Army to break out, stating that he could break through the Soviet lines and relieve the besieged 6th Army.[89] The American historians Williamson Murray and Alan Millet wrote that it was Manstein's message to Hitler on 24 November advising him that the 6th Army should not break out, along with Göring's statements that the Luftwaffe could supply Stalingrad that "... sealed the fate of the Sixth Army."[33]:133[90] After 1945, Manstein claimed that he told Hitler that the 6th Army must break out.[91] The American historian Gerhard Weinberg wrote that Manstein distorted his record on the matter.[92] Manstein was tasked to conduct a relief operation, named Operation Winter Storm (Unternehmen Wintergewitter) against Stalingrad, which he thought was feasible if the 6th Army was temporarily supplied through the air.[93][94]
170
+
171
+ Adolf Hitler had declared in a public speech (in the Berlin Sportpalast) on 30 September 1942 that the German army would never leave the city. At a meeting shortly after the Soviet encirclement, German army chiefs pushed for an immediate breakout to a new line on the west of the Don, but Hitler was at his Bavarian retreat of Obersalzberg in Berchtesgaden with the head of the Luftwaffe, Hermann Göring. When asked by Hitler, Göring replied, after being convinced by Hans Jeschonnek,[5]:234 that the Luftwaffe could supply the 6th Army with an "air bridge." This would allow the Germans in the city to fight on temporarily while a relief force was assembled.[24]:926 A similar plan had been used a year earlier at the Demyansk Pocket, albeit on a much smaller scale: a corps at Demyansk rather than an entire army.[33]:132
172
+
173
+ The commander of Luftflotte 4, Wolfram von Richthofen, tried to get this decision overturned. The forces under the 6th Army were almost twice as large as a regular German army unit, and a corps of the 4th Panzer Army was also trapped in the pocket. Due to a limited number of available aircraft and having only one available airfield, at Pitomnik, the Luftwaffe could only deliver 105 tonnes of supplies per day, only a fraction of the minimum 750 tonnes that both Paulus and Zeitzler estimated the 6th Army needed.[5][Note 8] To supplement the limited number of Junkers Ju 52 transports, the Germans pressed other aircraft into the role, such as the Heinkel He 177 bomber (some bombers performed adequately; the Heinkel He 111 proved to be quite capable and was much faster than the Ju 52). General Richthofen informed Manstein on 27 November of the small transport capacity of the Luftwaffe and the impossibility of supplying 300 tons a day by air. Manstein now saw the enormous technical difficulties of supply by air on this scale. The next day he made a six-page situation report to the general staff. Based on the information of the expert Richthofen, he declared that, contrary to the example of the Demyansk pocket, permanent supply by air would be impossible. If only a narrow link could be established to the Sixth Army, he proposed that this be used to pull it out of the encirclement, and said that the Luftwaffe should deliver only enough ammunition and fuel for a breakout attempt instead of general supplies. He acknowledged the heavy moral sacrifice that giving up Stalingrad would mean, but this would be made easier to bear by conserving the combat power of the Sixth Army and regaining the initiative.[95] He ignored the limited mobility of the army and the difficulties of disengaging the Soviets. Hitler reiterated that the Sixth Army would stay at Stalingrad and that the air bridge would supply it until the encirclement was broken by a new German offensive.
174
+
175
+ Supplying the 270,000 men trapped in the "cauldron" required 700 tons of supplies a day. That would mean 350 Ju 52 flights a day into Pitomnik. At a minimum, 500 tons were required. However, according to Adam, "On not one single day have the minimal essential number of tons of supplies been flown in."[33]:119, 127, 131, 134 The Luftwaffe was able to deliver an average of 85 tonnes of supplies per day out of an air transport capacity of 106 tonnes per day. The most successful day, 19 December, the Luftwaffe delivered 262 tonnes of supplies in 154 flights. The outcome of the airlift was the Luftwaffe's failure to provide its transport units with the tools they needed to maintain an adequate count of operational aircraft – tools that included airfield facilities, supplies, manpower, and even aircraft suited to the prevailing conditions. These factors, taken together, prevented the Luftwaffe from effectively employing the full potential of its transport forces, ensuring that they were unable to deliver the quantity of supplies needed to sustain the 6th Army.[96]
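+ The sortie figure follows from the daily tonnage requirement; a minimal worked check, assuming the roughly 2-tonne payload per Ju 52 flight that the quoted numbers imply (the per-flight payload is an assumption, not stated in the text):
+
+ $\frac{700\ \text{t/day}}{2\ \text{t/flight}} = 350\ \text{flights/day}$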
176
+
177
+ In the early parts of the operation, fuel was shipped at a higher priority than food and ammunition because of a belief that there would be a breakout from the city.[30]:153 Transport aircraft also evacuated technical specialists and sick or wounded personnel from the besieged enclave. Sources differ on the number flown out: at least 25,000 to at most 35,000.
178
+
179
+ Initially, supply flights came in from the field at Tatsinskaya,[33]:159 called 'Tazi' by the German pilots. On 23 December, the Soviet 24th Tank Corps, commanded by Major-General Vasily Mikhaylovich Badanov, reached nearby Skassirskaya and in the early morning of 24 December, the tanks reached Tatsinskaya. Without any soldiers to defend the airfield, it was abandoned under heavy fire; in a little under an hour, 108 Ju 52s and 16 Ju 86s took off for Novocherkassk – leaving 72 Ju 52s and many other aircraft burning on the ground. A new base was established some 300 km (190 mi) from Stalingrad at Salsk, the additional distance would become another obstacle to the resupply efforts. Salsk was abandoned in turn by mid-January for a rough facility at Zverevo, near Shakhty. The field at Zverevo was attacked repeatedly on 18 January and a further 50 Ju 52s were destroyed. Winter weather conditions, technical failures, heavy Soviet anti-aircraft fire and fighter interceptions eventually led to the loss of 488 German aircraft.
180
+
181
+ In spite of the failure of the German offensive to reach the 6th Army, the air supply operation continued under ever more difficult circumstances. The 6th Army slowly starved. General Zeitzler, moved by their plight, began to limit himself to their slim rations at meal times. After a few weeks on such a diet, he had "visibly lost weight", according to Albert Speer, and Hitler "commanded Zeitzler to resume at once taking sufficient nourishment."[97]
182
+
183
+ The toll on the Transportgruppen was heavy. 160 aircraft were destroyed and 328 were heavily damaged (beyond repair). Some 266 Junkers Ju 52s were destroyed; one-third of the fleet's strength on the Eastern Front. The He 111 gruppen lost 165 aircraft in transport operations. Other losses included 42 Ju 86s, 9 Fw 200 Condors, 5 He 177 bombers and 1 Ju 290. The Luftwaffe also lost close to 1,000 highly experienced bomber crew personnel.[5]:310 So heavy were the Luftwaffe's losses that four of Luftflotte 4's transport units (KGrzbV 700, KGrzbV 900, I./KGrzbV 1 and II./KGzbV 1) were "formally dissolved."[2]:122
184
+
185
+ Manstein's plan to rescue the Sixth Army, Operation Winter Storm, was developed in full consultation with Führer headquarters. It aimed to break through to the Sixth Army and establish a corridor to keep it supplied and reinforced, so that, according to Hitler's order, it could maintain its 'cornerstone' position on the Volga, 'with regard to operations in 1943'. Manstein, however, who knew that the Sixth Army could not survive the winter there, instructed his headquarters to draw up a further plan in the event of Hitler's seeing sense. This would include the subsequent breakout of the Sixth Army, in the event of a successful first phase, and its physical reincorporation in Army Group Don. This second plan was given the name Operation Thunderclap.
+
+ Winter Storm, as Zhukov had predicted, was originally planned as a two-pronged attack. One thrust would come from the area of Kotelnikovo, well to the south, and around a hundred miles from the Sixth Army. The other would start from the Chir front west of the Don, which was little more than forty miles from the edge of the Kessel, but the continuing attacks of Romanenko's 5th Tank Army against the German detachments along the river Chir ruled out that start-line. This left only the LVII Panzer Corps round Kotelnikovo, supported by the rest of Hoth's very mixed Fourth Panzer Army, to relieve Paulus's trapped divisions.
+
+ The LVII Panzer Corps, commanded by General Friedrich Kirchner, had been weak at first. It consisted of two Romanian cavalry divisions and the 23rd Panzer Division, which mustered no more than thirty serviceable tanks. The 6th Panzer Division, arriving from France, was a vastly more powerful formation, but its members hardly received an encouraging impression. The Austrian divisional commander, General Erhard Raus, was summoned to Manstein's royal carriage in Kharkov station on 24 November, where the field marshal briefed him. 'He described the situation in very sombre terms,' recorded Raus. Three days later, when the first trainload of Raus's division steamed into Kotelnikovo station to unload, his troops were greeted by 'a hail of shells' from Soviet batteries. 'As quick as lightning, the panzer grenadiers jumped from their wagons. But already the enemy was attacking the station with their battle-cries of "Urrah!"'
+
+ By 18 December, the German Army had pushed to within 48 km (30 mi) of Sixth Army's positions. However, the predictable nature of the relief operation brought significant risk for all German forces in the area. The starving encircled forces at Stalingrad made no attempt to break out or link up with Manstein's advance. Some German officers requested that Paulus defy Hitler's orders to stand fast and instead attempt to break out of the Stalingrad pocket. Paulus refused, concerned about the Red Army attacks on the flank of Army Group Don and Army Group B in their advance on Rostov-on-Don, the warning that "an early abandonment" of Stalingrad "would result in the destruction of Army Group A in the Caucasus", and the fact that his 6th Army tanks only had fuel for a 30 km advance towards Hoth's spearhead, a futile effort if they did not receive assurance of resupply by air. Of his questions to Army Group Don, Paulus was told, "Wait, implement Operation 'Thunderclap' only on explicit orders!", Operation Thunderclap being the code word initiating the breakout.[33]:132–33, 138–143, 150, 155, 165
186
+
187
+ On 16 December, the Soviets launched Operation Little Saturn, which attempted to punch through the Axis army (mainly Italians) on the Don and take Rostov-on-Don. The Germans set up a "mobile defence" of small units that were to hold towns until supporting armour arrived. From the Soviet bridgehead at Mamon, 15 divisions – supported by at least 100 tanks – attacked the Italian Cosseria and Ravenna Divisions, and although outnumbered 9 to 1, the Italians initially fought well, with the Germans praising the quality of the Italian defenders,[98] but on 19 December, with the Italian lines disintegrating, ARMIR headquarters ordered the battered divisions to withdraw to new lines.[99]
188
+
189
+ The fighting forced a total re-evaluation of the German situation. Sensing that this was the last chance for a breakout, Manstein pleaded with Hitler on 18 December, but Hitler refused. Paulus himself also doubted the feasibility of such a breakout. The attempt to break through to Stalingrad was abandoned and Army Group A was ordered to pull back from the Caucasus. The 6th Army was now beyond all hope of German relief. While a motorised breakout might have been possible in the first few weeks, the 6th Army now had insufficient fuel and the German soldiers would have faced great difficulty breaking through the Soviet lines on foot in harsh winter conditions. But in its defensive position on the Volga, the 6th Army continued to tie down a significant number of Soviet armies.[33]:159, 166–67
190
+
191
+ On 23 December, the attempt to relieve Stalingrad was abandoned and Manstein's forces switched over to the defensive to deal with new Soviet offensives.[33]:153 As Zhukov states, "The military and political leadership of Nazi Germany sought not to relieve them, but to get them to fight on for as long as possible so as to tie up the Soviet forces. The aim was to win as much time as possible to withdraw forces from the Caucasus (Army Group A) and to rush troops from other Fronts to form a new front that would be able in some measure to check our counter-offensive."[28]:137
192
+
193
+ On 7 January 1943, the Red Army High Command sent three envoys, while aircraft and loudspeakers simultaneously announced the terms of capitulation. The letter was signed by Colonel-General of Artillery Voronov and the commander-in-chief of the Don Front, Lieutenant-General Rokossovsky. A low-level Soviet envoy party (comprising Major Aleksandr Smyslov, Captain Nikolay Dyatlenko and a trumpeter) carried an offer to Paulus: if he surrendered within 24 hours, he would receive a guarantee of safety for all prisoners, medical care for the sick and wounded, permission for prisoners to keep their personal belongings, "normal" food rations, and repatriation to any country they wished after the war; but Paulus – ordered not to surrender by Hitler – did not respond.[100]:283 The German High Command informed Paulus, "Every day that the army holds out longer helps the whole front and draws away the Russian divisions from it."[33]:166, 168–69
+
+ The Germans inside the pocket retreated from the suburbs of Stalingrad to the city itself. The loss of the two airfields, at Pitomnik on 16 January 1943 and Gumrak on the night of 21/22 January,[101] meant an end to air supplies and to the evacuation of the wounded.[31]:98 The third and last serviceable runway was at the Stalingradskaya flight school, which reportedly had the last landings and takeoffs on 23 January.[45] After 23 January, there were no more reported landings, just intermittent air drops of ammunition and food until the end.[33]:183, 185, 189
+
+ The Germans were now not only starving but running out of ammunition. Nevertheless, they continued to resist, in part because they believed the Soviets would execute any who surrendered. In particular, the so-called HiWis, Soviet citizens fighting for the Germans, had no illusions about their fate if captured. The Soviets were initially surprised by the number of Germans they had trapped, and had to reinforce their encircling troops. Bloody urban warfare began again in Stalingrad, but this time it was the Germans who were pushed back to the banks of the Volga. The Germans adopted a simple defence of fixing wire nets over all windows to protect themselves from grenades. The Soviets responded by fixing fish hooks to the grenades so they stuck to the nets when thrown. The Germans had almost no usable tanks in the city; those that still functioned could, at best, serve as makeshift pillboxes. The Soviets did not bother employing tanks in areas where the urban destruction restricted their mobility.
+
+ On 22 January, Paulus requested that he be granted permission to surrender. Hitler rejected it on a point of honour. He telegraphed the 6th Army later that day, claiming that it had made a historic contribution to the greatest struggle in German history and that it should stand fast "to the last soldier and the last bullet." Hitler told Goebbels that the plight of the 6th Army was a "heroic drama of German history."[102] On 24 January, in his radio report to Hitler, Paulus reported "18,000 wounded without the slightest aid of bandages and medicines."[33]:193
+
+ On 26 January 1943, the German forces inside Stalingrad were split into two pockets north and south of Mamayev-Kurgan. The northern pocket, consisting of the VIIIth Corps, under General Walter Heitz, and the XIth Corps, was now cut off from telephone communication with Paulus in the southern pocket. Now "each part of the cauldron came personally under Hitler."[33]:201, 203 On 28 January, the cauldron was split into three parts: the northern cauldron with the XIth Corps, the central with the VIIIth and LIst Corps, and the southern with the XIVth Panzer Corps and IVth Corps "without units". The number of sick and wounded reached 40,000 to 50,000.[33]:203
+
+ On 30 January 1943, the 10th anniversary of Hitler's coming to power, Goebbels read out a proclamation that included the sentence: "The heroic struggle of our soldiers on the Volga should be a warning for everybody to do the utmost for the struggle for Germany's freedom and the future of our people, and thus in a wider sense for the maintenance of our entire continent."[103] Hitler promoted Paulus to the rank of Generalfeldmarschall. No German field marshal had ever surrendered, and the implication was clear: if Paulus surrendered, he would shame himself and become the highest-ranking German officer ever to be captured. Hitler believed that Paulus would either fight to the last man or commit suicide.[33]:212[104]
+
+ On the next day, the southern pocket in Stalingrad collapsed. Soviet forces reached the entrance to the German headquarters in the ruined GUM department store. General Schmidt negotiated the surrender of the headquarters while Paulus, unaware, remained in another room.[33]:207–08, 212–15 When interrogated by the Soviets, Paulus claimed that he had not surrendered and said that he had been taken by surprise. He denied that he was the commander of the remaining northern pocket in Stalingrad and refused to issue an order in his name for it to surrender.[105][106]
+
+ There was no cameraman present to film the capture of Paulus, but one, Roman Karmen, was able to record his first interrogation that same day, at the headquarters of Shumilov's 64th Army, and a few hours later at Rokossovsky's Don Front HQ.[107]
+
+ The central pocket, under the command of Heitz, surrendered the same day, while the northern pocket, under the command of Karl Strecker, held out for two more days.[33]:215 When Strecker finally surrendered, he and his Chief of Staff, Helmuth Groscurth, drafted the final signal sent from Stalingrad, purposely omitting the customary exclamation to Hitler and replacing it with "Long live Germany!"[108]
+
+ Four Soviet armies were deployed against the remaining northern pocket. At four in the morning on 2 February, General Strecker was informed that one of his own officers had gone to the Soviets to negotiate surrender terms. Seeing no point in continuing, he sent a radio message saying that his command had done its duty and fought to the last man. He then surrendered. Around 91,000 exhausted, ill, wounded, and starving prisoners were taken, including 3,000 Romanians (the survivors of the 20th Infantry Division, 1st Cavalry Division and "Col. Voicu" Detachment).[109] The prisoners included 22 generals. Hitler was furious and confided that Paulus "could have freed himself from all sorrow and ascended into eternity and national immortality, but he prefers to go to Moscow."[110]
+
+ The calculation of casualties depends on what scope is given to the Battle of Stalingrad. The scope can vary from the fighting in the city and suburbs to the inclusion of almost all fighting on the southern wing of the Soviet–German front from the spring of 1942 to the end of the fighting in the city in the winter of 1943. Scholars have produced different estimates depending on their definition of the scope of the battle – in essence, whether they count the city alone or the whole region. The Axis suffered 647,300–968,374 total casualties (killed, wounded or captured) among all branches of the German armed forces and its allies:
+
+ 235,000 German and allied troops in total, from all units, including Manstein's ill-fated relief force, were captured during the battle.[114]
+
+ The Germans lost 900 aircraft (including 274 transports and 165 bombers used as transports), 500 tanks and 6,000 artillery pieces.[2]:122–23 According to a contemporary Soviet report, 5,762 guns, 1,312 mortars, 12,701 heavy machine guns, 156,987 rifles, 80,438 sub-machine guns, 10,722 trucks, 744 aircraft, 1,666 tanks, 261 other armoured vehicles, 571 half-tracks and 10,679 motorcycles were captured by the Soviets.[115] In addition, an unknown amount of Hungarian, Italian, and Romanian materiel was lost.
+
+ The situation of the Romanian tanks is known, however. Before Operation Uranus, the 1st Romanian Armoured Division consisted of 121 R-2 light tanks and 19 German-produced tanks (Panzer III and IV). All 19 of the German tanks were lost, as were 81 of the R-2 light tanks – a total of 100 vehicles. Only 27 of the R-2s were lost in combat, however; the remaining 54 were abandoned after breaking down or running out of fuel. Ultimately, Romanian armoured warfare proved to be a tactical success, as the Romanians destroyed 127 Soviet tanks for the cost of their 100 lost units. Romanian forces destroyed 62 Soviet tanks on 20 November for the cost of 25 tanks of their own, followed by 65 more Soviet tanks on 22 November, for the cost of 10 tanks of their own.[116] More Soviet tanks were destroyed as they overran the Romanian airfields. This was accomplished by Romanian Vickers/Reșița 75 mm anti-aircraft guns, which proved effective against Soviet armour. The battle for the German-Romanian airfield at Karpova lasted two days, with Romanian gunners destroying numerous Soviet tanks. Later, when the Tatsinskaya Airfield was also captured, the Romanian 75 mm guns destroyed five more Soviet tanks.[117]
+
+ The USSR, according to archival figures, suffered 1,129,619 total casualties: 478,741 personnel killed or missing and 650,878 wounded or sick. The USSR also lost 4,341 tanks destroyed or damaged, 15,728 artillery pieces and 2,769 combat aircraft.[16][118] 955 Soviet civilians died in Stalingrad and its suburbs from the aerial bombing by Luftflotte 4 as the German 4th Panzer and 6th Armies approached the city.[2]:73
+
+ The losses of transport aircraft were especially serious, as they destroyed the Luftwaffe's capacity to supply the trapped 6th Army. The destruction of 72 aircraft when the airfield at Tatsinskaya was overrun meant the loss of about 10 percent of the Luftwaffe transport fleet.[119]
+
+ These losses amounted to about 50 percent of the aircraft committed; the Luftwaffe training programme was halted, and sorties in other theatres of war were significantly reduced to save fuel for use at Stalingrad.
+
+ The German public was not officially told of the impending disaster until the end of January 1943, though positive media reports had stopped in the weeks before the announcement.[120] Stalingrad marked the first time that the Nazi government publicly acknowledged a failure in its war effort. On 31 January, regular programmes on German state radio were replaced by a broadcast of the sombre Adagio movement from Anton Bruckner's Seventh Symphony, followed by the announcement of the defeat at Stalingrad.[120] On 18 February, Minister of Propaganda Joseph Goebbels gave the famous Sportpalast speech in Berlin, encouraging the Germans to accept a total war that would claim all resources and efforts from the entire population.
+
+ Based on Soviet records, over 10,000 German soldiers continued to resist in isolated groups within the city for the next month.[citation needed] Some have presumed that they were motivated by a belief that fighting on was better than a slow death in Soviet captivity. Brown University historian Omer Bartov claims they were motivated by National Socialism. He studied 11,237 letters sent by soldiers inside Stalingrad between 20 December 1942 and 16 January 1943 to their families in Germany. Almost every letter expressed belief in Germany's ultimate victory and the writers' willingness to fight and die at Stalingrad to achieve it.[121] Bartov reported that a great many of the soldiers were well aware that they would not be able to escape from Stalingrad, but in their letters to their families boasted that they were proud to "sacrifice themselves for the Führer".[121]
+
+ The remaining forces continued to resist, hiding in cellars and sewers, but by early March 1943 the last small and isolated pockets of resistance had surrendered. A remarkable NKVD report from March 1943, preserved among Soviet intelligence documents, shows the tenacity of some of these German groups:
+
+ The mopping-up of counter-revolutionary elements in the city of Stalingrad proceeded. The German soldiers – who had hidden themselves in huts and trenches – offered armed resistance after combat actions had already ended. This armed resistance continued until 15 February and in a few areas until 20 February. Most of the armed groups were liquidated by March ... During this period of armed conflict with the Germans, the brigade's units killed 2,418 soldiers and officers and captured 8,646 soldiers and officers, escorting them to POW camps and handing them over.
+
+ The operational report of the Don Front's staff, issued at 22:00 on 5 February 1943, said:
+
+ The 64th Army was putting itself in order, being in previously occupied regions. Location of army's units is as it was previously. In the region of location of the 38th Motorised Rifle Brigade in a basement eighteen armed SS-men (sic) were found, who refused to surrender, the Germans found were destroyed.[122]
+
+ The condition of the troops that surrendered was pitiful. British war correspondent Alexander Werth described the following scene in his book Russia at War, based on his first-hand account of a visit to Stalingrad on 3–5 February 1943:
+
+ We [...] went into the yard of the large burnt out building of the Red Army House; and here one realised particularly clearly what the last days of Stalingrad had been to so many of the Germans. In the porch lay the skeleton of a horse, with only a few scraps of meat still clinging to its ribs. Then we came into the yard. Here lay more horses' skeletons, and to the right, there was an enormous horrible cesspool – fortunately, frozen solid. And then, suddenly, at the far end of the yard I caught sight of a human figure. He had been crouching over another cesspool, and now, noticing us, he was hastily pulling up his pants, and then he slunk away into the door of the basement. But as he passed, I caught a glimpse of the wretch's face – with its mixture of suffering and idiot-like incomprehension. For a moment, I wished that the whole of Germany were there to see it. The man was probably already dying. In that basement [...] there were still two hundred Germans – dying of hunger and frostbite. "We haven't had time to deal with them yet," one of the Russians said. "They'll be taken away tomorrow, I suppose." And, at the far end of the yard, beside the other cesspool, behind a low stone wall, the yellow corpses of skinny Germans were piled up – men who had died in that basement – about a dozen wax-like dummies. We did not go into the basement itself – what was the use? There was nothing we could do for them.[123]
+
+ Out of the nearly 91,000 German prisoners captured in Stalingrad, only about 5,000 returned.[124] Weakened by disease, starvation and lack of medical care during the encirclement, they were sent on foot marches to prisoner camps and later to labour camps all over the Soviet Union. Some 35,000 were eventually sent on transports, of which 17,000 did not survive. Most died of wounds, disease (particularly typhus), cold, overwork, mistreatment and malnutrition. Some were kept in the city to help rebuild it.
+
+ A handful of senior officers were taken to Moscow and used for propaganda purposes, and some of them joined the National Committee for a Free Germany. Some, including Paulus, signed anti-Hitler statements that were broadcast to German troops. Paulus testified for the prosecution during the Nuremberg Trials and assured families in Germany that those soldiers taken prisoner at Stalingrad were safe.[40]:401 He remained in the Soviet Union until 1952, then moved to Dresden in East Germany, where he spent the remainder of his days defending his actions at Stalingrad and was quoted as saying that Communism was the best hope for postwar Europe.[40]:280 General Walther von Seydlitz-Kurzbach offered to raise an anti-Hitler army from the Stalingrad survivors, but the Soviets did not accept. It was not until 1955 that the last of the 5,000–6,000 survivors were repatriated (to West Germany) after a plea to the Politburo by Konrad Adenauer.
+
+ Stalingrad has been described as the biggest defeat in the history of the German Army.[125][126] It is often identified as the turning point on the Eastern Front, in the war against Germany overall, and in the entire Second World War.[28]:142[127][128] The Red Army had the initiative, and the Wehrmacht was in retreat. A year of German gains during Case Blue had been wiped out. Germany's Sixth Army had ceased to exist, and the forces of Germany's European allies, except Finland, had been shattered.[129] In a speech on 9 November 1944, Hitler himself blamed Stalingrad for Germany's impending doom.[130]
+
+ The destruction of an entire army – with nearly 1 million Axis soldiers killed, captured or wounded, the largest such toll of the war – and the frustration of Germany's grand strategy made the battle a watershed moment.[131] At the time, the global significance of the battle was not in doubt. Writing in his diary on 1 January 1943, British General Alan Brooke, Chief of the Imperial General Staff, reflected on the change in the position from a year before:
+
+ I felt Russia could never hold, Caucasus was bound to be penetrated, and Abadan (our Achilles heel) would be captured with the consequent collapse of Middle East, India, etc. After Russia's defeat how were we to handle the German land and air forces liberated? England would be again bombarded, threat of invasion revived... And now! We start 1943 under conditions I would never have dared to hope. Russia has held, Egypt for the present is safe. There is a hope of clearing North Africa of Germans in the near future... Russia is scoring wonderful successes in Southern Russia.[131]
+
+ At this point, the British had won the Battle of El Alamein in November 1942. However, there were only about 50,000 German soldiers at El Alamein in Egypt, while at Stalingrad 300,000 to 400,000 Germans had been lost.[131]
+
+ Regardless of the strategic implications, there is little doubt about Stalingrad's symbolism. Germany's defeat shattered its reputation for invincibility and dealt a devastating blow to German morale. On 30 January 1943, the tenth anniversary of his coming to power, Hitler chose not to speak. Joseph Goebbels read the text of his speech for him on the radio. The speech contained an oblique reference to the battle, which suggested that Germany was now in a defensive war. The public mood was sullen, depressed, fearful, and war-weary. Germany was looking defeat in the face.[132]
+
+ The reverse was the case on the Soviet side. There was an overwhelming surge in confidence and belief in victory. A common saying was: "You cannot stop an army which has done Stalingrad." Stalin was feted as the hero of the hour and made a Marshal of the Soviet Union.[133]
+
+ The news of the battle echoed round the world, with many people now believing that Hitler's defeat was inevitable.[134] The Turkish Consul in Moscow predicted that "the lands which the Germans have destined for their living space will become their dying space".[135] Britain's conservative The Daily Telegraph proclaimed that the victory had saved European civilisation.[135] The country celebrated "Red Army Day" on 23 February 1943. A ceremonial Sword of Stalingrad was forged at the command of King George VI. After being put on public display in Britain, it was presented to Stalin by Winston Churchill at the Tehran Conference later in 1943.[133] Soviet propaganda spared no effort and wasted no time in capitalising on the triumph, impressing a global audience. The prestige of Stalin, the Soviet Union, and the worldwide Communist movement was immense, and their political position was greatly enhanced.[136]
+
+ In recognition of the determination of its defenders, Stalingrad was awarded the title Hero City in 1945. A colossal monument called The Motherland Calls was erected in 1967 on Mamayev Kurgan, the hill overlooking the city where bones and rusty metal splinters can still be found.[137] The statue forms part of a war memorial complex which includes the ruins of the Grain Silo and Pavlov's House. On 2 February 2013 Volgograd hosted a military parade and other events to commemorate the 70th anniversary of the final victory.[138][139] Since then, military parades have always commemorated the victory in the city.
+
+ The events of the Battle of Stalingrad have been covered in numerous media works of British, American, German, and Russian origin,[140] owing to the battle's significance as a turning point in the Second World War and to the loss of life associated with it. The term Stalingrad has become almost synonymous with large-scale urban battles with high casualties on both sides.[141][142][143]
en/5870.html.txt ADDED
@@ -0,0 +1,157 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
+
+
+
+
+ Marvel Comics is the brand name and primary imprint of Marvel Worldwide Inc., formerly Marvel Publishing, Inc. and Marvel Comics Group, a publisher of American comic books and related media. In 2009, The Walt Disney Company acquired Marvel Entertainment, Marvel Worldwide's parent company.
+
+ Marvel was started in 1939 by Martin Goodman under a number of corporations and imprints, now known collectively as Timely Comics,[2] and by 1951 had generally become known as Atlas Comics. The Marvel era began in 1961, the year that the company launched The Fantastic Four and other superhero titles created by Stan Lee, Jack Kirby, Steve Ditko and many others. The Marvel brand, which had been used over the years, was solidified as the company's primary brand.
+
+ Marvel counts among its characters such well-known superheroes as Spider-Man, Iron Man, the Hulk, Thor, Captain America, Ant-Man, the Wasp, Black Widow, Wolverine, Captain Marvel, Black Panther, Doctor Strange, Ghost Rider, Blade, Daredevil, the Punisher and Deadpool. Superhero teams exist such as the Avengers, the X-Men, the Fantastic Four and the Guardians of the Galaxy, as well as supervillains including Doctor Doom, Magneto, Thanos, Loki, Green Goblin, Kingpin, Red Skull, Ultron, the Mandarin, MODOK, Doctor Octopus, Kang, Dormammu, Annihilus and Galactus. Most of Marvel's fictional characters operate in a single reality known as the Marvel Universe, with most locations mirroring real-life places; many major characters are based in New York City.[3] Additionally, Marvel has published several licensed properties from other companies, including Star Wars comics, published from 1977 to 1986 and again since 2015.
+
+ Pulp-magazine publisher Martin Goodman created the company later known as Marvel Comics under the name Timely Publications in 1939.[4][5] Goodman, who had started with a Western pulp in 1933, was expanding into the emerging—and by then already highly popular—new medium of comic books. Launching his new line from his existing company's offices at 330 West 42nd Street, New York City, he officially held the titles of editor, managing editor, and business manager, with Abraham Goodman (Martin's brother)[6] officially listed as publisher.[5]
+
+ Timely's first publication, Marvel Comics #1 (cover dated Oct. 1939), included the first appearance of Carl Burgos' android superhero the Human Torch, and the first appearances of Bill Everett's anti-hero Namor the Sub-Mariner,[7] among other features.[4] The issue was a great success; it and a second printing the following month sold a combined nearly 900,000 copies.[8] While its contents came from an outside packager, Funnies, Inc.,[4] Timely had its own staff in place by the following year. The company's first true editor, writer-artist Joe Simon, teamed with artist Jack Kirby to create one of the first patriotically themed superheroes,[9] Captain America, in Captain America Comics #1 (March 1941). It, too, proved a hit, with sales of nearly one million.[8] Goodman formed Timely Comics, Inc., beginning with comics cover-dated April 1941 or Spring 1941.[2][10]
+
+ While no other Timely character would achieve the success of these three characters, some notable heroes—many of which continue to appear in modern-day retcon appearances and flashbacks—include the Whizzer, Miss America, the Destroyer, the original Vision, and the Angel. Timely also published one of humor cartoonist Basil Wolverton's best-known features, "Powerhouse Pepper",[11][12] as well as a line of children's funny-animal comics featuring characters like Super Rabbit and the duo Ziggy Pig and Silly Seal.
+
+ Goodman hired his wife's cousin,[13] Stanley Lieber, as a general office assistant in 1939.[14] When editor Simon left the company in late 1941,[15] Goodman made Lieber—by then writing pseudonymously as "Stan Lee"—interim editor of the comics line, a position Lee kept for decades except for three years during his military service in World War II. Lee wrote extensively for Timely, contributing to a number of different titles.
+
+ Goodman's business strategy involved having his various magazines and comic books published by a number of corporations all operating out of the same office and with the same staff.[2] One of these shell companies through which Timely Comics was published was named Marvel Comics by at least Marvel Mystery Comics #55 (May 1944). Additionally, some comics' covers, such as All Surprise Comics #12 (Winter 1946–47), were labeled "A Marvel Magazine" many years before Goodman would formally adopt the name in 1961.[16]
+
+ The post-war American comic market saw superheroes falling out of fashion.[17] Goodman's comic book line dropped them for the most part and expanded into a wider variety of genres than even Timely had published, featuring horror, Westerns, humor, funny animal, men's adventure-drama, giant monster, crime, and war comics, and later adding jungle books, romance titles, espionage, and even medieval adventure, Bible stories and sports.
+
+ Goodman began using the globe logo of the Atlas News Company, the newsstand-distribution company he owned,[18] on comics cover-dated November 1951 even though another company, Kable News, continued to distribute his comics through the August 1952 issues.[19] This globe branding united a line put out by the same publisher, staff and freelancers through 59 shell companies, from Animirth Comics to Zenith Publications.[20]
+
+ Atlas, rather than innovate, took a proven route of following popular trends in television and movies—Westerns and war dramas prevailing for a time, drive-in movie monsters another time—and even other comic books, particularly the EC horror line.[21] Atlas also published a plethora of children's and teen humor titles, including Dan DeCarlo's Homer the Happy Ghost (similar to Casper the Friendly Ghost) and Homer Hooper (à la Archie Andrews). Atlas unsuccessfully attempted to revive superheroes from late 1953 to mid-1954, with the Human Torch (art by Syd Shores and Dick Ayers, variously), the Sub-Mariner (drawn and most stories written by Bill Everett), and Captain America (writer Stan Lee, artist John Romita Sr.). Atlas did not achieve any breakout hits and, according to Stan Lee, Atlas survived chiefly because it produced work quickly, cheaply, and at a passable quality.[22]
+
+ The first modern comic books under the Marvel Comics brand were the science-fiction anthology Journey into Mystery #69 and the teen-humor title Patsy Walker #95 (both cover dated June 1961), which each displayed an "MC" box on its cover.[23] Then, in the wake of DC Comics' success in reviving superheroes in the late 1950s and early 1960s, particularly with the Flash, Green Lantern, Batman, Superman, Wonder Woman, Green Arrow and other members of the team the Justice League of America, Marvel followed suit.[n 1]
+
+ In 1961, writer-editor Stan Lee revolutionized superhero comics by introducing superheroes designed to appeal to older readers than the predominantly child audiences of the medium, thus ushering in what Marvel later called the Marvel Age of Comics.[24] Modern Marvel's first superhero team, the titular stars of The Fantastic Four #1 (Nov. 1961),[25] broke convention with other comic book archetypes of the time by squabbling, holding grudges both deep and petty, and eschewing anonymity or secret identities in favor of celebrity status. Subsequently, Marvel comics developed a reputation for focusing on characterization and adult issues to a greater extent than most superhero comics before them, a quality which the new generation of older readers appreciated.[26] This applied to The Amazing Spider-Man title in particular, which turned out to be Marvel's most successful book. Its young hero suffered from self-doubt and mundane problems like any other teenager, something with which many readers could identify.
+
+ Stan Lee and freelance artist and eventual co-plotter Jack Kirby's Fantastic Four originated in a Cold War culture that led their creators to revise the superhero conventions of previous eras to better reflect the psychological spirit of their age.[27] Eschewing such comic-book tropes as secret identities and even costumes at first, having a monster as one of the heroes, and having its characters bicker and complain in what was later called a "superheroes in the real world" approach, the series represented a change that proved to be a great success.[28]
+
+ Marvel often presented flawed superheroes, freaks, and misfits—unlike the perfect, handsome, athletic heroes found in previous traditional comic books. Some Marvel heroes looked like villains and monsters such as the Hulk and the Thing. This naturalistic approach even extended into topical politics.
+
+ Comics historian Mike Benton also noted:
+
+ In the world of [rival DC Comics'] Superman comic books, communism did not exist. Superman rarely crossed national borders or involved himself in political disputes.[29] From 1962 to 1965, there were more communists [in Marvel Comics] than on the subscription list of Pravda. Communist agents attack Ant-Man in his laboratory, red henchmen jump the Fantastic Four on the moon, and Viet Cong guerrillas take potshots at Iron Man.[30]
+
+ All these elements struck a chord with the older readers, including college-aged adults. In 1965, Spider-Man and the Hulk were both featured in Esquire magazine's list of 28 college campus heroes, alongside John F. Kennedy and Bob Dylan.[31] In 2009, writer Geoff Boucher reflected that,
+
+ Superman and DC Comics instantly seemed like boring old Pat Boone; Marvel felt like The Beatles and the British Invasion. It was Kirby's artwork with its tension and psychedelia that made it perfect for the times—or was it Lee's bravado and melodrama, which was somehow insecure and brash at the same time?[32]
+
+ In addition to Spider-Man and the Fantastic Four, Marvel began publishing further superhero titles featuring such heroes and antiheroes as the Hulk, Thor, Ant-Man, Iron Man, the X-Men, Daredevil, the Inhumans, Black Panther, Doctor Strange, Captain Marvel and the Silver Surfer, and such memorable antagonists as Doctor Doom, Magneto, Galactus, Loki, the Green Goblin, and Doctor Octopus, all existing in a shared reality known as the Marvel Universe, with locations that mirror real-life cities such as New York, Los Angeles and Chicago.
+
+ Marvel even lampooned itself and other comics companies in a parody comic, Not Brand Echh (a play on Marvel's dubbing of other companies as "Brand Echh", à la the then-common phrase "Brand X").[33]
+
+ In 1968, while selling 50 million comic books a year, company founder Goodman revised the constraining distribution arrangement with Independent News he had reached under duress during the Atlas years, allowing him now to release as many titles as demand warranted.[18] Late that year, he sold Marvel Comics and its parent company, Magazine Management, to the Perfect Film and Chemical Corporation, with Goodman remaining as publisher.[34] In 1969, Goodman finally ended his distribution deal with Independent by signing with Curtis Circulation Company.[18]
+
+ In 1971, the United States Department of Health, Education, and Welfare approached Marvel Comics editor-in-chief Stan Lee to do a comic book story about drug abuse. Lee agreed and wrote a three-part Spider-Man story portraying drug use as dangerous and unglamorous. However, the industry's self-censorship board, the Comics Code Authority, refused to approve the story because of the presence of narcotics, deeming the context of the story irrelevant. Lee, with Goodman's approval, published the story regardless in The Amazing Spider-Man #96–98 (May–July 1971), without the Comics Code seal. The market reacted well to the storyline, and the CCA subsequently revised the Code the same year.[35]
+
+ Goodman retired as publisher in 1972 and installed his son, Chip, as publisher.[36] Shortly thereafter, Lee succeeded him as publisher and also became Marvel's president[36] for a brief time.[37] During his time as president, he appointed his associate editor, prolific writer Roy Thomas, as editor-in-chief. Thomas added "Stan Lee Presents" to the opening page of each comic book.[36]
+
+ A series of new editors-in-chief oversaw the company during another slow time for the industry. Once again, Marvel attempted to diversify, and with the updating of the Comics Code published titles themed to horror (The Tomb of Dracula), martial arts (Shang-Chi: Master of Kung Fu), sword-and-sorcery (Conan the Barbarian in 1970,[38] Red Sonja), satire (Howard the Duck) and science fiction (2001: A Space Odyssey, "Killraven" in Amazing Adventures, Battlestar Galactica, Star Trek, and, late in the decade, the long-running Star Wars series). Some of these were published in larger-format black and white magazines, under its Curtis Magazines imprint.
+
+ Marvel was able to capitalize on its successful superhero comics of the previous decade by acquiring a new newsstand distributor and greatly expanding its comics line. Marvel pulled ahead of rival DC Comics in 1972, during a time when the price and format of the standard newsstand comic were in flux.[39] Goodman increased the price and size of Marvel's November 1971 cover-dated comics from 15 cents for 36 pages total to 25 cents for 52 pages. DC followed suit, but Marvel the following month dropped its comics to 20 cents for 36 pages, offering a lower-priced product with a higher distributor discount.[40]
+
+ In 1973, Perfect Film and Chemical renamed itself as Cadence Industries and renamed Magazine Management as Marvel Comics Group.[41] Goodman, now disconnected from Marvel, set up a new company called Seaboard Periodicals in 1974, reviving Marvel's old Atlas name for a new Atlas Comics line, but this lasted only a year and a half.[42]
+ In the mid-1970s a decline of the newsstand distribution network affected Marvel. Cult hits such as Howard the Duck fell victim to the distribution problems, with some titles reporting low sales when in fact the first specialty comic book stores resold them at a later date.[citation needed] But by the end of the decade, Marvel's fortunes were reviving, thanks to the rise of direct market distribution—selling through those same comics-specialty stores instead of newsstands.
+
+ Marvel ventured into audio in 1975 with a radio series and a record, both of which had Stan Lee as narrator. The radio series was Fantastic Four; the record was Spider-Man: Rock Reflections of a Superhero, a concept album for music fans.[43]
+
+ Marvel held its own comic book convention, Marvelcon '75, in spring 1975, and promised a Marvelcon '76. At the 1975 event, Stan Lee used a Fantastic Four panel discussion to announce that Jack Kirby, the artist co-creator of most of Marvel's signature characters, was returning to Marvel after having left in 1970 to work for rival DC Comics.[45] In October 1976, Marvel, which already licensed reprints in different countries, including the UK, created a superhero specifically for the British market. Captain Britain debuted exclusively in the UK, and later appeared in American comics.[46] During this time, Marvel and the Iowa-based Register and Tribune Syndicate launched a number of syndicated comic strips — The Amazing Spider-Man, Howard the Duck, Conan the Barbarian, and The Incredible Hulk. None of the strips lasted past 1982, except for The Amazing Spider-Man, which is still being published.
+
+ In 1978, Jim Shooter became Marvel's editor-in-chief. Although a controversial personality, Shooter cured many of the procedural ills at Marvel, including repeatedly missed deadlines. During Shooter's nine-year tenure as editor-in-chief, Chris Claremont and John Byrne's run on the Uncanny X-Men and Frank Miller's run on Daredevil became critical and commercial successes.[47] Shooter brought Marvel into the rapidly evolving direct market,[48] institutionalized creator royalties, starting with the Epic Comics imprint for creator-owned material in 1982; introduced company-wide crossover story arcs with Contest of Champions and Secret Wars; and in 1986 launched the ultimately unsuccessful New Universe line to commemorate the 25th anniversary of the Marvel Comics imprint. Star Comics, a children-oriented line differing from the regular Marvel titles, was briefly successful during this period.
+
+ In 1986, Marvel's parent, Marvel Entertainment Group, was sold to New World Entertainment, which within three years, in 1989, sold it on to MacAndrews and Forbes, owned by Revlon executive Ronald Perelman. In 1991 Perelman took MEG public. Following the rapid rise of this stock, Perelman issued a series of junk bonds that he used to acquire other entertainment companies, secured by MEG stock.[49]
+
+ Marvel earned a great deal of money with their 1980s children's comics imprint Star Comics and they earned a great deal more money and worldwide success during the comic book boom of the early 1990s, launching the successful 2099 line of comics set in the future (Spider-Man 2099, etc.) and the creatively daring though commercially unsuccessful Razorline imprint of superhero comics created by novelist and filmmaker Clive Barker.[50][51] In 1990, Marvel began selling Marvel Universe Cards with trading card maker SkyBox International. These were collectible trading cards that featured the characters and events of the Marvel Universe. The 1990s saw the rise of variant covers, cover enhancements, swimsuit issues, and company-wide crossovers that affected the overall continuity of the Marvel Universe.
+
+ Marvel suffered a blow in early 1992, when seven of its most prized artists — Todd McFarlane (known for his work on Spider-Man), Jim Lee (X-Men), Rob Liefeld (X-Force), Marc Silvestri (Wolverine), Erik Larsen (The Amazing Spider-Man), Jim Valentino (Guardians of the Galaxy), and Whilce Portacio (Uncanny X-Men) — left to form Image Comics[52] in a deal brokered by Malibu Comics' owner Scott Mitchell Rosenberg.[53] Three years later, on November 3, 1994, Rosenberg sold Malibu to Marvel,[54][55][56] which thereby acquired the then-leading standard for computer coloring of comic books (developed by Rosenberg),[57] integrated the Ultraverse into Marvel's multiverse, and gained ownership of the Genesis Universe.
+
+ In late 1994, Marvel acquired the comic book distributor Heroes World Distribution to use as its own exclusive distributor.[58] As the industry's other major publishers made exclusive distribution deals with other companies, the ripple effect resulted in the survival of only one other major distributor in North America, Diamond Comic Distributors Inc.[59][60] Then, by the middle of the decade, the industry had slumped, and in December 1996 MEG filed for Chapter 11 bankruptcy protection.[49] In early 1997, when Marvel's Heroes World endeavor failed, Diamond also forged an exclusive deal with Marvel[61]—giving the company its own section of its comics catalog Previews.[62]
+
+ In 1996, Marvel had some of its titles participate in "Heroes Reborn", a crossover that allowed Marvel to relaunch some of its flagship characters such as the Avengers and the Fantastic Four, and outsource them to the studios of two of the former Marvel artists turned Image Comics founders, Jim Lee and Rob Liefeld. The relaunched titles, which saw the characters transported to a parallel universe with a history distinct from the mainstream Marvel Universe, were a solid success amidst a generally struggling industry,[63] but Marvel discontinued the experiment after a one-year run and returned the characters to the Marvel Universe proper.
+
+ In 1997, Toy Biz bought Marvel Entertainment Group to end the bankruptcy, forming a new corporation, Marvel Enterprises.[49] With his business partner Avi Arad, publisher Bill Jemas, and editor-in-chief Bob Harras, Toy Biz co-owner Isaac Perlmutter helped stabilize the comics line.[64]
+
+ In 1998, the company launched the imprint Marvel Knights, set just outside Marvel continuity and with better production quality. The imprint was helmed by soon-to-become editor-in-chief Joe Quesada; it featured tough, gritty stories showcasing such characters as Daredevil,[65] the Inhumans and Black Panther.
+
+ With the new millennium, Marvel Comics emerged from bankruptcy and again began diversifying its offerings. In 2001, Marvel withdrew from the Comics Code Authority and established its own Marvel Rating System for comics. The first title from this era to not have the code was X-Force #119 (October 2001). Marvel also created new imprints, such as MAX (an explicit-content line) and Marvel Adventures (developed for child audiences). In addition, the company created an alternate universe imprint, Ultimate Marvel, that allowed the company to reboot its major titles by revising and updating its characters to introduce to a new generation.
+
+ Some of its characters have been turned into successful film franchises, such as the Men in Black movie series, starting in 1997, Blade movie series, starting in 1998, X-Men movie series, starting in 2000, and the highest grossing series Spider-Man, beginning in 2002.[66]
+
+ Marvel's Conan the Barbarian title stopped in 1993 after 275 issues. The Savage Sword of Conan magazine had 235 issues. Marvel published additional titles, including miniseries, until 2000, for a total of 650 issues. Conan was picked up by Dark Horse three years later.[38]
+
+ In a cross-promotion, the November 1, 2006, episode of the CBS soap opera The Guiding Light, titled "She's a Marvel", featured the character Harley Davidson Cooper (played by Beth Ehlers) as a superheroine named the Guiding Light.[67] The character's story continued in an eight-page backup feature, "A New Light", that appeared in several Marvel titles published November 1 and 8.[68] Also that year, Marvel created a wiki on its Web site.[69]
+
+ In late 2007 the company launched Marvel Digital Comics Unlimited, a digital archive of over 2,500 back issues available for viewing, for a monthly or annual subscription fee.[70] At the NY Anime Fest in December 2007, the company announced that Del Rey Manga would publish two original English-language Marvel manga books featuring the X-Men and Wolverine, to hit the stands in spring 2009.[71]
+
+ In 2009 Marvel Comics closed its Open Submissions Policy, in which the company had accepted unsolicited samples from aspiring comic book artists, saying the time-consuming review process had produced no suitably professional work.[72] The same year, the company commemorated its 70th anniversary, dating to its inception as Timely Comics, by issuing the one-shot Marvel Mystery Comics 70th Anniversary Special #1 and a variety of other special issues.[73][74]
+
+ On August 31, 2009, The Walt Disney Company announced it would acquire Marvel Comics' parent corporation, Marvel Entertainment, for a cash and stock deal worth approximately $4 billion, which if necessary would be adjusted at closing, giving Marvel shareholders $30 and 0.745 Disney shares for each share of Marvel they owned.[75][76] As of 2008, Marvel and its major, longtime competitor DC Comics shared over 80% of the American comic-book market.[77]
+
+ As of September 2010, Marvel switched its bookstore distribution company from Diamond Book Distributors to Hachette Distribution Services.[78] Marvel moved its office to the Sports Illustrated Building in October 2010.[79]
+
+ Marvel relaunched the CrossGen imprint, owned by Disney Publishing Worldwide, in March 2011.[80] Marvel and Disney Publishing began jointly publishing Disney/Pixar Presents magazine that May.[81]
+
+ Marvel discontinued its Marvel Adventures imprint in March 2012,[82] and replaced it with a line of two titles connected to the Marvel Universe TV block.[83] Also in March, Marvel announced its Marvel ReEvolution initiative, which included Infinite Comics,[84] a line of digital comics; Marvel AR, a software application providing an augmented-reality experience to readers; and Marvel NOW!, a relaunch of most of the company's major titles with different creative teams.[85][86] Marvel NOW! also saw the debut of new flagship titles including Uncanny Avengers and All-New X-Men.[87]
+
+ In April 2013, Marvel and other Disney conglomerate components began announcing joint projects. With ABC, a Once Upon a Time graphic novel was announced for publication in September.[88] With Disney, Marvel announced in October 2013 that in January 2014 it would release its first title under their joint "Disney Kingdoms" imprint "Seekers of the Weird", a five-issue miniseries.[89] On January 3, 2014, fellow Disney subsidiary Lucasfilm announced that as of 2015, Star Wars comics would once again be published by Marvel.[90]
+
+ Following the events of the company-wide crossover "Secret Wars" in 2015, a relaunched Marvel universe began in September 2015, called the All-New, All-Different Marvel.[91]
+
+ Marvel Legacy was the company's Fall 2017 relaunch banner, starting in September. The banner included comics with lenticular variant covers, which required comic book stores to double their regular issue orders to be able to order the variants. The owner of two Comix Experience stores complained that this setup forced retailers to be stuck with copies they could not sell in order to obtain the variants they could sell. Amid other complaints, Marvel adjusted the requirements down for new titles, but made no adjustment for any others; as a result, MyComicShop.com and at least 70 other comic book stores boycotted the variant covers.[92] Despite the release of Guardians of the Galaxy Vol. 2, Logan, Thor: Ragnarok and Spider-Man: Homecoming in theaters, none of those characters' titles featured in the top 10 sales, and the Guardians of the Galaxy comic book series was cancelled.[93] Conan Properties International announced on January 12, 2018 that Conan would return to Marvel in early 2019.[38]
+
+ On January 19, 2018, Joshua Yehl, editor of ign.com, speculated on potential changes if Disney's proposed acquisition of 21st Century Fox went through. He expected that Fox franchises licensed out to other firms would be moved to Marvel and that Fox's Marvel film properties would be treated better by the publishing division.[94] However, Marvel had licensed Archie Comics to publish Marvel Digests collections for the newsstand market.[95] Disney, meanwhile, has licensed IDW Publishing to produce the classic, all-ages Disney comics since the Marvel purchase,[96] along with a Big Hero 6 comic book to go along with the TV series, despite the fact that the Disney movie was based on a Marvel comic book. Then on July 17, 2018, Marvel Entertainment announced the licensing of Marvel characters to IDW for a line of comic books for the middle-grade reader market, to start publishing in November 2018.[95]
+
+ On March 1, 2019, Serial Box, a digital book platform, announced a partnership with Marvel. They will publish new and original stories that will be tied to a number of Marvel's popular franchises. The first series will be about the character Thor and is set to be released Summer 2019.[97]
+
+ Due to Diamond Comic Distributors halting their global distribution of comics as a result of the COVID-19 pandemic, Marvel Comics has, as of April 15, suspended the release of both physical and digital copies of its comic books until further notice. Dan Buckley, the president of Marvel Entertainment, has stated that he will provide further information when possible.[98]
+
+ Marvel's chief editor originally held the title of "editor". This head editor's title later became "editor-in-chief". Joe Simon was the company's first true chief editor; publisher Martin Goodman had served as titular editor only and outsourced editorial operations.
+
+ In 1994 Marvel briefly abolished the position of editor-in-chief, replacing Tom DeFalco with five group editors-in-chief. As Carl Potts described the 1990s editorial arrangement:
+
+ In the early '90s, Marvel had so many titles that there were three Executive Editors, each overseeing approximately 1/3 of the line. Bob Budiansky was the third Executive Editor [following the previously appointed Mark Gruenwald and Potts]. We all answered to Editor-in-Chief Tom DeFalco and Publisher Mike Hobson. All three Executive Editors decided not to add our names to the already crowded credits on the Marvel titles. Therefore it wasn't easy for readers to tell which titles were produced by which Executive Editor … In late '94, Marvel reorganized into a number of different publishing divisions, each with its own Editor-in-Chief.[104]
+
+ Marvel reinstated the overall editor-in-chief position in 1995 with Bob Harras.
+
+ The second-highest editorial position was originally called associate editor, back when Marvel's chief editor carried the title of editor; it became executive editor once the chief editor's title became editor-in-chief. The title of associate editor was later revived under the editor-in-chief as an editorial position in charge of a few titles, under the direction of an editor and without an assistant editor.
+
+ Located in New York City, Marvel has had successive headquarters:
+
+ In 2017, Marvel held a 38.30% share of the comics market, compared to its competitor DC Comics' 33.93%.[111] By comparison, the companies respectively held 33.50% and 30.33% shares in 2013, and 40.81% and 29.94% shares in 2008.[112]
+
+ Marvel characters and stories have been adapted to many other media. Some of these adaptations were produced by Marvel Comics and its sister company, Marvel Studios, while others were produced by companies licensing Marvel material.
+
+ In June 1993, Marvel issued collectable caps for the milk caps game under its Hero Caps brand.[113] In 2014, the Marvel Disk Wars: The Avengers Japanese TV series was launched, together with a collectible game called Bachicombat, a game similar to milk caps, by Bandai.[114]
+
+ The RPG industry brought about the development of the collectible card game (CCG) in the early 1990s, and Marvel characters were soon featured in CCGs of their own, starting in 1995 with Fleer's OverPower (1995–1999). Later collectible card games were:
+
+ TSR published the pen-and-paper role-playing game Marvel Super Heroes in 1984. In 1998, TSR released the Marvel Super Heroes Adventure Game, which used a different system from its first game: the card-based SAGA system. In 2003, Marvel Publishing published its own role-playing game, the Marvel Universe Roleplaying Game, which used a diceless stone pool system.[117] In August 2011, Margaret Weis Productions announced it was developing a tabletop role-playing game based on the Marvel universe, set for release in February 2012 using its house Cortex Plus RPG system.[118]
+
+ Video games based on Marvel characters go back to 1984 and the Atari game Spider-Man. Since then, several dozen video games have been released, all produced by outside licensees. In 2014, Disney Infinity 2.0: Marvel Super Heroes was released, bringing Marvel characters to the existing Disney sandbox video game.
+
+ As of the start of September 2015, films based on Marvel's properties represent the highest-grossing U.S. franchise, having grossed over $7.7 billion[119] as part of a worldwide gross of over $18 billion. As of May 2019, the Marvel Cinematic Universe (MCU) has grossed over $22 billion.
+
+ Marvel first licensed two prose novels to Bantam Books, who printed The Avengers Battle the Earth Wrecker by Otto Binder (1967) and Captain America: The Great Gold Steal by Ted White (1968). Various publishers took up the licenses from 1978 to 2002. Also, with the various licensed films being released beginning in 1997, various publishers put out movie novelizations.[120] In 2003, following publication of the prose young adult novel Mary Jane, starring Mary Jane Watson from the Spider-Man mythos, Marvel announced the formation of the publishing imprint Marvel Press.[121] However, Marvel moved back to licensing with Pocket Books from 2005 to 2008.[120] With few books issued under the imprint, Marvel and Disney Books Group relaunched Marvel Press in 2011 with the Marvel Origin Storybooks line.[122]
+
+ Many television series, both live-action and animated, have based their productions on Marvel Comics characters. These include series for popular characters such as Spider-Man, Iron Man, the Hulk, the Avengers, the X-Men, Fantastic Four, the Guardians of the Galaxy, Daredevil, Jessica Jones, Luke Cage, Iron Fist, the Punisher, the Defenders, S.H.I.E.L.D., Agent Carter, Deadpool, Legion, and others. Additionally, a handful of television movies, usually also pilots, based on Marvel Comics characters have been made.
+
+ Marvel has licensed its characters for theme parks and attractions, including Marvel Super Hero Island at Universal Orlando's Islands of Adventure[123] in Orlando, Florida, which includes rides based on their iconic characters and costumed performers, as well as The Amazing Adventures of Spider-Man ride cloned from Islands of Adventure to Universal Studios Japan.[124]
+
+ In the years after Disney purchased Marvel in late 2009, Walt Disney Parks and Resorts planned original Marvel attractions at its theme parks,[125][126] with Hong Kong Disneyland becoming the first Disney theme park to feature a Marvel attraction.[127][128] Due to the licensing agreement with Universal Studios, signed prior to Disney's purchase of Marvel, Walt Disney World and Tokyo Disney Resort are barred from having Marvel characters in their parks.[129] However, this only covers characters that Universal is currently using, other characters in their "families" (X-Men, Avengers, Fantastic Four, etc.), and the villains associated with said characters.[123] This clause has allowed Walt Disney World to have meet and greets, merchandise, attractions and more with other Marvel characters not associated with the characters at Islands of Adventure, such as Star-Lord and Gamora from Guardians of the Galaxy.[130][131]
+
+ Marvel Worldwide with Disney announced in October 2013 that in January 2014 it would release its first comic book title under their joint Disney Kingdoms imprint, Seekers of the Weird, a five-issue miniseries inspired by Museum of the Weird, a never-built Disneyland attraction.[89] Marvel's Disney Kingdoms imprint has since released comic adaptations of Big Thunder Mountain Railroad,[132] Walt Disney's Enchanted Tiki Room,[133] The Haunted Mansion,[134] and two series on Figment,[135][136] based on Journey Into Imagination.
+
+ Irwin said he never played golf with Goodman, so the story is untrue. I heard this story more than a couple of times while sitting in the lunchroom at DC's 909 Third Avenue and 75 Rockefeller Plaza office as Sol Harrison and [production chief] Jack Adler were schmoozing with some of us … who worked for DC during our college summers.... [T]he way I heard the story from Sol was that Goodman was playing with one of the heads of Independent News, not DC Comics (though DC owned Independent News). … As the distributor of DC Comics, this man certainly knew all the sales figures and was in the best position to tell this tidbit to Goodman. … Of course, Goodman would want to be playing golf with this fellow and be in his good graces. … Sol worked closely with Independent News' top management over the decades and would have gotten this story straight from the horse's mouth.
155
+
156
+ Goodman, a publishing trend-follower aware of the JLA's strong sales, verifiably directed his comics editor, Stan Lee, to create a comic-book series about a team of superheroes. According to Lee in Origins of Marvel Comics (Simon and Schuster/Fireside Books, 1974), p. 16:
157
+ "Martin mentioned that he had noticed one of the titles published by National Comics seemed to be selling better than most. It was a book called The [sic] Justice League of America and it was composed of a team of superheroes. … ' If the Justice League is selling ', spoke he, 'why don't we put out a comic book that features a team of superheroes?'"
en/5871.html.txt ADDED
@@ -0,0 +1,191 @@
1
+
2
+
3
+ A computer is a machine that can be instructed to carry out sequences of arithmetic or logical operations automatically via computer programming. Modern computers have the ability to follow generalized sets of operations, called programs. These programs enable computers to perform an extremely wide range of tasks. A "complete" computer including the hardware, the operating system (main software), and peripheral equipment required and used for "full" operation can be referred to as a computer system. This term may also be used for a group of computers that are connected and work together, in particular a computer network or computer cluster.
4
+
5
+ Computers are used as control systems for a wide variety of industrial and consumer devices. This includes simple special purpose devices like microwave ovens and remote controls, factory devices such as industrial robots and computer-aided design, and also general purpose devices like personal computers and mobile devices such as smartphones. The Internet is run on computers and it connects hundreds of millions of other computers and their users.
6
+
7
+ Early computers were only conceived as calculating devices. Since ancient times, simple manual devices like the abacus aided people in doing calculations. Early in the Industrial Revolution, some mechanical devices were built to automate long tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit (IC) chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. The speed, power and versatility of computers have been increasing dramatically ever since then, with MOS transistor counts increasing at a rapid pace (as predicted by Moore's law), leading to the Digital Revolution during the late 20th to early 21st centuries.
8
+
9
+ Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a metal-oxide-semiconductor (MOS) microprocessor, along with some type of computer memory, typically MOS semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joystick, etc.), output devices (monitor screens, printers, etc.), and input/output devices that perform both functions (e.g., the 2000s-era touchscreen). Peripheral devices allow information to be retrieved from an external source and they enable the result of operations to be saved and retrieved.
10
+
11
+ According to the Oxford English Dictionary, the first known use of the word "computer" was in 1613 in a book called The Yong Mans Gleanings by English writer Richard Braithwait: "I haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued with the same meaning until the middle of the 20th century. During the latter part of this period women were often hired as computers because they could be paid less than their male counterparts.[1] By 1943, most human computers were women.[2]
12
+
13
+ The Online Etymology Dictionary gives the first attested use of "computer" in the 1640s, meaning "one who calculates"; this is an "agent noun from compute (v.)". The Online Etymology Dictionary states that the use of the term to mean "'calculating machine' (of any type) is from 1897." The Online Etymology Dictionary indicates that the "modern use" of the term, to mean "programmable digital electronic computer" dates from "1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine".[3]
14
+
15
+ Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was probably a form of tally stick. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, probably livestock or grains, sealed in hollow unbaked clay containers.[4][5] The use of counting rods is one example.
16
+
17
+ The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BC. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money.[6]
18
+
19
+ The Antikythera mechanism is believed to be the earliest mechanical analog "computer", according to Derek J. de Solla Price.[7] It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to c. 100 BC. Devices of a level of complexity comparable to that of the Antikythera mechanism would not reappear until a thousand years later.
20
+
21
+ Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century.[8] The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BC and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer[9][10] and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235.[11] Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe,[12] an early fixed-wired knowledge processing machine[13] with a gear train and gear-wheels,[14] c. 1000 AD.
22
+
23
+ The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation.
24
+
25
+ The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage.
26
+
27
+ The slide rule was invented around 1620–1630, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft.
28
+
29
+ In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates.[15]
30
+
31
+ The tide-predicting machine invented by Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location.
32
+
33
+ The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, Lord Kelvin had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators.[16] In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers.
34
+
35
+ Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer",[17] he conceptualized and invented the first mechanical computer in the early 19th century. After working on his revolutionary difference engine, designed to aid in navigational calculations, in 1833 he realized that a much more general design, an Analytical Engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The Engine incorporated an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.[18][19]
36
+
37
+ The machine was about a century ahead of its time. All the parts for his machine had to be made by hand – this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906.
38
+
39
+ During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.[20] The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the brother of the more famous Lord Kelvin.[16]
40
+
41
+ The art of mechanical analog computing reached its zenith with the differential analyzer, built by H. L. Hazen and Vannevar Bush at MIT starting in 1927. This built on the mechanical integrators of James Thomson and the torque amplifiers invented by H. W. Nieman. A dozen of these devices were built before their obsolescence became obvious. By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (control systems) and aircraft (slide rule).
42
+
43
+ By 1938, the United States Navy had developed an electromechanical analog computer small enough to use aboard a submarine. This was the Torpedo Data Computer, which used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II similar devices were developed in other countries as well.
44
+
45
+ Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes. The Z2, created by German engineer Konrad Zuse in 1939, was one of the earliest examples of an electromechanical relay computer.[21]
46
+
47
+ In 1941, Zuse followed up his earlier machine with the Z3, the world's first working electromechanical programmable, fully automatic digital computer.[22][23] The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz.[24] Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating point numbers. Rather than the harder-to-implement decimal system (used in Charles Babbage's earlier design), using a binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time.[25] The Z3 was not itself a universal computer but could be extended to be Turing complete.[26][27]
48
+
49
+ Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes.[20] In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942,[28] the first "automatic electronic digital computer".[29] This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory.[30]
50
+
51
+ During World War II, the British at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes which were often run by women.[31][32] To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus.[30] He spent eleven months from early February 1943 designing and building the first Colossus.[33] After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944[34] and attacked its first message on 5 February.[30]
52
+
53
+ Colossus was the world's first electronic digital programmable computer.[20] It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (the Mk I was converted to a Mk II, making ten machines in total). Colossus Mark I contained 1,500 thermionic valves (tubes), but the Mark II, with 2,400 valves, was both five times faster and simpler to operate than the Mark I, greatly speeding the decoding process.[35][36]
54
+
55
+ The ENIAC[37] (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster, more flexible, and it was Turing-complete. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the "ENIAC girls".[38][39]
56
+
57
+ It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors.[40]
58
+
59
+ The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper,[41] On Computable Numbers. Turing proposed a simple device that he called "Universal Computing machine" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper.[42] Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine.
60
+
61
+ Early computing machines had fixed programs. Changing its function required the re-wiring and re-structuring of the machine.[30] With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945.[20]
62
+
63
+ The Manchester Baby was the world's first stored-program computer. It was built at the Victoria University of Manchester by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948.[43] It was designed as a testbed for the Williams tube, the first random-access digital storage device.[44] Although the computer was considered "small and primitive" by the standards of its time, it was the first working machine to contain all of the elements essential to a modern electronic computer.[45] As soon as the Baby had demonstrated the feasibility of its design, a project was initiated at the university to develop it into a more usable computer, the Manchester Mark 1. Grace Hopper was the first person to develop a compiler for a programming language.[2]
64
+
65
+ The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer.[46] Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam.[47] In October 1947, the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. The LEO I computer became operational in April 1951[48] and ran the world's first regular routine office computer job.
66
+
67
+ The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947, which was followed by Shockley's bipolar junction transistor in 1948.[49][50] From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power, so they give off less heat. Junction transistors were much more reliable than vacuum tubes and had a longer, potentially indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialised applications.[51]
68
+
69
+ At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves.[52] Their first transistorised computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955,[53] built by the electronics division of the Atomic Energy Research Establishment at Harwell.[53][54]
70
+
71
+ The metal–oxide–silicon field-effect transistor (MOSFET), also known as the MOS transistor, was invented by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959.[55] It was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses.[51] With its high scalability,[56] and much lower power consumption and higher density than bipolar junction transistors,[57] the MOSFET made it possible to build high-density integrated circuits.[58][59] In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers.[60] The MOSFET led to the microcomputer revolution,[61] and became the driving force behind the computer revolution.[62][63] The MOSFET is the most widely used transistor in computers,[64][65] and is the fundamental building block of digital electronics.[66]
72
+
73
+ The next great advance in computing power came with the advent of the integrated circuit (IC).
74
+ The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C. on 7 May 1952.[67]
75
+
76
+ The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor.[68] Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958.[69] In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated".[70][71] However, Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (IC) chip.[72] Kilby's IC had external wire connections, which made it difficult to mass-produce.[73]
77
+
78
+ Noyce also came up with his own idea of an integrated circuit half a year later than Kilby.[74] Noyce's invention was the first true monolithic IC chip.[75][73] His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. In turn, the planar process was based on the silicon surface passivation and thermal oxidation processes developed by Mohamed Atalla at Bell Labs in the late 1950s.[76][77][78]
79
+
80
+ Modern monolithic ICs are predominantly MOS (metal-oxide-semiconductor) integrated circuits, built from MOSFETs (MOS transistors).[79] After the first MOSFET was invented by Mohamed Atalla and Dawon Kahng at Bell Labs in 1959,[80] Atalla first proposed the concept of the MOS integrated circuit in 1960, followed by Kahng in 1961, both noting that the MOS transistor's ease of fabrication made it useful for integrated circuits.[51][81] The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962.[82] General Microelectronics later introduced the first commercial MOS IC in 1964,[83] developed by Robert Norman.[82] Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968.[84] The MOSFET has since become the most critical device component in modern ICs.[85]
81
+
82
+ The development of the MOS integrated circuit led to the invention of the microprocessor,[86][87] and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004,[88] designed and realized by Federico Faggin with his silicon-gate MOS IC technology,[86] along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel.[89][90] In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip.[59]
83
+
84
+ Systems on a chip (SoCs) are complete computers on a microchip (or chip) the size of a coin.[91] They may or may not have integrated RAM and flash memory. If not integrated, the RAM is usually placed directly above (known as package on package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC, all of which is done to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power.
85
+
86
+ The first mobile computers were heavy and ran from mains power. The 50 lb IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries – and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s.[92] The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s.
87
+
88
+ These smartphones and tablets run on a variety of operating systems and recently became the dominant computing device on the market.[93] These are powered by System on a Chip (SoCs), which are complete computers on a microchip the size of a coin.[91]
89
+
90
+ Computers can be classified in a number of different ways, including:
91
+
92
+ The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphics cards, sound cards, memory (RAM), motherboards, displays, power supplies, cables, keyboards, printers and mice are all hardware.
93
+
94
+ A general purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires.
95
+ Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits.
96
+
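+ As a rough illustration, the bitwise operators of the C language mimic the behaviour of single-bit logic gates; this is a minimal sketch, and the one-bit "signals" below are purely illustrative:
+
+ #include <stdio.h>
+
+ int main(void) {
+     unsigned a = 1, b = 0;            /* two one-bit signals */
+     printf("AND: %u\n", a & b);       /* 1 only if both inputs are 1 */
+     printf("OR:  %u\n", a | b);       /* 1 if either input is 1 */
+     printf("XOR: %u\n", a ^ b);       /* 1 if exactly one input is 1 */
+     printf("NOT: %u\n", ~a & 1u);     /* invert a, keep only the low bit */
+     /* gates controlling gates: a NAND built from AND and NOT */
+     printf("NAND: %u\n", ~(a & b) & 1u);
+     return 0;
+ }
+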
97
+ When unprocessed data is sent to the computer with the help of input devices, the data is processed and sent to output devices. The input devices may be hand-operated or automated. The act of processing is mainly regulated by the CPU. Some examples of input devices are:
98
+
99
+ The means through which a computer gives output are known as output devices. Some examples of output devices are:
100
+
101
+ The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer.[95] Control systems in advanced computers may change the order of execution of some instructions to improve performance.
102
+
103
+ A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[96]
104
+
105
+ The control system's function is as follows—note that this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU:
106
+
107
+ Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow).
108
+
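+ A minimal sketch in C (with invented names) of this idea: the program counter is just a number indexing the next instruction, and a jump is nothing more than a write to that number:
+
+ #include <stdio.h>
+
+ int main(void) {
+     const char *program[] = { "step A", "step B", "step C", "step D" };
+     int pc = 0;                         /* the program counter */
+     int repeats = 0;
+     while (pc < 4) {
+         printf("%s\n", program[pc]);
+         if (pc == 2 && repeats < 1) {   /* a conditional jump: loop back once */
+             repeats++;
+             pc = 1;                     /* writing to pc is the jump */
+             continue;
+         }
+         pc++;                           /* otherwise: next instruction in sequence */
+     }
+     return 0;
+ }
+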
109
+ The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen.
110
+
111
+ The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor.
112
+
113
+ The ALU is capable of performing two classes of operations: arithmetic and logic.[97] The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometric functions such as sine, cosine, etc., and square roots. Some can only operate on whole numbers (integers) while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation—although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing boolean logic.
114
+
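+ A toy ALU can be sketched in C as a function that performs one selected operation at a time; the operation names are invented for illustration:
+
+ #include <stdio.h>
+
+ enum op { ADD, SUB, AND_OP, OR_OP, XOR_OP, LESS_THAN };
+
+ int alu(enum op operation, int x, int y) {
+     switch (operation) {
+     case ADD:       return x + y;
+     case SUB:       return x - y;
+     case AND_OP:    return x & y;
+     case OR_OP:     return x | y;
+     case XOR_OP:    return x ^ y;
+     case LESS_THAN: return x < y;   /* a comparison yields a boolean 0 or 1 */
+     }
+     return 0;
+ }
+
+ int main(void) {
+     printf("%d\n", alu(ADD, 64, 1));        /* prints 65 */
+     printf("%d\n", alu(LESS_THAN, 64, 65)); /* prints 1: "is 64 less than 65?" */
+     return 0;
+ }
+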
115
+ Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously.[98] Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices.
116
+
117
+ A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers.
118
+
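+ The cell-and-address model described above can be sketched in C, with an array index playing the role of the cell address (the stored values are illustrative):
+
+ #include <stdio.h>
+
+ int main(void) {
+     static int memory[4096];    /* a toy memory of numbered cells */
+     memory[1357] = 123;         /* "put the number 123 into the cell numbered 1357" */
+     memory[2468] = 456;         /* an illustrative value for the second cell */
+     /* "add the number that is in cell 1357 to the number that is in
+        cell 2468 and put the answer into cell 1595" */
+     memory[1595] = memory[1357] + memory[2468];
+     printf("cell 1595 now holds %d\n", memory[1595]);   /* prints 579 */
+     return 0;
+ }
+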
119
+ In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256); either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory.
120
+
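+ These conventions can be shown in a short C sketch: the same eight bits read as an unsigned value and as a two's complement value, and a larger number occupying several consecutive bytes:
+
+ #include <stdint.h>
+ #include <stdio.h>
+
+ int main(void) {
+     uint8_t raw = 0xFF;                        /* one byte with all eight bits set */
+     printf("as unsigned: %u\n", (unsigned)raw);       /* prints 255 */
+     printf("as two's complement: %d\n", (int8_t)raw); /* prints -1 */
+     int32_t big = -2000000000;                 /* far too large for one byte */
+     printf("%d occupies %zu bytes\n", (int)big, sizeof big);
+     return 0;
+ }
+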
121
+ The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed.
122
+
123
+ Computer main memory comes in two principal varieties:
124
+
125
+ RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. However, it is typically much slower than conventional ROM and RAM, so its use is restricted to applications where high speed is unnecessary.[99]
126
+
127
+ In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part.
128
+
129
+ I/O is the means by which a computer exchanges information with the outside world.[100] Devices that provide input or output to the computer are called peripherals.[101] On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O.
130
+ I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics.[citation needed] Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry.
131
+
132
+ While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn.[102] One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time even though only one is ever executing at any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn.[103]
133
+
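+ A highly simplified sketch in C of round-robin time-sharing, with an ordinary loop standing in for the interrupt generator and three counters standing in for programs (all names and numbers are illustrative):
+
+ #include <stdio.h>
+
+ int main(void) {
+     int work_left[3] = { 2, 4, 3 };  /* three "programs", in arbitrary work units */
+     int remaining = 2 + 4 + 3;
+     int task = 0;
+     while (remaining > 0) {
+         if (work_left[task] > 0) {   /* skip programs that have finished */
+             printf("time slice -> program %d\n", task);
+             work_left[task]--;
+             remaining--;
+         }
+         task = (task + 1) % 3;       /* the "interrupt": switch to the next program */
+     }
+     return 0;
+ }
+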
134
+ Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run simultaneously without unacceptable speed loss.
135
+
136
+ Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed only in large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result.
137
+
138
+ Supercomputers in particular often have highly distinctive architectures that differ significantly from the basic stored-program architecture and from general purpose computers.[104] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful only for specialized tasks due to the large scale of program organization required to successfully utilize most of the available resources at once. Supercomputers are typically used for large-scale simulation, graphics rendering, and cryptography applications, as well as other so-called "embarrassingly parallel" tasks.
139
+
140
+ Software refers to parts of the computer which do not have a material form, such as programs, data, protocols, etc. Software is that part of a computer system that consists of encoded information or computer instructions, in contrast to the physical hardware from which the system is built. Computer software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other and neither can be realistically used on its own. When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware".
141
+
142
+ There are thousands of different programming languages—some intended to be general purpose, others useful only for highly specialized applications.
143
+
144
+ The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors.
145
+
146
+ This section applies to most common RAM machine–based computers.
147
+
148
+ In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction.
149
+
150
+ Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention.
151
+
152
+ Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. The following example is written in the MIPS assembly language (the register and label choices are illustrative):
153
+
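+ begin:
+    addi $8, $0, 0        # initialize the running sum to 0
+    addi $9, $0, 1        # set the first number to add = 1
+ loop:
+    slti $10, $9, 1001    # is the counter still 1000 or less?
+    beq  $10, $0, finish  # if not, the sum is complete
+    add  $8, $8, $9       # add the current number to the sum
+    addi $9, $9, 1        # advance to the next number
+    j    loop             # repeat
+ finish:
+    add  $2, $8, $0       # copy the finished sum into register $2
+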
154
+ Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in a fraction of a second.
155
+
156
+ In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program[citation needed], architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches.
157
+
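+ A small C sketch of this idea, using an invented 16-bit instruction format with a 4-bit opcode above a 12-bit operand; because instructions are plain numbers, the program can be stored and inspected like any other data:
+
+ #include <stdint.h>
+ #include <stdio.h>
+
+ #define ENCODE(opcode, operand) ((uint16_t)(((opcode) << 12) | ((operand) & 0xFFF)))
+
+ int main(void) {
+     enum { LOAD = 1, ADD = 2, STORE = 3 };  /* invented opcodes */
+     uint16_t program[] = {                  /* the program is itself numeric data */
+         ENCODE(LOAD, 1357),
+         ENCODE(ADD, 2468),
+         ENCODE(STORE, 1595),
+     };
+     for (int i = 0; i < 3; i++) {
+         unsigned opcode  = program[i] >> 12;    /* high four bits */
+         unsigned operand = program[i] & 0xFFF;  /* low twelve bits */
+         printf("word %04X = opcode %u, operand %u\n",
+                (unsigned)program[i], opcode, operand);
+     }
+     return 0;
+ }
+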
158
+ While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers,[105] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler.
159
+
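+ The core of an assembler can be sketched in C as a lookup from mnemonic to numeric opcode; the table entries here are invented for illustration:
+
+ #include <stdio.h>
+ #include <string.h>
+
+ struct entry { const char *mnemonic; int opcode; };
+
+ static const struct entry table[] = {
+     { "ADD", 2 }, { "SUB", 4 }, { "MULT", 5 }, { "JUMP", 7 },
+ };
+
+ /* return the numeric opcode for a mnemonic, or -1 if it is unknown */
+ int assemble(const char *mnemonic) {
+     for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
+         if (strcmp(table[i].mnemonic, mnemonic) == 0)
+             return table[i].opcode;
+     return -1;
+ }
+
+ int main(void) {
+     printf("ADD assembles to opcode %d\n", assemble("ADD"));
+     return 0;
+ }
+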
160
+ Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques.
161
+
162
+ Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) are generally unique to the particular architecture of a computer's central processing unit (CPU). For instance, an ARM architecture CPU (such as may be found in a smartphone or a hand-held videogame) cannot understand the machine language of an x86 CPU that might be in a PC.[106] Historically, a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80.
163
+
164
+ Although considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[107] High level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles.
165
+
166
+ Program design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, using the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies.
167
+ The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge.
168
+
169
+ Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program, or have only subtle effects. But in some cases, they may cause the program or the entire system to "hang", becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[108]
170
+ Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited with first using the term "bugs" in computing, after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947.[109]
171
+
172
+ Computers have been used to coordinate information between multiple locations since the 1950s. The U.S. military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre.[110] In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET.[111] The technologies that made the Arpanet possible spread and evolved.
173
+
174
+ In time, the network spread beyond academic and military institutions and became known as the Internet. The emergence of networking involved a redefinition of the nature and boundaries of the computer. Computer operating systems and applications were modified to include the ability to define and access the resources of other computers on the network, such as peripheral devices, stored information, and the like, as extensions of the resources of an individual computer. Initially these facilities were available primarily to people working in high-tech environments, but in the 1990s the spread of applications like e-mail and the World Wide Web, combined with the development of cheap, fast networking technologies like Ethernet and ADSL saw computer networking become almost ubiquitous. In fact, the number of computers that are networked is growing phenomenally. A very large proportion of personal computers regularly connect to the Internet to communicate and receive information. "Wireless" networking, often utilizing mobile phone networks, has meant networking is becoming increasingly ubiquitous even in mobile computing environments.
175
+
176
+ A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word "computer" is synonymous with a personal electronic computer, the modern[112] definition of a computer is literally: "A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information."[113] Any device which processes information qualifies as a computer, especially if the processing is purposeful.[citation needed]
177
+
178
+ There is active research to make computers out of many promising new types of technology, such as optical computers, DNA computers, neural computers, and quantum computers. Most computers are universal, and are able to calculate any computable function, and are limited only by their memory capacity and operating speed. However different designs of computers can give very different performance for particular problems; for example quantum computers can potentially break some modern encryption algorithms (by quantum factoring) very quickly.
179
+
180
+ There are many types of computer architectures:
181
+
182
+ Of all these abstract machines, a quantum computer holds the most promise for revolutionizing computing.[114] Logic gates are a common abstraction which can apply to most of the above digital or analog paradigms. The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity.
183
+
184
+ A computer will solve problems in exactly the way it is programmed to, without regard to efficiency, alternative solutions, possible shortcuts, or possible errors in the code. Computer programs that learn and adapt are part of the emerging field of artificial intelligence and machine learning. Artificial intelligence based products generally fall into two major categories: rule based systems and pattern recognition systems. Rule based systems attempt to represent the rules used by human experts and tend to be expensive to develop. Pattern based systems use data about a problem to generate conclusions. Examples of pattern based systems include voice recognition, font recognition, translation and the emerging field of on-line marketing.
185
+
186
+ As the use of computers has spread throughout society, there are an increasing number of careers involving computers.
187
+
188
+ The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature.
189
+
190
+
191
+
en/5872.html.txt ADDED
@@ -0,0 +1,97 @@
1
+ The Twelve Labours of Heracles or Hercules (Greek: οἱ Ἡρακλέους ἆθλοι, hoi Hērakléous âthloi)[1][2] are a series of episodes concerning a penance carried out by Heracles, the greatest of the Greek heroes, whose name was later romanised as Hercules. They were accomplished in the service of King Eurystheus. The episodes were later connected by a continuous narrative. The establishment of a fixed cycle of twelve labours was attributed by the Greeks to an epic poem, now lost, written by Peisander, dated about 600 BC.[3] After Heracles killed his wife and children, he went to the oracle at Delphi. He prayed to the god Apollo for guidance. Heracles was told to serve the king of Mycenae, Eurystheus, for ten years. During this time, he was sent to perform a series of difficult feats, called labours.[4]
2
+
3
+ Driven mad by Hera (queen of the gods), Heracles slew his sons by his wife Megara.[5] After recovering his sanity, Heracles deeply regretted his actions; he was purified by King Thespius, then traveled to Delphi to inquire how he could atone for his actions. Pythia, the Oracle of Delphi, advised him to go to Tiryns and serve his cousin, King Eurystheus, for ten years, performing whatever labours Eurystheus might set him; in return, he would be rewarded with immortality. Heracles despaired at this, loathing to serve a man whom he knew to be far inferior to himself, yet fearing to oppose his father Zeus. Eventually, he placed himself at Eurystheus's disposal.
4
+
5
+ Eurystheus originally ordered Heracles to perform ten labours. Heracles accomplished these tasks, but Eurystheus refused to recognize two: the slaying of the Lernaean Hydra, as Heracles' nephew and charioteer Iolaus had helped him; and the cleansing of the Augean stables, because Heracles accepted payment for the labour. Eurystheus set two more tasks (fetching the Golden Apples of the Hesperides and capturing Cerberus), which Heracles also performed, bringing the total number of tasks to twelve.
6
+
7
+ As they survive, the labours of Heracles are not recounted in any single place, but must be reassembled from many sources. Ruck and Staples[6] assert that there is no one way to interpret the labours, but that six were located in the Peloponnese, culminating with the rededication of Olympia. Six others took the hero farther afield, to places that were, per Ruck, "all previously strongholds of Hera or the 'Goddess' and were Entrances to the Netherworld".[6] In each case, the pattern was the same: Heracles was sent to kill or subdue, or to fetch back for Eurystheus (as Hera's representative) a magical animal or plant.
8
+
9
+ A famous depiction of the labours in Greek sculpture is found on the metopes of the Temple of Zeus at Olympia, which date to the 450s BC.[citation needed]
10
+
11
+ In his labours, Heracles was sometimes accompanied by a male companion (an eromenos), such as his nephew Iolaus, according to Licymnius[citation needed] and others. Although he was supposed to perform only ten labours, this assistance led to two labours being disqualified: Eurystheus refused to recognize slaying the Hydra, because Iolaus helped him, and the cleansing of the Augean stables, because Heracles was paid for his services and/or because the rivers did the work. Several of the labours involved the offspring (by various accounts) of Typhon and his mate Echidna, all overcome by Heracles.
12
+
13
+ A traditional order of the labours found in the Bibliotheca[7] by Pseudo-Apollodorus is:
14
+
15
+ Heracles wandered the area until he came to the town of Cleonae. There he met a boy who said that if Heracles slew the Nemean lion and returned alive within 30 days, the town would sacrifice a lion to Zeus, but if he did not return within 30 days or if he died, the boy would sacrifice himself to Zeus. Another version claims that he met Molorchos, a shepherd who had lost his son to the lion, saying that if he came back within 30 days, a ram would be sacrificed to Zeus. If he did not return within 30 days, it would be sacrificed to the dead Heracles as a mourning offering.
16
+
17
+ While searching for the lion, Heracles fletched some arrows to use against it, not knowing that its golden fur was impenetrable. When he found the lion and shot at it with his bow, he discovered the fur's protective property as the arrow bounced harmlessly off the creature's thigh. After some time, Heracles made the lion return to its cave. The cave had two entrances, one of which Heracles blocked; he then entered the other. In those dark and close quarters, Heracles stunned the beast with his club and, using his immense strength, strangled it to death. During the fight the lion bit off one of his fingers.[8] Others say that he shot arrows at it, eventually shooting it in the unarmored mouth. After slaying the lion, he tried to skin it with a knife from his belt, but failed. He then tried sharpening the knife with a stone and even tried with the stone itself. Finally, Athena, noticing the hero's plight, told Heracles to use one of the lion's own claws to skin the pelt. Others say that Heracles' armor was, in fact, the hide of the Lion of Cithaeron.
18
+
19
+ When he returned on the 30th day carrying the carcass of the lion on his shoulders, King Eurystheus was amazed and terrified. Eurystheus forbade him ever again to enter the city; from then on he was to display the fruits of his labours outside the city gates. Eurystheus would then tell Heracles his tasks through a herald, not personally. Eurystheus even had a large bronze jar made for him in which to hide from Heracles if need be. Eurystheus then warned him that the tasks would become increasingly difficult.
20
+
21
+ Heracles' second labour was to slay the Lernaean Hydra, which Hera had raised just to slay Heracles. Upon reaching the swamp near Lake Lerna, where the Hydra dwelt, Heracles used a cloth to cover his mouth and nose to protect himself from the poisonous fumes. He fired flaming arrows into the Hydra's lair, the spring of Amymone, a deep cave from which it emerged only to terrorize neighboring villages.[9] He then confronted the Hydra, wielding a harvesting sickle (according to some early vase-paintings), a sword or his famed club. Ruck and Staples (1994: 170) have pointed out that the chthonic creature's reaction was botanical: upon cutting off each of its heads he found that two grew back, an expression of the hopelessness of such a struggle for any but the hero. Additionally, one of the Hydra's heads, the middle one, was immortal.
22
+
23
+ The details of the struggle are explicit in the Bibliotheca (2.5.2): realizing that he could not defeat the Hydra in this way, Heracles called on his nephew Iolaus for help. His nephew then came upon the idea (possibly inspired by Athena) of using a firebrand to scorch the neck stumps after each decapitation. Heracles cut off each head and Iolaus cauterized the open stumps. Seeing that Heracles was winning the struggle, Hera sent a giant crab to distract him. He crushed it under his mighty foot. He cut off the Hydra's one immortal head with a golden sword given to him by Athena. Heracles placed it under a great rock on the sacred way between Lerna and Elaius (Kerenyi 1959:144), and dipped his arrows in the Hydra's poisonous blood, and so his second task was complete. The alternative version of this myth is that after cutting off one head, he then dipped his sword in it and used its venom to burn each head so it could not grow back. Hera, upset that Heracles had slain the beast she raised to kill him, placed it in the dark blue vault of the sky as the constellation Hydra. She then turned the crab into the constellation Cancer.
24
+
25
+ Later, Heracles used an arrow dipped in the Hydra's poisonous blood to kill the centaur Nessus; and Nessus's tainted blood was applied to the Tunic of Nessus, by which the centaur had his posthumous revenge. Both Strabo and Pausanias report that the stench of the river Anigrus in Elis, making all the fish of the river inedible, was reputed to be due to the Hydra's venom, washed from the arrows Heracles used on the centaur.[10]
26
+
27
+ Eurystheus and Hera were greatly angered that Heracles had survived the Nemean Lion and the Lernaean Hydra. For the third labour, they found a task which they thought would spell doom for the hero. It was not slaying a beast or monster, as it had already been established that Heracles could overcome even the most fearsome opponents. Instead, Eurystheus ordered him to capture the Ceryneian Hind, which was so fast that it could outrun an arrow.
28
+
29
+ After beginning the search, Heracles awoke from sleep and saw the hind by the glint on its antlers. Heracles then chased the hind on foot for a full year through Greece, Thrace, Istria, and the land of the Hyperboreans. In some versions, he captured the hind while it slept, rendering it lame with a trap net. In other versions, he encountered Artemis in her temple; she told him to leave the hind and tell Eurystheus all that had happened, and his third labour would be considered to be completed. Yet another version claims that Heracles trapped the hind with an arrow between its forelegs.
30
+
31
+ Eurystheus had given Heracles this task hoping to incite Artemis' anger at Heracles for his desecration of her sacred animal. As he was returning with the hind, Heracles encountered Artemis and her brother Apollo. He begged the goddess for forgiveness, explaining that he had to catch it as part of his penance, but he promised to return it. Artemis forgave him, foiling Eurystheus' plan to have her punish him.
32
+
33
+ Upon bringing the hind to Eurystheus, he was told that it was to become part of the King's menagerie. Heracles knew that he had to return the hind as he had promised, so he agreed to hand it over on the condition that Eurystheus himself come out and take it from him. The King came out, but the moment that Heracles let the hind go, it sprinted back to its mistress and Heracles left, saying that Eurystheus had not been quick enough.
34
+
35
+ Eurystheus was disappointed that Heracles had overcome yet another creature and was humiliated by the hind's escape, so he assigned Heracles another dangerous task. By some accounts, the fourth labour was to bring the fearsome Erymanthian Boar back to Eurystheus alive (there is no single definitive telling of the labours). On the way to Mount Erymanthos where the boar lived, Heracles visited Pholus ("caveman"), a kind and hospitable centaur and old friend. Heracles ate with Pholus in his cavern (though the centaur devoured his meat raw) and asked for wine. Pholus had only one jar of wine, a gift from Dionysus to all the centaurs on Mount Erymanthos. Heracles convinced him to open it, and the smell attracted the other centaurs. They did not understand that wine needs to be tempered with water, became drunk, and attacked Heracles. Heracles shot at them with his poisonous arrows, killing many, and the centaurs retreated all the way to Chiron's cave.
36
+
37
+ Pholus was curious why the arrows caused so much death. He picked one up but dropped it, and the arrow stabbed his hoof, poisoning him. One version states that a stray arrow hit Chiron as well. He was immortal, but he still felt the pain. Chiron's pain was so great that he volunteered to give up his immortality and take the place of Prometheus, who had been chained to the top of a mountain to have his liver eaten daily by an eagle. Prometheus' torturer, the eagle, continued its torture on Chiron, so Heracles shot it dead with an arrow. It is generally accepted that the tale was meant to show Heracles as being the recipient of Chiron's surrendered immortality. However, this tale contradicts the fact that Chiron later taught Achilles. The tale of the centaurs sometimes appears in other parts of the twelve labours, as does the freeing of Prometheus.
38
+
39
+ Heracles had visited Chiron to gain advice on how to catch the boar, and Chiron had told him to drive it into thick snow, which sets this labour in mid-winter. Heracles caught the boar, bound it, and carried it back to Eurystheus, who was frightened of it and ducked down in his half-buried storage pithos, begging Heracles to get rid of the beast.
40
+
41
+ The fifth labour was to clean the stables of King Augeas. This assignment was intended to be both humiliating (rather than impressive, as the previous labours had been) and impossible, since the livestock were divinely healthy (and immortal) and therefore produced an enormous quantity of dung. The Augean Stables (/ɔːˈdʒiːən/) had not been cleaned in over 30 years, and over 1,000 cattle lived there. However, Heracles succeeded by re-routing the rivers Alpheus and Peneus to wash out the filth.
42
+
43
+ Before starting on the task, Heracles had asked Augeas for one-tenth of the cattle if he finished the task in one day, and Augeas agreed. But afterwards Augeas refused to honour the agreement on the grounds that Heracles had been ordered to carry out the task by Eurystheus anyway. Heracles claimed his reward in court, and was supported by Augeas' son Phyleus. Augeas banished them both before the court had ruled. Heracles returned, slew Augeas, and gave his kingdom to Phyleus. Heracles then founded the Olympic Games.
44
+ The success of this labour was ultimately discounted as the rushing waters had done the work of cleaning the stables and because Heracles was paid for doing the labour.
45
+ Eurystheus said that Heracles still had seven labours to perform.[11]
46
+
47
+ The sixth labour was to defeat the Stymphalian birds, man-eating birds with beaks made of bronze and sharp metallic feathers they could launch at their victims. They were sacred to Ares, the god of war. Furthermore, their dung was highly toxic. They had migrated to Lake Stymphalia in Arcadia, where they bred quickly and took over the countryside, destroying local crops, fruit trees, and townspeople. Heracles could not go too far into the swamp, for it would not support his weight. Athena, noticing the hero's plight, gave Heracles a rattle which Hephaestus had made especially for the occasion. Heracles shook the rattle and frightened the birds into the air. Heracles then shot many of them with his arrows. The rest flew far away, never to return. The Argonauts would later encounter them.
48
+
49
+ The seventh labour was to capture the Cretan Bull, father of the Minotaur. Heracles sailed to Crete, where King Minos gave Heracles permission to take the bull away and even offered him assistance (which Heracles declined, plausibly because he did not want the labour to be discounted as before).[12] The bull had been wreaking havoc on Crete by uprooting crops and leveling orchard walls. Heracles sneaked up behind the bull and then used his hands to throttle it (stopping before it was killed), and then shipped it back to Tiryns. Eurystheus, who hid in his pithos at first sight of the creature, wanted to sacrifice the bull to Hera, who hated Heracles. She refused the sacrifice because it reflected glory on Heracles. The bull was released and wandered into Marathon, becoming known as the Marathonian Bull.[12] Theseus would later sacrifice the bull to Athena and/or Apollo.
50
+
51
+ As the eighth of his Twelve Labours, also categorised as the second of the Non-Peloponneisan labours,[13] Heracles was sent by King Eurystheus to steal the Mares from Diomedes. The mares’ madness was attributed to their unnatural diet which consisted of the flesh[14] of unsuspecting guests or strangers to the island.[15] Some versions of the myth say that the mares also expelled fire when they breathed.[16] The Mares, which were the terror of Thrace, were kept tethered by iron chains to a bronze manger in the now vanished city of Tirida[17] and were named Podargos (the swift), Lampon (the shining), Xanthos (the yellow) and Deinos (or Deinus, the terrible).[18] Although very similar, there are slight variances in the exact details regarding the mares’ capture.
52
+
53
+ In one version, Heracles brought a number of volunteers to help him capture the giant horses.[17] After overpowering Diomedes’ men, Heracles broke the chains that tethered the horses and drove the mares down to the sea. Unaware that the mares were man-eating and uncontrollable, Heracles left them in the charge of his favored companion, Abderus, while he left to fight Diomedes. Upon his return, Heracles found that the boy had been eaten. As revenge, Heracles fed Diomedes to his own horses and then founded Abdera next to the boy's tomb.[15]
54
+
55
+ In another version, Heracles, who was visiting the island, stayed awake so that he didn't have his throat cut by Diomedes in the night, and cut the chains binding the horses once everyone was asleep. Having scared the horses onto the high ground of a knoll, Heracles quickly dug a trench through the peninsula, filling it with water and thus flooding the low-lying plain. When Diomedes and his men turned to flee, Heracles killed them with an axe (or a club[17]), and fed Diomedes’ body to the horses to calm them.
56
+
57
+ In yet another version, Heracles first captured Diomedes and fed him to the mares before releasing them. Only after realizing that their King was dead did his men, the Bistonians,[15][17] attack Heracles. Upon seeing the mares charging at them, led in a chariot by Abderus, the Bistonians turned and fled.
58
+
59
+ In all versions, eating human flesh made the horses calmer, giving Heracles the opportunity to bind their mouths shut and easily take them back to King Eurystheus, who dedicated the horses to Hera.[19] In some versions, they were allowed to roam freely around Argos, having become permanently calm, but in others, Eurystheus ordered the horses taken to Olympus to be sacrificed to Zeus, but Zeus refused them, and sent wolves, lions, and bears to kill them.[20] Roger Lancelyn Green states in his Tales of the Greek Heroes that the mares’ descendants were used in the Trojan War, and survived even to the time of Alexander the Great.[17][21] After the incident, Eurystheus sent Heracles to bring back Hippolyta's Girdle.
60
+
61
+ Eurystheus' daughter Admete wanted the Belt of Hippolyta, queen of the Amazons, a gift from her father Ares. To please his daughter, Eurystheus ordered Heracles to retrieve the belt as his ninth labour.
62
+
63
+ Taking a band of friends with him, Heracles set sail, stopping at the island of Paros, which was inhabited by some sons of Minos. The sons killed two of Heracles' companions, an act which set Heracles on a rampage. He killed two of the sons of Minos and threatened the other inhabitants until he was offered two men to replace his fallen companions. Heracles agreed and took two of Minos' grandsons, Alcaeus and Sthenelus. They continued their voyage and landed at the court of Lycus, whom Heracles defended in a battle against King Mygdon of Bebryces. After killing King Mygdon, Heracles gave much of the land to his friend Lycus. Lycus called the land Heraclea. The crew then set off for Themiscyra, where Hippolyta lived.
64
+
65
+ All would have gone well for Heracles had it not been for Hera. Hippolyta, impressed with Heracles and his exploits, agreed to give him the belt and would have done so had Hera not disguised herself and walked among the Amazons sowing seeds of distrust. She claimed the strangers were plotting to carry off the queen of the Amazons. Alarmed, the women set off on horseback to confront Heracles. When Heracles saw them, he thought Hippolyta had been plotting such treachery all along and had never meant to hand over the belt, so he killed her, took the belt and returned to Eurystheus.
66
+
67
+ The tenth labour was to obtain the Cattle of the three-bodied giant Geryon. In the fullest account in the Bibliotheca of Pseudo-Apollodorus,[22] Heracles had to go to the island of Erytheia in the far west (sometimes identified with the Hesperides, or with the island which forms the city of Cádiz) to get the cattle. On the way there, he crossed the Libyan desert[23] and became so frustrated at the heat that he shot an arrow at the Sun. The sun-god Helios "in admiration of his courage" gave Heracles the golden cup Helios used to sail across the sea from west to east each night. Heracles rode the cup to Erytheia; Heracles in the cup was a favorite motif on black-figure pottery.[citation needed] Such a magical conveyance undercuts any literal geography for Erytheia, the "red island" of the sunset.
68
+
69
+ When Heracles landed at Erytheia, he was confronted by the two-headed dog Orthrus. With one blow from his olive-wood club, Heracles killed Orthrus. Eurytion the herdsman came to assist Orthrus, but Heracles dealt with him the same way.
70
+
71
+ On hearing the commotion, Geryon sprang into action, carrying three shields and three spears, and wearing three helmets. He attacked Heracles at the River Anthemus, but was slain by one of Heracles' poisoned arrows. Heracles shot so forcefully that the arrow pierced Geryon's forehead, "and Geryon bent his neck over to one side, like a poppy that spoils its delicate shapes, shedding its petals all at once."[24]
72
+
73
+ Heracles then had to herd the cattle back to Eurystheus. In Roman versions of the narrative, Heracles drove the cattle over the Aventine Hill on the future site of Rome. The giant Cacus, who lived there, stole some of the cattle as Heracles slept, making the cattle walk backwards so that they left no trail, a repetition of the trick of the young Hermes. According to some versions, Heracles drove his remaining cattle past the cave, where Cacus had hidden the stolen animals, and they began calling out to each other. In other versions, Cacus' sister Caca told Heracles where he was. Heracles then killed Cacus, and set up an altar on the spot, later the site of Rome's Forum Boarium (the cattle market).
74
+
75
+ To annoy Heracles, Hera sent a gadfly to bite the cattle, irritate them, and scatter them. Within a year, Heracles retrieved them. Hera then sent a flood which raised the level of a river so much that Heracles could not cross with the cattle. He piled stones into the river to make the water shallower. When he finally reached the court of Eurystheus, the cattle were sacrificed to Hera.
76
+
77
+ After Heracles completed the first ten labours, Eurystheus gave him two more, claiming that slaying the Hydra did not count (because Iolaus helped Heracles), neither did cleaning the Augean Stables (either because he was paid for the job or because the rivers did the work).
78
+
79
+ The first additional labour was to steal three of the golden apples from the garden of the Hesperides. Heracles first caught the Old Man of the Sea, the shapeshifting sea god,[25] to learn where the Garden of the Hesperides was located.[26]
80
+
81
+ In some variations, Heracles, either at the start or at the end of this task, meets Antaeus, who was invincible as long as he touched his mother, Gaia, the Earth. Heracles killed Antaeus by holding him aloft and crushing him in a bear hug.[27]
82
+
83
+ Herodotus claims that Heracles stopped in Egypt, where King Busiris decided to make him the yearly sacrifice, but Heracles burst out of his chains.
84
+
85
+ Heracles finally made his way to the garden of the Hesperides, where he encountered Atlas holding up the heavens on his shoulders. Heracles persuaded Atlas to get the three golden apples for him by offering to hold up the heavens in his place for a little while. Atlas could get the apples because, in this version, he was the father of or otherwise related to the Hesperides. This would have voided the labour, like the Hydra and the Augean stables, because Heracles had received help. When Atlas returned, he decided that he did not want to take the heavens back, and instead offered to deliver the apples himself, but Heracles tricked him by agreeing to remain in place of Atlas on the condition that Atlas relieve him temporarily while Heracles adjusted his cloak. Atlas agreed, but Heracles reneged and walked away with the apples. According to an alternative version, Heracles instead slew Ladon, the dragon who guarded the apples. Eurystheus was furious that Heracles had accomplished something that Eurystheus thought could not possibly be done.
86
+
87
+ The twelfth and final labour was the capture of Cerberus, the three-headed, dragon-tailed dog that was the guardian of the gates of the Underworld. To prepare for his descent into the Underworld, Heracles went to Eleusis (or Athens) to be initiated in the Eleusinian Mysteries. He entered the Underworld, and Hermes and Athena were his guides.
88
+
89
+ While in the Underworld, Heracles met Theseus and Pirithous. The two companions had been imprisoned by Hades for attempting to kidnap Persephone. One tradition tells of snakes coiling around their legs, then turning into stone; another that Hades feigned hospitality and prepared a feast inviting them to sit. They unknowingly sat in chairs of forgetfulness and were permanently ensnared. When Heracles had pulled Theseus first from his chair, some of his thigh stuck to it (this explains the supposedly lean thighs of Athenians), but the Earth shook at the attempt to liberate Pirithous, whose desire to have the goddess for himself was so insulting he was doomed to stay behind.
90
+
91
+ Heracles found Hades and asked permission to bring Cerberus to the surface, which Hades agreed to if Heracles could subdue the beast without using weapons. Heracles overpowered Cerberus with his bare hands and slung the beast over his back. He carried Cerberus out of the Underworld through a cavern entrance in the Peloponnese and brought it to Eurystheus, who again fled into his pithos. Eurystheus begged Heracles to return Cerberus to the Underworld, offering in return to release him from any further labours when Cerberus disappeared back to his master.
92
+
93
+ After completing the Twelve Labours, one tradition says Heracles joined Jason and the Argonauts in their quest for the Golden Fleece. However, Herodorus (c. 400 BC) disputed this and denied Heracles ever sailed with the Argonauts. A separate tradition (e.g. Argonautica) has Heracles accompany the Argonauts, but he did not travel with them as far as Colchis.
94
+
95
+ Some ancient Greeks found allegorical meanings of a moral, psychological or philosophical nature in the Labours of Heracles. This trend became more prominent in the Renaissance.[28] For example, Heraclitus the Grammarian wrote in his Homeric Problems:
96
+
97
+ I turn to Heracles. We must not suppose he attained such power in those days as a result of his physical strength. Rather, he was a man of intellect, an initiate in heavenly wisdom, who, as it were, shed light on philosophy, which had been hidden in deep darkness. The most authoritative of the Stoics agree with this account.... The (Erymanthian) boar which he overcame is the common incontinence of men; the (Nemean) lion is the indiscriminate rush towards improper goals; in the same way, by fettering irrational passions he gave rise to the belief that he had fettered the violent (Cretan) bull. He banished cowardice also from the world, in the shape of the hind of Ceryneia. There was another "labor" too, not properly so called, in which he cleared out the mass of dung (from the Augean stables) — in other words, the foulness that disfigures humanity. The (Stymphalian) birds he scattered are the windy hopes that feed our lives; the many-headed hydra that he burned, as it were, with the fires of exhortation, is pleasure, which begins to grow again as soon as it is cut out.
en/5873.html.txt ADDED
@@ -0,0 +1,154 @@
7
+ Uranus is the seventh planet from the Sun. The name "Uranus" is a reference to the Greek god of the sky, Uranus. According to Greek mythology, Uranus was the grandfather of Zeus (Jupiter) and father of Cronus (Saturn). It has the third-largest planetary radius and fourth-largest planetary mass in the Solar System. Uranus is similar in composition to Neptune, and both have bulk chemical compositions which differ from that of the larger gas giants Jupiter and Saturn. For this reason, scientists often classify Uranus and Neptune as "ice giants" to distinguish them from the gas giants. Uranus' atmosphere is similar to Jupiter's and Saturn's in its primary composition of hydrogen and helium, but it contains more "ices" such as water, ammonia, and methane, along with traces of other hydrocarbons.[15] It has the coldest planetary atmosphere in the Solar System, with a minimum temperature of 49 K (−224 °C; −371 °F), and has a complex, layered cloud structure with water thought to make up the lowest clouds and methane the uppermost layer of clouds.[15] The interior of Uranus is mainly composed of ices and rock.[14]
8
+
9
+ Like the other giant planets, Uranus has a ring system, a magnetosphere, and numerous moons. The Uranian system has a unique configuration because its axis of rotation is tilted sideways, nearly into the plane of its solar orbit. Its north and south poles, therefore, lie where most other planets have their equators.[20] In 1986, images from Voyager 2 showed Uranus as an almost featureless planet in visible light, without the cloud bands or storms associated with the other giant planets.[20] Voyager 2 remains the only spacecraft to visit the planet.[21] Observations from Earth have shown seasonal change and increased weather activity as Uranus approached its equinox in 2007. Wind speeds can reach 250 metres per second (900 km/h; 560 mph).[22]
10
+
11
+ Like the classical planets, Uranus is visible to the naked eye, but it was never recognised as a planet by ancient observers because of its dimness and slow orbit.[23] Sir William Herschel first observed Uranus on 13 March 1781, leading to its discovery as a planet, expanding the known boundaries of the Solar System for the first time in history and making Uranus the first planet classified as such with the aid of a telescope.
12
+
13
+
14
+
15
+ Uranus had been observed on many occasions before its recognition as a planet, but it was generally mistaken for a star. Possibly the earliest known observation was by Hipparchos, who in 128 BC might have recorded it as a star for his star catalogue that was later incorporated into Ptolemy's Almagest.[24] The earliest definite sighting was in 1690, when John Flamsteed observed it at least six times, cataloguing it as 34 Tauri. The French astronomer Pierre Charles Le Monnier observed Uranus at least twelve times between 1750 and 1769,[25] including on four consecutive nights.
16
+
17
+ Sir William Herschel observed Uranus on 13 March 1781 from the garden of his house at 19 New King Street in Bath, Somerset, England (now the Herschel Museum of Astronomy),[26] and initially reported it (on 26 April 1781) as a comet.[27] With a telescope, Herschel "engaged in a series of observations on the parallax of the fixed stars."[28]
18
+
19
+ Herschel recorded in his journal: "In the quartile near ζ Tauri ... either [a] Nebulous star or perhaps a comet."[29] On 17 March he noted: "I looked for the Comet or Nebulous Star and found that it is a Comet, for it has changed its place."[30] When he presented his discovery to the Royal Society, he continued to assert that he had found a comet, but also implicitly compared it to a planet:[28]
20
+
21
+ The power I had on when I first saw the comet was 227. From experience I know that the diameters of the fixed stars are not proportionally magnified with higher powers, as planets are; therefore I now put the powers at 460 and 932, and found that the diameter of the comet increased in proportion to the power, as it ought to be, on the supposition of its not being a fixed star, while the diameters of the stars to which I compared it were not increased in the same ratio. Moreover, the comet being magnified much beyond what its light would admit of, appeared hazy and ill-defined with these great powers, while the stars preserved that lustre and distinctness which from many thousand observations I knew they would retain. The sequel has shown that my surmises were well-founded, this proving to be the Comet we have lately observed.[28]
22
+
23
+ Herschel notified the Astronomer Royal Nevil Maskelyne of his discovery and received this flummoxed reply from him on 23 April 1781: "I don't know what to call it. It is as likely to be a regular planet moving in an orbit nearly circular to the sun as a Comet moving in a very eccentric ellipsis. I have not yet seen any coma or tail to it."[31]
24
+
25
+ Although Herschel continued to describe his new object as a comet, other astronomers had already begun to suspect otherwise. Finnish-Swedish astronomer Anders Johan Lexell, working in Russia, was the first to compute the orbit of the new object.[32] Its nearly circular orbit led him to the conclusion that it was a planet rather than a comet. Berlin astronomer Johann Elert Bode described Herschel's discovery as "a moving star that can be deemed a hitherto unknown planet-like object circulating beyond the orbit of Saturn".[33] Bode concluded that its near-circular orbit was more like a planet's than a comet's.[34]
26
+
27
+ The object was soon universally accepted as a new planet. By 1783, Herschel acknowledged this to Royal Society president Joseph Banks: "By the observation of the most eminent Astronomers in Europe it appears that the new star, which I had the honour of pointing out to them in March 1781, is a Primary Planet of our Solar System."[35] In recognition of his achievement, King George III gave Herschel an annual stipend of £200 on condition that he move to Windsor so that the Royal Family could look through his telescopes (equivalent to £24,000 in 2019).[36][37]
28
+
29
+ The name of Uranus references the ancient Greek deity of the sky Uranus (Ancient Greek: Οὐρανός), the father of Cronus (Saturn) and grandfather of Zeus (Jupiter), which in Latin became Ūranus (IPA: [ˈuːranʊs]).[1] It is the only planet whose English name is derived directly from a figure of Greek mythology. The adjectival form of Uranus is "Uranian".[38] The pronunciation of the name Uranus preferred among astronomers is /ˈjʊərənəs/,[2] with stress on the first syllable as in Latin Ūranus, in contrast to /jʊˈreɪnəs/, with stress on the second syllable and a long a, though both are considered acceptable.[f]
30
+
31
+ Consensus on the name was not reached until almost 70 years after the planet's discovery. During the original discussions following discovery, Maskelyne asked Herschel to "do the astronomical world the faver [sic] to give a name to your planet, which is entirely your own, [and] which we are so much obliged to you for the discovery of".[40] In response to Maskelyne's request, Herschel decided to name the object Georgium Sidus (George's Star), or the "Georgian Planet" in honour of his new patron, King George III.[41] He explained this decision in a letter to Joseph Banks:[35]
32
+
33
+ In the fabulous ages of ancient times the appellations of Mercury, Venus, Mars, Jupiter and Saturn were given to the Planets, as being the names of their principal heroes and divinities. In the present more philosophical era it would hardly be allowable to have recourse to the same method and call it Juno, Pallas, Apollo or Minerva, for a name to our new heavenly body. The first consideration of any particular event, or remarkable incident, seems to be its chronology: if in any future age it should be asked, when this last-found Planet was discovered? It would be a very satisfactory answer to say, 'In the reign of King George the Third'.
34
+
35
+ Herschel's proposed name was not popular outside Britain, and alternatives were soon proposed. Astronomer Jérôme Lalande proposed that it be named Herschel in honour of its discoverer.[42] Swedish astronomer Erik Prosperin proposed the name Neptune, which was supported by other astronomers who liked the idea of commemorating the victories of the British Royal Naval fleet in the course of the American Revolutionary War by calling the new planet Neptune George III or even Neptune Great Britain.[32]
36
+
37
+ In a March 1782 treatise, Bode proposed Uranus, the Latinised version of the Greek god of the sky, Ouranos.[43] Bode argued that the name should follow the mythology so as not to stand out as different from the other planets, and that Uranus was an appropriate name as the father of the first generation of the Titans.[43] He also noted the elegance of the name: just as Saturn was the father of Jupiter, the new planet should be named after the father of Saturn.[37][43][44][45] In 1789, Bode's Royal Academy colleague Martin Klaproth named his newly discovered element uranium in support of Bode's choice.[46] Ultimately, Bode's suggestion became the most widely used, and became universal in 1850 when HM Nautical Almanac Office, the final holdout, switched from using Georgium Sidus to Uranus.[44]
38
+
39
+ Uranus has two astronomical symbols. The first to be proposed, ♅,[g] was suggested by Lalande in 1784. In a letter to Herschel, Lalande described it as "un globe surmonté par la première lettre de votre nom" ("a globe surmounted by the first letter of your surname").[42] A later proposal, ⛢,[h] is a hybrid of the symbols for Mars and the Sun because Uranus was the Sky in Greek mythology, which was thought to be dominated by the combined powers of the Sun and Mars.[47]
40
+
41
+ Uranus is called by a variety of translations in other languages. In Chinese, Japanese, Korean, and Vietnamese, its name is literally translated as the "sky king star" (天王星).[48][49][50][51] In Thai, its official name is Dao Yurenat (ดาวยูเรนัส), as in English. Its other name in Thai is Dao Maritayu (ดาวมฤตยู, Star of Mṛtyu), after the Sanskrit word for 'death', Mrtyu (मृत्यु). In Mongolian, its name is Tengeriin Van (Тэнгэрийн ван), translated as 'King of the Sky', reflecting its namesake god's role as the ruler of the heavens. In Hawaiian, its name is Heleʻekala, a loanword for the discoverer Herschel.[52] In Māori, its name is Whērangi.[53][54]
42
+
43
+ Uranus orbits the Sun once every 84 years, taking an average of seven years to pass through each constellation of the zodiac. In 2033, the planet will have made its third complete orbit around the Sun since being discovered in 1781. The planet has returned to the point of its discovery northeast of Zeta Tauri twice since then, in 1862 and 1943, one day later each time as the precession of the equinoxes has shifted it 1° west every 72 years. Uranus will return to this location again in 2030-31. Its average distance from the Sun is roughly 20 AU (3 billion km; 2 billion mi). The difference between its minimum and maximum distance from the Sun is 1.8 AU, larger than that of any other planet, though not as large as that of dwarf planet Pluto.[55] The intensity of sunlight varies inversely with the square of distance, and so on Uranus (at about 20 times the distance from the Sun compared to Earth) it is about 1/400 the intensity of light on Earth.[56] Its orbital elements were first calculated in 1783 by Pierre-Simon Laplace.[57] With time, discrepancies began to appear between the predicted and observed orbits, and in 1841, John Couch Adams first proposed that the differences might be due to the gravitational tug of an unseen planet. In 1845, Urbain Le Verrier began his own independent research into Uranus' orbit. On 23 September 1846, Johann Gottfried Galle located a new planet, later named Neptune, at nearly the position predicted by Le Verrier.[58]
44
+
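The 1/400 figure above is just the inverse-square law evaluated at Uranus' distance; a minimal Python sketch, using the article's rounded value of 20 AU:

    # Sunlight intensity falls off with the square of distance from the Sun.
    distance_au = 20                          # Uranus' approximate mean distance, in AU
    relative_intensity = 1 / distance_au**2   # intensity relative to Earth's (1 AU)
    print(relative_intensity)                 # 0.0025, i.e. about 1/400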
45
+ The rotational period of the interior of Uranus is 17 hours, 14 minutes. As on all the giant planets, its upper atmosphere experiences strong winds in the direction of rotation. At some latitudes, such as about 60 degrees south, visible features of the atmosphere move much faster, making a full rotation in as little as 14 hours.[59]
46
+
47
+ The Uranian axis of rotation is approximately parallel with the plane of the Solar System, with an axial tilt of 97.77° (as defined by prograde rotation). This gives it seasonal changes completely unlike those of the other planets. Near the solstice, one pole faces the Sun continuously and the other faces away. Only a narrow strip around the equator experiences a rapid day–night cycle, but with the Sun low over the horizon. At the other side of Uranus' orbit the orientation of the poles towards the Sun is reversed. Each pole gets around 42 years of continuous sunlight, followed by 42 years of darkness.[60] Near the time of the equinoxes, the Sun faces the equator of Uranus giving a period of day–night cycles similar to those seen on most of the other planets.
48
+
49
+ Uranus reached its most recent equinox on 7 December 2007.[61][62]
50
+
51
+ One result of this axis orientation is that, averaged over the Uranian year, the polar regions of Uranus receive a greater energy input from the Sun than its equatorial regions. Nevertheless, Uranus is hotter at its equator than at its poles. The underlying mechanism that causes this is unknown. The reason for Uranus' unusual axial tilt is also not known with certainty, but the usual speculation is that during the formation of the Solar System, an Earth-sized protoplanet collided with Uranus, causing the skewed orientation.[63] Research by Jacob Kegerreis of Durham University suggests that the tilt resulted from a rock larger than the Earth crashing into the planet 3 to 4 billion years ago.[64]
52
+ Uranus' south pole was pointed almost directly at the Sun at the time of Voyager 2's flyby in 1986. The labelling of this pole as "south" uses the definition currently endorsed by the International Astronomical Union, namely that the north pole of a planet or satellite is the pole that points above the invariable plane of the Solar System, regardless of the direction the planet is spinning.[65][66] A different convention is sometimes used, in which a body's north and south poles are defined according to the right-hand rule in relation to the direction of rotation.[67]
53
+
54
+ The mean apparent magnitude of Uranus is 5.68 with a standard deviation of 0.17, while the extremes are 5.38 and 6.03.[16] This range of brightness is near the limit of naked-eye visibility. Much of the variability depends upon which planetary latitudes are illuminated by the Sun and viewed from the Earth.[68] Its angular diameter is between 3.4 and 3.7 arcseconds, compared with 16 to 20 arcseconds for Saturn and 32 to 45 arcseconds for Jupiter.[69] At opposition, Uranus is visible to the naked eye in dark skies, and becomes an easy target even in urban conditions with binoculars.[6] In larger amateur telescopes with an objective diameter of between 15 and 23 cm, Uranus appears as a pale cyan disk with distinct limb darkening. With a large telescope of 25 cm or wider, cloud patterns, as well as some of the larger satellites, such as Titania and Oberon, may be visible.[70]
55
+
56
+ Uranus' mass is roughly 14.5 times that of Earth, making it the least massive of the giant planets. Its diameter is slightly larger than Neptune's at roughly four times that of Earth. A resulting density of 1.27 g/cm3 makes Uranus the second least dense planet, after Saturn.[9][10] This value indicates that it is made primarily of various ices, such as water, ammonia, and methane.[14] The total mass of ice in Uranus' interior is not precisely known, because different figures emerge depending on the model chosen; it must be between 9.3 and 13.5 Earth masses.[14][71] Hydrogen and helium constitute only a small part of the total, with between 0.5 and 1.5 Earth masses.[14] The remainder of the non-ice mass (0.5 to 3.7 Earth masses) is accounted for by rocky material.[14]
57
+
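The 1.27 g/cm3 figure is consistent with the mass and radii quoted in this article; a minimal Python sketch, assuming an Earth mass of 5.972e24 kg and treating Uranus as an oblate spheroid with the equatorial and polar radii given below:

    import math

    mass_kg = 14.5 * 5.972e24                 # ~14.5 Earth masses
    a = 25_559e3                              # equatorial radius, metres
    c = 24_973e3                              # polar radius, metres
    volume_m3 = (4 / 3) * math.pi * a**2 * c  # volume of an oblate spheroid
    density_g_cm3 = mass_kg / volume_m3 / 1000
    print(round(density_g_cm3, 2))            # ~1.27, matching the quoted density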
58
+ The standard model of Uranus' structure is that it consists of three layers: a rocky (silicate/iron–nickel) core in the centre, an icy mantle in the middle and an outer gaseous hydrogen/helium envelope.[14][72] The core is relatively small, with a mass of only 0.55 Earth masses and a radius less than 20% of Uranus'; the mantle comprises its bulk, with around 13.4 Earth masses, and the upper atmosphere is relatively insubstantial, weighing about 0.5 Earth masses and extending for the last 20% of Uranus' radius.[14][72] Uranus' core density is around 9 g/cm3, with a pressure in the centre of 8 million bars (800 GPa) and a temperature of about 5000 K.[71][72] The ice mantle is not in fact composed of ice in the conventional sense, but of a hot and dense fluid consisting of water, ammonia and other volatiles.[14][72] This fluid, which has a high electrical conductivity, is sometimes called a water–ammonia ocean.[73]
59
+
60
+ The extreme pressure and temperature deep within Uranus may break up the methane molecules, with the carbon atoms condensing into crystals of diamond that rain down through the mantle like hailstones.[74][75][76] Very-high-pressure experiments at the Lawrence Livermore National Laboratory suggest that the base of the mantle may comprise an ocean of liquid diamond, with floating solid 'diamond-bergs'.[77][78] Scientists also believe that rainfalls of solid diamonds occur on Uranus, as well as on Jupiter, Saturn, and Neptune.[79][80]
61
+
62
+ The bulk compositions of Uranus and Neptune are different from those of Jupiter and Saturn, with ice dominating over gases, hence justifying their separate classification as ice giants. There may be a layer of ionic water where the water molecules break down into a soup of hydrogen and oxygen ions, and deeper down superionic water in which the oxygen crystallises but the hydrogen ions move freely within the oxygen lattice.[81]
63
+
64
+ Although the model considered above is reasonably standard, it is not unique; other models also satisfy observations. For instance, if substantial amounts of hydrogen and rocky material are mixed in the ice mantle, the total mass of ices in the interior will be lower, and, correspondingly, the total mass of rocks and hydrogen will be higher. Presently available data does not allow a scientific determination of which model is correct.[71] The fluid interior structure of Uranus means that it has no solid surface. The gaseous atmosphere gradually transitions into the internal liquid layers.[14] For the sake of convenience, a revolving oblate spheroid set at the point at which atmospheric pressure equals 1 bar (100 kPa) is conditionally designated as a "surface". It has equatorial and polar radii of 25,559 ± 4 km (15,881.6 ± 2.5 mi) and 24,973 ± 20 km (15,518 ± 12 mi), respectively.[9] This surface is used throughout this article as a zero point for altitudes.
65
+
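From the two radii just quoted, the flattening of this 1 bar reference spheroid follows directly; a short sketch:

    a, c = 25_559.0, 24_973.0    # equatorial and polar radii at the 1 bar level, km
    flattening = (a - c) / a
    print(round(flattening, 4))  # ~0.0229: the disk is about 2.3% wider than it is tall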
66
+ Uranus' internal heat appears markedly lower than that of the other giant planets; in astronomical terms, it has a low thermal flux.[22][82] Why Uranus' internal temperature is so low is still not understood. Neptune, which is Uranus' near twin in size and composition, radiates 2.61 times as much energy into space as it receives from the Sun,[22] but Uranus radiates hardly any excess heat at all. The total power radiated by Uranus in the far infrared (i.e. heat) part of the spectrum is 1.06±0.08 times the solar energy absorbed in its atmosphere.[15][83] Uranus' heat flux is only 0.042±0.047 W/m2, which is lower than the internal heat flux of Earth of about 0.075 W/m2.[83] The lowest temperature recorded in Uranus' tropopause is 49 K (−224.2 °C; −371.5 °F), making Uranus the coldest planet in the Solar System.[15][83]
67
+
68
+ One of the hypotheses for this discrepancy suggests that when Uranus was hit by a supermassive impactor, which caused it to expel most of its primordial heat, it was left with a depleted core temperature.[84] This impact hypothesis is also used in some attempts to explain the planet's axial tilt. Another hypothesis is that some form of barrier exists in Uranus' upper layers that prevents the core's heat from reaching the surface.[14] For example, convection may take place in a set of compositionally different layers, which may inhibit the upward heat transport;[15][83] perhaps double diffusive convection is a limiting factor.[14]
69
+
70
+ Although there is no well-defined solid surface within Uranus' interior, the outermost part of Uranus' gaseous envelope that is accessible to remote sensing is called its atmosphere.[15] Remote-sensing capability extends down to roughly 300 km below the 1 bar (100 kPa) level, with a corresponding pressure around 100 bar (10 MPa) and temperature of 320 K (47 °C; 116 °F).[86] The tenuous thermosphere extends over two planetary radii from the nominal surface, which is defined to lie at a pressure of 1 bar.[87] The Uranian atmosphere can be divided into three layers: the troposphere, between altitudes of −300 and 50 km (−186 and 31 mi) and pressures from 100 to 0.1 bar (10 MPa to 10 kPa); the stratosphere, spanning altitudes between 50 and 4,000 km (31 and 2,485 mi) and pressures of between 0.1 and 10−10 bar (10 kPa to 10 µPa); and the thermosphere extending from 4,000 km to as high as 50,000 km from the surface.[15] There is no mesosphere.
71
+
72
+ The composition of Uranus' atmosphere is different from its bulk, consisting mainly of molecular hydrogen and helium.[15] The helium molar fraction, i.e. the number of helium atoms per molecule of gas, is 0.15±0.03[19] in the upper troposphere, which corresponds to a mass fraction 0.26±0.05.[15][83] This value is close to the protosolar helium mass fraction of 0.275±0.01,[88] indicating that helium has not settled in its centre as it has in the gas giants.[15] The third-most-abundant component of Uranus' atmosphere is methane (CH4).[15] Methane has prominent absorption bands in the visible and near-infrared (IR), making Uranus aquamarine or cyan in colour.[15] Methane molecules account for 2.3% of the atmosphere by molar fraction below the methane cloud deck at the pressure level of 1.3 bar (130 kPa); this represents about 20 to 30 times the carbon abundance found in the Sun.[15][18][89] The mixing ratio[i] is much lower in the upper atmosphere due to its extremely low temperature, which lowers the saturation level and causes excess methane to freeze out.[90] The abundances of less volatile compounds such as ammonia, water, and hydrogen sulfide in the deep atmosphere are poorly known. They are probably also higher than solar values.[15][91] Along with methane, trace amounts of various hydrocarbons are found in the stratosphere of Uranus, which are thought to be produced from methane by photolysis induced by the solar ultraviolet (UV) radiation.[92] They include ethane (C2H6), acetylene (C2H2), methylacetylene (CH3C2H), and diacetylene (C2HC2H).[90][93][94] Spectroscopy has also uncovered traces of water vapour, carbon monoxide and carbon dioxide in the upper atmosphere, which can only originate from an external source such as infalling dust and comets.[93][94][95]
73
+
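The quoted molar and mass fractions for helium are mutually consistent; a minimal sketch, assuming the gas is essentially a mixture of H2 (molecular mass 2) and He (atomic mass 4):

    x_he = 0.15            # helium molar fraction (He per molecule of gas)
    m_h2, m_he = 2.0, 4.0  # approximate molecular masses, amu
    y_he = x_he * m_he / (x_he * m_he + (1 - x_he) * m_h2)
    print(round(y_he, 2))  # ~0.26, the quoted helium mass fraction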
74
+ The troposphere is the lowest and densest part of the atmosphere and is characterised by a decrease in temperature with altitude.[15] The temperature falls from about 320 K (47 °C; 116 °F) at the base of the nominal troposphere at −300 km to 53 K (−220 °C; −364 °F) at 50 km.[86][89] The temperatures in the coldest upper region of the troposphere (the tropopause) actually vary in the range between 49 and 57 K (−224 and −216 °C; −371 and −357 °F) depending on planetary latitude.[15][82] The tropopause region is responsible for the vast majority of Uranus' thermal far infrared emissions, thus determining its effective temperature of 59.1 ± 0.3 K (−214.1 ± 0.3 °C; −353.3 ± 0.5 °F).[82][83]
75
+
76
+ The troposphere is thought to have a highly complex cloud structure; water clouds are hypothesised to lie in the pressure range of 50 to 100 bar (5 to 10 MPa), ammonium hydrosulfide clouds in the range of 20 to 40 bar (2 to 4 MPa), ammonia or hydrogen sulfide clouds at between 3 and 10 bar (0.3 and 1 MPa) and finally directly detected thin methane clouds at 1 to 2 bar (0.1 to 0.2 MPa).[15][18][86][96] The troposphere is a dynamic part of the atmosphere, exhibiting strong winds, bright clouds and seasonal changes.[22]
77
+
78
+ The middle layer of the Uranian atmosphere is the stratosphere, where temperature generally increases with altitude from 53 K (−220 °C; −364 °F) in the tropopause to between 800 and 850 K (527 and 577 °C; 980 and 1,070 °F) at the base of the thermosphere.[87] The heating of the stratosphere is caused by absorption of solar UV and IR radiation by methane and other hydrocarbons,[98] which form in this part of the atmosphere as a result of methane photolysis.[92] Heat is also conducted from the hot thermosphere.[98] The hydrocarbons occupy a relatively narrow layer at altitudes of between 100 and 300 km corresponding to a pressure range of 1000 to 10 Pa and temperatures of between 75 and 170 K (−198 and −103 °C; −325 and −154 °F).[90][93] The most abundant hydrocarbons are methane, acetylene and ethane with mixing ratios of around 10−7 relative to hydrogen. The mixing ratio of carbon monoxide is similar at these altitudes.[90][93][95] Heavier hydrocarbons and carbon dioxide have mixing ratios three orders of magnitude lower.[93] The abundance ratio of water is around 7×10−9.[94] Ethane and acetylene tend to condense in the colder lower part of stratosphere and tropopause (below 10 mBar level) forming haze layers,[92] which may be partly responsible for the bland appearance of Uranus. The concentration of hydrocarbons in the Uranian stratosphere above the haze is significantly lower than in the stratospheres of the other giant planets.[90][99]
79
+
80
+ The outermost layer of the Uranian atmosphere is the thermosphere and corona, which has a uniform temperature around 800 to 850 K.[15][99] The heat sources necessary to sustain such a high level are not understood, as neither the solar UV nor the auroral activity can provide the necessary energy to maintain these temperatures. The weak cooling efficiency due to the lack of hydrocarbons in the stratosphere above 0.1 mBar pressure level may contribute too.[87][99] In addition to molecular hydrogen, the thermosphere-corona contains many free hydrogen atoms. Their small mass and high temperatures explain why the corona extends as far as 50,000 km (31,000 mi), or two Uranian radii, from its surface.[87][99] This extended corona is a unique feature of Uranus.[99] Its effects include a drag on small particles orbiting Uranus, causing a general depletion of dust in the Uranian rings.[87] The Uranian thermosphere, together with the upper part of the stratosphere, corresponds to the ionosphere of Uranus.[89] Observations show that the ionosphere occupies altitudes from 2,000 to 10,000 km (1,200 to 6,200 mi).[89] The Uranian ionosphere is denser than that of either Saturn or Neptune, which may arise from the low concentration of hydrocarbons in the stratosphere.[99][100] The ionosphere is mainly sustained by solar UV radiation and its density depends on the solar activity.[101] Auroral activity is insignificant as compared to Jupiter and Saturn.[99][102]
81
+
82
+ Temperature profile of the Uranian troposphere and lower stratosphere. Cloud and haze layers are also indicated.
83
+
84
+ Zonal wind speeds on Uranus. Shaded areas show the southern collar and its future northern counterpart. The red curve is a symmetrical fit to the data.
85
+
86
+ Before the arrival of Voyager 2, no measurements of the Uranian magnetosphere had been taken, so its nature remained a mystery. Before 1986, scientists had expected the magnetic field of Uranus to be in line with the solar wind, because it would then align with Uranus' poles that lie in the ecliptic.[103]
87
+
88
+ Voyager's observations revealed that Uranus' magnetic field is peculiar, both because it does not originate from its geometric centre, and because it is tilted at 59° from the axis of rotation.[103][104] In fact the magnetic dipole is shifted from Uranus' centre towards the south rotational pole by as much as one third of the planetary radius.[103] This unusual geometry results in a highly asymmetric magnetosphere, where the magnetic field strength on the surface in the southern hemisphere can be as low as 0.1 gauss (10 µT), whereas in the northern hemisphere it can be as high as 1.1 gauss (110 µT).[103] The average field at the surface is 0.23 gauss (23 µT).[103] Studies of Voyager 2 data in 2017 suggest that this asymmetry causes Uranus' magnetosphere to connect with the solar wind once a Uranian day, opening the planet to the Sun's particles.[105] In comparison, the magnetic field of Earth is roughly as strong at either pole, and its "magnetic equator" is roughly parallel with its geographical equator.[104] The dipole moment of Uranus is 50 times that of Earth.[103][104] Neptune has a similarly displaced and tilted magnetic field, suggesting that this may be a common feature of ice giants.[104] One hypothesis is that, unlike the magnetic fields of the terrestrial and gas giants, which are generated within their cores, the ice giants' magnetic fields are generated by motion at relatively shallow depths, for instance, in the water–ammonia ocean.[73][106] Another possible explanation for the magnetosphere's alignment is that there are oceans of liquid diamond in Uranus' interior that would deter the magnetic field.[77]
+
+ Despite its curious alignment, in other respects the Uranian magnetosphere is like those of other planets: it has a bow shock at about 23 Uranian radii ahead of it, a magnetopause at 18 Uranian radii, a fully developed magnetotail, and radiation belts.[103][104][107] Overall, the structure of Uranus' magnetosphere is different from Jupiter's and more similar to Saturn's.[103][104] Uranus' magnetotail trails behind it into space for millions of kilometres and is twisted by its sideways rotation into a long corkscrew.[103][108]
+
+ Uranus' magnetosphere contains charged particles: mainly protons and electrons, with a small amount of H2+ ions.[104][107] Many of these particles probably derive from the thermosphere.[107] The ion and electron energies can be as high as 4 and 1.2 megaelectronvolts, respectively.[107] The density of low-energy (below 1 kiloelectronvolt) ions in the inner magnetosphere is about 2 cm−3.[109] The particle population is strongly affected by the Uranian moons, which sweep through the magnetosphere, leaving noticeable gaps.[107] The particle flux is high enough to cause darkening or space weathering of their surfaces on an astronomically rapid timescale of 100,000 years.[107] This may be the cause of the uniformly dark colouration of the Uranian satellites and rings.[110] Uranus has relatively well developed aurorae, which are seen as bright arcs around both magnetic poles.[99] Unlike Jupiter's, Uranus' aurorae seem to be insignificant for the energy balance of the planetary thermosphere.[102]
+
+ In March 2020, NASA astronomers reported the detection of a large atmospheric magnetic bubble, also known as a plasmoid, released into outer space from the planet Uranus, after reevaluating old data recorded by the Voyager 2 space probe during a flyby of the planet in 1986.[111][112]
+
+ At ultraviolet and visible wavelengths, Uranus' atmosphere is bland in comparison to the other giant planets, even to Neptune, which it otherwise closely resembles.[22] When Voyager 2 flew by Uranus in 1986, it observed a total of ten cloud features across the entire planet.[20][113] One proposed explanation for this dearth of features is that Uranus' internal heat appears markedly lower than that of the other giant planets. The lowest temperature recorded in Uranus' tropopause is 49 K (−224 °C; −371 °F), making Uranus the coldest planet in the Solar System.[15][83]
+
+ In 1986, Voyager 2 found that the visible southern hemisphere of Uranus can be subdivided into two regions: a bright polar cap and dark equatorial bands.[20] Their boundary is located at about −45° of latitude. A narrow band straddling the latitudinal range from −45 to −50° is the brightest large feature on its visible surface.[20][114] It is called a southern "collar". The cap and collar are thought to be a dense region of methane clouds located within the pressure range of 1.3 to 2 bar (see above).[115] Besides the large-scale banded structure, Voyager 2 observed ten small bright clouds, most lying several degrees north of the collar.[20] In all other respects Uranus looked like a dynamically dead planet in 1986. Voyager 2 arrived during the height of Uranus' southern summer and could not observe the northern hemisphere. At the beginning of the 21st century, when the northern polar region came into view, the Hubble Space Telescope (HST) and Keck telescope initially observed neither a collar nor a polar cap in the northern hemisphere.[114] So Uranus appeared to be asymmetric: bright near the south pole and uniformly dark in the region north of the southern collar.[114] In 2007, when Uranus passed its equinox, the southern collar almost disappeared, and a faint northern collar emerged near 45° of latitude.[116]
+
+ In the 1990s, the number of observed bright cloud features grew considerably, partly because new high-resolution imaging techniques became available.[22] Most were found in the northern hemisphere as it started to become visible.[22] An early explanation (that bright clouds are easier to identify in the planet's dark part, whereas in the southern hemisphere the bright collar masks them) was shown to be incorrect.[117][118] Nevertheless, there are differences between the clouds of each hemisphere. The northern clouds are smaller, sharper and brighter.[118] They appear to lie at a higher altitude.[118] The lifetime of clouds spans several orders of magnitude. Some small clouds live for hours; at least one southern cloud may have persisted since the Voyager 2 flyby.[22][113] Recent observations have also shown that cloud features on Uranus have a lot in common with those on Neptune.[22] For example, the dark spots common on Neptune had never been observed on Uranus before 2006, when the first such feature, dubbed Uranus Dark Spot, was imaged.[119] The speculation is that Uranus is becoming more Neptune-like during its equinoctial season.[120]
+
+ The tracking of numerous cloud features allowed determination of zonal winds blowing in the upper troposphere of Uranus.[22] At the equator winds are retrograde, which means that they blow in the reverse direction to the planetary rotation. Their speeds are from −360 to −180 km/h (−220 to −110 mph).[22][114] Wind speeds increase with the distance from the equator, reaching zero values near ±20° latitude, where the troposphere's temperature minimum is located.[22][82] Closer to the poles, the winds shift to a prograde direction, flowing with Uranus' rotation. Wind speeds continue to increase reaching maxima at ±60° latitude before falling to zero at the poles.[22] Wind speeds at −40° latitude range from 540 to 720 km/h (340 to 450 mph). Because the collar obscures all clouds below that parallel, speeds between it and the southern pole are impossible to measure.[22] In contrast, in the northern hemisphere maximum speeds as high as 860 km/h (540 mph) are observed near +50° latitude.[22][114][121]
+
+ For a short period from March to May 2004, large clouds appeared in the Uranian atmosphere, giving it a Neptune-like appearance.[118][122] Observations included record-breaking wind speeds of 820 km/h (510 mph) and a persistent thunderstorm referred to as "Fourth of July fireworks".[113] On 23 August 2006, researchers at the Space Science Institute (Boulder, Colorado) and the University of Wisconsin observed a dark spot on Uranus' surface, giving scientists more insight into Uranus' atmospheric activity.[119] Why this sudden upsurge in activity occurred is not fully known, but it appears that Uranus' extreme axial tilt results in extreme seasonal variations in its weather.[62][120] Determining the nature of this seasonal variation is difficult because good data on Uranus' atmosphere have existed for less than 84 years, or one full Uranian year. Photometry over the course of half a Uranian year (beginning in the 1950s) has shown regular variation in the brightness in two spectral bands, with maxima occurring at the solstices and minima occurring at the equinoxes.[123] A similar periodic variation, with maxima at the solstices, has been noted in microwave measurements of the deep troposphere begun in the 1960s.[124] Stratospheric temperature measurements beginning in the 1970s also showed maximum values near the 1986 solstice.[98] The majority of this variability is thought to occur owing to changes in the viewing geometry.[117]
+
+ There are some indications that physical seasonal changes are happening on Uranus. Although Uranus is known to have a bright south polar region, the north pole is fairly dim, which is incompatible with the model of seasonal change outlined above.[120] During its previous northern solstice in 1944, Uranus displayed elevated levels of brightness, which suggests that the north pole was not always so dim.[123] This information implies that the visible pole brightens some time before the solstice and darkens after the equinox.[120] Detailed analysis of the visible and microwave data revealed that the periodic changes of brightness are not completely symmetrical around the solstices, which also indicates a change in the meridional albedo patterns.[120] In the 1990s, as Uranus moved away from its solstice, Hubble and ground-based telescopes revealed that the south polar cap darkened noticeably (except the southern collar, which remained bright),[115] whereas the northern hemisphere demonstrated increasing activity,[113] such as cloud formations and stronger winds, bolstering expectations that it should brighten soon.[118] This indeed happened in 2007 when it passed an equinox: a faint northern polar collar arose, and the southern collar became nearly invisible, although the zonal wind profile remained slightly asymmetric, with northern winds being somewhat slower than southern.[116]
+
+ The mechanism of these physical changes is still not clear.[120] Near the summer and winter solstices, Uranus' hemispheres lie alternately either in full glare of the Sun's rays or facing deep space. The brightening of the sunlit hemisphere is thought to result from the local thickening of the methane clouds and haze layers located in the troposphere.[115] The bright collar at −45° latitude is also connected with methane clouds.[115] Other changes in the southern polar region can be explained by changes in the lower cloud layers.[115] The variation of the microwave emission from Uranus is probably caused by changes in the deep tropospheric circulation, because thick polar clouds and haze may inhibit convection.[125] Now that the spring and autumn equinoxes are arriving on Uranus, the dynamics are changing and convection can occur again.[113][125]
+
+ Many argue that the differences between the ice giants and the gas giants extend to their formation.[126][127] The Solar System is hypothesised to have formed from a giant rotating ball of gas and dust known as the presolar nebula. Much of the nebula's gas, primarily hydrogen and helium, formed the Sun, and the dust grains collected together to form the first protoplanets. As the planets grew, some of them eventually accreted enough matter for their gravity to hold on to the nebula's leftover gas.[126][127] The more gas they held onto, the larger they became; the larger they became, the more gas they held onto until a critical point was reached, and their size began to increase exponentially. The ice giants, with only a few Earth masses of nebular gas, never reached that critical point.[126][127][128] Recent simulations of planetary migration have suggested that both ice giants formed closer to the Sun than their present positions, and moved outwards after formation (the Nice model).[126]
+
+ Uranus has 27 known natural satellites.[128] The names of these satellites are chosen from characters in the works of Shakespeare and Alexander Pope.[72][129] The five main satellites are Miranda, Ariel, Umbriel, Titania, and Oberon.[72] The Uranian satellite system is the least massive among those of the giant planets; the combined mass of the five major satellites would be less than half that of Triton (the largest moon of Neptune) alone.[10] The largest of Uranus' satellites, Titania, has a radius of only 788.9 km (490.2 mi), or less than half that of the Moon, but slightly more than Rhea, the second-largest satellite of Saturn, making Titania the eighth-largest moon in the Solar System. Uranus' satellites have relatively low albedos, ranging from 0.20 for Umbriel to 0.35 for Ariel (in green light).[20] They are ice–rock conglomerates composed of roughly 50% ice and 50% rock. The ice may include ammonia and carbon dioxide.[110][130]
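+
+ The Triton comparison can be verified with rounded reference masses (these figures are not given in the article; they are standard values quoted here only as a cross-check):
+
+ # approximate satellite masses in units of 10^21 kg (standard rounded values)
+ moons = {"Miranda": 0.066, "Ariel": 1.25, "Umbriel": 1.28, "Titania": 3.4, "Oberon": 3.08}
+ triton = 21.4
+ total = sum(moons.values())
+ print(total, total < triton / 2)   # -> about 9.1, True: less than half of Triton's mass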
+
+ Among the Uranian satellites, Ariel appears to have the youngest surface with the fewest impact craters and Umbriel's the oldest.[20][110] Miranda has fault canyons 20 km (12 mi) deep, terraced layers, and a chaotic variation in surface ages and features.[20] Miranda's past geologic activity is thought to have been driven by tidal heating at a time when its orbit was more eccentric than currently, probably as a result of a former 3:1 orbital resonance with Umbriel.[131] Extensional processes associated with upwelling diapirs are the likely origin of Miranda's 'racetrack'-like coronae.[132][133] Ariel is thought to have once been held in a 4:1 resonance with Titania.[134]
+
+ Uranus has at least one horseshoe orbiter, 83982 Crantor, which occupies the Sun–Uranus L3 Lagrangian point, a gravitationally unstable region at 180° in its orbit.[135][136] Crantor moves inside Uranus' co-orbital region on a complex, temporary horseshoe orbit.
+ 2010 EU65 is also a promising Uranus horseshoe librator candidate.[136]
+
+ The Uranian rings are composed of extremely dark particles, which vary in size from micrometres to a fraction of a metre.[20] Thirteen distinct rings are presently known, the brightest being the ε ring. All except two rings of Uranus are extremely narrow – they are usually a few kilometres wide. The rings are probably quite young; dynamical considerations indicate that they did not form with Uranus. The matter in the rings may once have been part of a moon (or moons) that was shattered by high-speed impacts. Of the numerous pieces of debris that formed as a result of those impacts, only a few particles survived, in stable zones corresponding to the locations of the present rings.[110][137]
+
+ William Herschel described a possible ring around Uranus in 1789. This sighting is generally considered doubtful, because the rings are quite faint, and in the two following centuries none were noted by other observers. Still, Herschel made an accurate description of the epsilon ring's size, its angle relative to Earth, its red colour, and its apparent changes as Uranus travelled around the Sun.[138][139] The ring system was definitively discovered on 10 March 1977 by James L. Elliot, Edward W. Dunham, and Jessica Mink using the Kuiper Airborne Observatory. The discovery was serendipitous; they planned to use the occultation of the star SAO 158687 (also known as HD 128598) by Uranus to study its atmosphere. When their observations were analysed, they found that the star had disappeared briefly from view five times both before and after it disappeared behind Uranus. They concluded that there must be a ring system around Uranus.[140] Later they detected four additional rings.[140] The rings were directly imaged when Voyager 2 passed Uranus in 1986.[20] Voyager 2 also discovered two additional faint rings, bringing the total number to eleven.[20]
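+
+ The reasoning behind the 1977 identification can be sketched in a few lines: occultations by rings produce brief dips in the star's light at times that mirror each other on either side of the planetary occultation. A toy example in Python (the dip times below are invented for illustration, not the actual 1977 photometry):
+
+ # times of brightness dips, in minutes relative to mid-occultation (invented values)
+ pre_dips  = [-42.0, -38.5, -35.1, -30.2, -25.7]
+ post_dips = [ 25.6,  30.3,  35.0,  38.4,  42.1]
+ pairs = zip(sorted(pre_dips, key=abs), sorted(post_dips, key=abs))
+ symmetric = all(abs(abs(a) - abs(b)) < 0.5 for a, b in pairs)
+ print(symmetric)   # -> True: each pre-occultation dip mirrors a post-occultation one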
+
+ In December 2005, the Hubble Space Telescope detected a pair of previously unknown rings. The largest is located twice as far from Uranus as the previously known rings. These new rings are so far from Uranus that they are called the "outer" ring system. Hubble also spotted two small satellites, one of which, Mab, shares its orbit with the outermost newly discovered ring. The new rings bring the total number of Uranian rings to 13.[141] In April 2006, images of the new rings from the Keck Observatory yielded the colours of the outer rings: the outermost is blue and the other one red.[142][143]
+ One hypothesis concerning the outer ring's blue colour is that it is composed of minute particles of water ice from the surface of Mab that are small enough to scatter blue light.[142][144] In contrast, Uranus' inner rings appear grey.[142]
+
+ Animation of the discovery occultation in 1977.
+
+ Uranus has a complicated planetary ring system, which was the second such system to be discovered in the Solar System after Saturn's.[137]
+
+ Uranus' aurorae against its equatorial rings, imaged by the Hubble telescope. Unlike the aurorae of Earth and Jupiter, those of Uranus are not in line with its poles, due to its lopsided magnetic field.
+
+ In 1986, NASA's Voyager 2 interplanetary probe encountered Uranus. This flyby remains the only investigation of Uranus carried out from a short distance and no other visits are planned. Launched in 1977, Voyager 2 made its closest approach to Uranus on 24 January 1986, coming within 81,500 km (50,600 mi) of the cloudtops, before continuing its journey to Neptune. The spacecraft studied the structure and chemical composition of Uranus' atmosphere,[89] including its unique weather, caused by its axial tilt of 97.77°. It made the first detailed investigations of its five largest moons and discovered 10 new ones. It examined all nine of the system's known rings and discovered two more.[20][110][145] It also studied the magnetic field, its irregular structure, its tilt and its unique corkscrew magnetotail caused by Uranus' sideways orientation.[103]
+
+ Voyager 1 was unable to visit Uranus because investigation of Saturn's moon Titan was considered a priority. The trajectory required for the Titan flyby took Voyager 1 out of the plane of the ecliptic, ending its planetary science mission.[146]:118
+
+ The possibility of sending the Cassini spacecraft from Saturn to Uranus was evaluated during a mission extension planning phase in 2009, but was ultimately rejected in favour of destroying it in the Saturnian atmosphere.[147] It would have taken about twenty years to get to the Uranian system after departing Saturn.[147] A Uranus orbiter and probe was recommended by the 2013–2022 Planetary Science Decadal Survey published in 2011; the proposal envisages launch during 2020–2023 and a 13-year cruise to Uranus.[148] A Uranus entry probe could use Pioneer Venus Multiprobe heritage and descend to 1–5 atmospheres.[148] The ESA evaluated a "medium-class" mission called Uranus Pathfinder.[149] A New Frontiers Uranus Orbiter has been evaluated and recommended in the study The Case for a Uranus Orbiter.[150] Such a mission is aided by the ease with which a relatively large mass can be sent to the system: over 1,500 kg with an Atlas 521 and a 12-year journey.[151] For more concepts see Proposed Uranus missions.
+
+ In astrology, the planet Uranus is the ruling planet of Aquarius. Because Uranus is cyan and associated with electricity, the colour electric blue, which is close to cyan, is associated with the sign Aquarius[152] (see Uranus in astrology).
+
+ The chemical element uranium, discovered in 1789 by the German chemist Martin Heinrich Klaproth, was named after the newly discovered planet Uranus.[153]
+
+ "Uranus, the Magician" is a movement in Gustav Holst's orchestral suite The Planets, written between 1914 and 1916.
+
+ Operation Uranus was the successful military operation in World War II by the Red Army to take back Stalingrad and marked the turning point in the land war against the Wehrmacht.
+
+ The lines "Then felt I like some watcher of the skies/When a new planet swims into his ken", from John Keats's "On First Looking into Chapman's Homer", are a reference to Herschel's discovery of Uranus.[154]
+
+ Many references to Uranus in English language popular culture and news involve humour about one pronunciation of its name resembling that of the phrase "your anus".[155]
+
+ Already in the treatise read before the local Natural History Society on 12 March 1782, I proposed the name of Saturn's father, namely Uranos, or, as it is more usual with the Latin ending, Uranus; and I have since had the pleasure of seeing that various astronomers and mathematicians have adopted or approved this designation in their writings or in letters to me. In my view, this choice must follow the mythology from which the ancient names of the other planets were borrowed; for in the series of those previously known, the name of a planet taken from a remarkable person or event of modern times would stand out conspicuously. Diodorus of Sicily tells the story of the Atlantes, an ancient people that inhabited one of the most fertile regions of Africa and regarded the sea coasts of its country as the homeland of the gods. Uranus was their first king, the founder of their civilised life and the inventor of many useful arts. At the same time he is also described as a diligent and skilful astronomer of antiquity ... What is more: Uranus was the father of Saturn and of Atlas, just as the former was the father of Jupiter.
en/5874.html.txt ADDED
@@ -0,0 +1,154 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
+
+
+ Uranus is the seventh planet from the Sun. The name "Uranus" is a reference to the Greek god of the sky, Uranus. According to Greek mythology, Uranus was the grandfather of Zeus (Jupiter) and father of Cronus (Saturn). It has the third-largest planetary radius and fourth-largest planetary mass in the Solar System. Uranus is similar in composition to Neptune, and both have bulk chemical compositions which differ from those of the larger gas giants Jupiter and Saturn. For this reason, scientists often classify Uranus and Neptune as "ice giants" to distinguish them from the gas giants. Uranus' atmosphere is similar to Jupiter's and Saturn's in its primary composition of hydrogen and helium, but it contains more "ices" such as water, ammonia, and methane, along with traces of other hydrocarbons.[15] It has the coldest planetary atmosphere in the Solar System, with a minimum temperature of 49 K (−224 °C; −371 °F), and has a complex, layered cloud structure with water thought to make up the lowest clouds and methane the uppermost layer of clouds.[15] The interior of Uranus is mainly composed of ices and rock.[14]
+
+ Like the other giant planets, Uranus has a ring system, a magnetosphere, and numerous moons. The Uranian system has a unique configuration because its axis of rotation is tilted sideways, nearly into the plane of its solar orbit. Its north and south poles, therefore, lie where most other planets have their equators.[20] In 1986, images from Voyager 2 showed Uranus as an almost featureless planet in visible light, without the cloud bands or storms associated with the other giant planets.[20] Voyager 2 remains the only spacecraft to visit the planet.[21] Observations from Earth have shown seasonal change and increased weather activity as Uranus approached its equinox in 2007. Wind speeds can reach 250 metres per second (900 km/h; 560 mph).[22]
+
+ Like the classical planets, Uranus is visible to the naked eye, but it was never recognised as a planet by ancient observers because of its dimness and slow orbit.[23] Sir William Herschel first observed Uranus on 13 March 1781, leading to its discovery as a planet, expanding the known boundaries of the Solar System for the first time in history and making Uranus the first planet classified as such with the aid of a telescope.
+
+
+
+ Uranus had been observed on many occasions before its recognition as a planet, but it was generally mistaken for a star. Possibly the earliest known observation was by Hipparchos, who in 128 BC might have recorded it as a star for his star catalogue that was later incorporated into Ptolemy's Almagest.[24] The earliest definite sighting was in 1690, when John Flamsteed observed it at least six times, cataloguing it as 34 Tauri. The French astronomer Pierre Charles Le Monnier observed Uranus at least twelve times between 1750 and 1769,[25] including on four consecutive nights.
+
+ Sir William Herschel observed Uranus on 13 March 1781 from the garden of his house at 19 New King Street in Bath, Somerset, England (now the Herschel Museum of Astronomy),[26] and initially reported it (on 26 April 1781) as a comet.[27] With a telescope, Herschel "engaged in a series of observations on the parallax of the fixed stars."[28]
+
+ Herschel recorded in his journal: "In the quartile near ζ Tauri ... either [a] Nebulous star or perhaps a comet."[29] On 17 March he noted: "I looked for the Comet or Nebulous Star and found that it is a Comet, for it has changed its place."[30] When he presented his discovery to the Royal Society, he continued to assert that he had found a comet, but also implicitly compared it to a planet:[28]
+
+ The power I had on when I first saw the comet was 227. From experience I know that the diameters of the fixed stars are not proportionally magnified with higher powers, as planets are; therefore I now put the powers at 460 and 932, and found that the diameter of the comet increased in proportion to the power, as it ought to be, on the supposition of its not being a fixed star, while the diameters of the stars to which I compared it were not increased in the same ratio. Moreover, the comet being magnified much beyond what its light would admit of, appeared hazy and ill-defined with these great powers, while the stars preserved that lustre and distinctness which from many thousand observations I knew they would retain. The sequel has shown that my surmises were well-founded, this proving to be the Comet we have lately observed.[28]
+
+ Herschel notified the Astronomer Royal Nevil Maskelyne of his discovery and received this flummoxed reply from him on 23 April 1781: "I don't know what to call it. It is as likely to be a regular planet moving in an orbit nearly circular to the sun as a Comet moving in a very eccentric ellipsis. I have not yet seen any coma or tail to it."[31]
+
+ Although Herschel continued to describe his new object as a comet, other astronomers had already begun to suspect otherwise. Finnish-Swedish astronomer Anders Johan Lexell, working in Russia, was the first to compute the orbit of the new object.[32] Its nearly circular orbit led him to the conclusion that it was a planet rather than a comet. Berlin astronomer Johann Elert Bode described Herschel's discovery as "a moving star that can be deemed a hitherto unknown planet-like object circulating beyond the orbit of Saturn".[33] Bode concluded that its near-circular orbit was more like a planet's than a comet's.[34]
+
+ The object was soon universally accepted as a new planet. By 1783, Herschel acknowledged this to Royal Society president Joseph Banks: "By the observation of the most eminent Astronomers in Europe it appears that the new star, which I had the honour of pointing out to them in March 1781, is a Primary Planet of our Solar System."[35] In recognition of his achievement, King George III gave Herschel an annual stipend of £200 (equivalent to £24,000 in 2019) on condition that he move to Windsor so that the Royal Family could look through his telescopes.[36][37]
+
+ The name of Uranus references the ancient Greek deity of the sky Uranus (Ancient Greek: Οὐρανός), the father of Cronus (Saturn) and grandfather of Zeus (Jupiter), which in Latin became Ūranus (IPA: [ˈuːranʊs]).[1] It is the only planet whose English name is derived directly from a figure of Greek mythology. The adjectival form of Uranus is "Uranian".[38] The pronunciation of the name Uranus preferred among astronomers is /ˈjʊərənəs/,[2] with stress on the first syllable as in Latin Ūranus, in contrast to /jʊˈreɪnəs/, with stress on the second syllable and a long a, though both are considered acceptable.[f]
+
+ Consensus on the name was not reached until almost 70 years after the planet's discovery. During the original discussions following discovery, Maskelyne asked Herschel to "do the astronomical world the faver [sic] to give a name to your planet, which is entirely your own, [and] which we are so much obliged to you for the discovery of".[40] In response to Maskelyne's request, Herschel decided to name the object Georgium Sidus (George's Star), or the "Georgian Planet" in honour of his new patron, King George III.[41] He explained this decision in a letter to Joseph Banks:[35]
+
+ In the fabulous ages of ancient times the appellations of Mercury, Venus, Mars, Jupiter and Saturn were given to the Planets, as being the names of their principal heroes and divinities. In the present more philosophical era it would hardly be allowable to have recourse to the same method and call it Juno, Pallas, Apollo or Minerva, for a name to our new heavenly body. The first consideration of any particular event, or remarkable incident, seems to be its chronology: if in any future age it should be asked, when this last-found Planet was discovered? It would be a very satisfactory answer to say, 'In the reign of King George the Third'.
+
+ Herschel's proposed name was not popular outside Britain, and alternatives were soon proposed. Astronomer Jérôme Lalande proposed that it be named Herschel in honour of its discoverer.[42] Swedish astronomer Erik Prosperin proposed the name Neptune, which was supported by other astronomers who liked the idea of commemorating the victories of the British Royal Naval fleet in the course of the American Revolutionary War by calling the new planet Neptune George III or even Neptune Great Britain.[32]
+
+ In a March 1782 treatise, Bode proposed Uranus, the Latinised version of the Greek god of the sky, Ouranos.[43] Bode argued that the name should follow the mythology so as not to stand out as different from the other planets, and that Uranus was an appropriate name as the father of the first generation of the Titans.[43] He also noted the elegance of the name: just as Saturn was the father of Jupiter, the new planet should be named after the father of Saturn.[37][43][44][45] In 1789, Bode's Royal Academy colleague Martin Klaproth named his newly discovered element uranium in support of Bode's choice.[46] Ultimately, Bode's suggestion became the most widely used, and became universal in 1850 when HM Nautical Almanac Office, the final holdout, switched from using Georgium Sidus to Uranus.[44]
+
+ Uranus has two astronomical symbols. The first to be proposed, ♅,[g] was suggested by Lalande in 1784. In a letter to Herschel, Lalande described it as "un globe surmonté par la première lettre de votre nom" ("a globe surmounted by the first letter of your surname").[42] A later proposal, ⛢,[h] is a hybrid of the symbols for Mars and the Sun because Uranus was the Sky in Greek mythology, which was thought to be dominated by the combined powers of the Sun and Mars.[47]
+
+ Uranus is called by a variety of translations in other languages. In Chinese, Japanese, Korean, and Vietnamese, its name is literally translated as the "sky king star" (天王星).[48][49][50][51] In Thai, its official name is Dao Yurenat (ดาวยูเรนัส), as in English. Its other name in Thai is Dao Maritayu (ดาวมฤตยู, Star of Mṛtyu), after the Sanskrit word for 'death', Mrtyu (मृत्यु). In Mongolian, its name is Tengeriin Van (Тэнгэрийн ван), translated as 'King of the Sky', reflecting its namesake god's role as the ruler of the heavens. In Hawaiian, its name is Heleʻekala, a loanword for the discoverer Herschel.[52] In Māori, its name is Whērangi.[53][54]
+
+ Uranus orbits the Sun once every 84 years, taking an average of seven years to pass through each constellation of the zodiac. In 2033, the planet will have made its third complete orbit around the Sun since being discovered in 1781. The planet has returned to the point of its discovery northeast of Zeta Tauri twice since then, in 1862 and 1943, one day later each time as the precession of the equinoxes has shifted it 1° west every 72 years. Uranus will return to this location again in 2030-31. Its average distance from the Sun is roughly 20 AU (3 billion km; 2 billion mi). The difference between its minimum and maximum distance from the Sun is 1.8 AU, larger than that of any other planet, though not as large as that of dwarf planet Pluto.[55] The intensity of sunlight varies inversely with the square of distance, and so on Uranus (at about 20 times the distance from the Sun compared to Earth) it is about 1/400 the intensity of light on Earth.[56] Its orbital elements were first calculated in 1783 by Pierre-Simon Laplace.[57] With time, discrepancies began to appear between the predicted and observed orbits, and in 1841, John Couch Adams first proposed that the differences might be due to the gravitational tug of an unseen planet. In 1845, Urbain Le Verrier began his own independent research into Uranus' orbit. On 23 September 1846, Johann Gottfried Galle located a new planet, later named Neptune, at nearly the position predicted by Le Verrier.[58]
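+
+ The 1/400 figure is just the inverse-square law applied to the quoted distance, as this one-line check shows (assuming the article's rounded value of 20 AU):
+
+ distance_au = 20.0                 # Uranus' mean distance from the Sun, in AU
+ print(1.0 / distance_au ** 2)      # -> 0.0025, i.e. about 1/400 of Earth's sunlight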
+
+ The rotational period of the interior of Uranus is 17 hours, 14 minutes. As on all the giant planets, its upper atmosphere experiences strong winds in the direction of rotation. At some latitudes, such as about 60 degrees south, visible features of the atmosphere move much faster, making a full rotation in as little as 14 hours.[59]
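+
+ The gap between the two rotation periods implies a substantial zonal wind. A rough estimate (a sketch assuming circular motion at 60° south and the equatorial radius quoted later in the article):
+
+ import math
+
+ R = 25_559.0                                                    # equatorial radius, km
+ circumference = 2 * math.pi * R * math.cos(math.radians(60))    # ~80,300 km at 60° latitude
+ v_interior   = circumference / (17 + 14 / 60)                   # km/h, 17 h 14 min interior period
+ v_atmosphere = circumference / 14.0                             # km/h, 14 h apparent period
+ print(v_atmosphere - v_interior)                                # -> roughly 1,080 km/h (about 300 m/s)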
+
+ The Uranian axis of rotation is approximately parallel with the plane of the Solar System, with an axial tilt of 97.77° (as defined by prograde rotation). This gives it seasonal changes completely unlike those of the other planets. Near the solstice, one pole faces the Sun continuously and the other faces away. Only a narrow strip around the equator experiences a rapid day–night cycle, but with the Sun low over the horizon. At the other side of Uranus' orbit the orientation of the poles towards the Sun is reversed. Each pole gets around 42 years of continuous sunlight, followed by 42 years of darkness.[60] Near the time of the equinoxes, the Sun faces the equator of Uranus giving a period of day–night cycles similar to those seen on most of the other planets.
+
+ Uranus reached its most recent equinox on 7 December 2007.[61][62]
+
+ One result of this axis orientation is that, averaged over the Uranian year, the polar regions of Uranus receive a greater energy input from the Sun than its equatorial regions. Nevertheless, Uranus is hotter at its equator than at its poles. The underlying mechanism that causes this is unknown. The reason for Uranus' unusual axial tilt is also not known with certainty, but the usual speculation is that during the formation of the Solar System, an Earth-sized protoplanet collided with Uranus, causing the skewed orientation.[63] Research by Jacob Kegerreis of Durham University suggests that the tilt resulted from a rock larger than the Earth crashing into the planet 3 to 4 billion years ago.[64]
+ Uranus' south pole was pointed almost directly at the Sun at the time of Voyager 2's flyby in 1986. The labelling of this pole as "south" uses the definition currently endorsed by the International Astronomical Union, namely that the north pole of a planet or satellite is the pole that points above the invariable plane of the Solar System, regardless of the direction the planet is spinning.[65][66] A different convention is sometimes used, in which a body's north and south poles are defined according to the right-hand rule in relation to the direction of rotation.[67]
+
+ The mean apparent magnitude of Uranus is 5.68 with a standard deviation of 0.17, while the extremes are 5.38 and 6.03.[16] This range of brightness is near the limit of naked-eye visibility. Much of the variability depends upon which planetary latitudes are illuminated by the Sun and viewed from the Earth.[68] Its angular diameter is between 3.4 and 3.7 arcseconds, compared with 16 to 20 arcseconds for Saturn and 32 to 45 arcseconds for Jupiter.[69] At opposition, Uranus is visible to the naked eye in dark skies, and becomes an easy target even in urban conditions with binoculars.[6] In larger amateur telescopes with an objective diameter of between 15 and 23 cm, Uranus appears as a pale cyan disk with distinct limb darkening. With a large telescope of 25 cm or wider, cloud patterns, as well as some of the larger satellites, such as Titania and Oberon, may be visible.[70]
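+
+ The quoted angular diameter follows from the planet's radius and distance via the small-angle approximation (a sketch; the 19.2 AU Earth–Uranus distance near opposition is an assumed round value, and the radius is quoted later in the article):
+
+ R_km = 25_559                     # equatorial radius
+ d_km = 19.2 * 1.496e8             # assumed Earth–Uranus distance near opposition
+ theta_rad = 2 * R_km / d_km       # small-angle approximation
+ print(theta_rad * 206_265)        # -> about 3.7 arcseconds, within the quoted 3.4–3.7 range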
+
+ Uranus' mass is roughly 14.5 times that of Earth, making it the least massive of the giant planets. Its diameter is slightly larger than Neptune's at roughly four times that of Earth. A resulting density of 1.27 g/cm3 makes Uranus the second least dense planet, after Saturn.[9][10] This value indicates that it is made primarily of various ices, such as water, ammonia, and methane.[14] The total mass of ice in Uranus' interior is not precisely known, because different figures emerge depending on the model chosen; it must be between 9.3 and 13.5 Earth masses.[14][71] Hydrogen and helium constitute only a small part of the total, with between 0.5 and 1.5 Earth masses.[14] The remainder of the non-ice mass (0.5 to 3.7 Earth masses) is accounted for by rocky material.[14]
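+
+ The quoted density can be reproduced from the numbers in this article (a mass of 14.5 Earth masses and the equatorial and polar radii given further below):
+
+ import math
+
+ M = 14.5 * 5.972e24               # kg, 14.5 Earth masses
+ a, c = 25_559e3, 24_973e3         # m, equatorial and polar radii at the 1 bar level
+ V = 4 / 3 * math.pi * a**2 * c    # volume of an oblate spheroid
+ print(M / V / 1000)               # -> about 1.27 g/cm^3, as quoted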
+
+ The standard model of Uranus' structure is that it consists of three layers: a rocky (silicate/iron–nickel) core in the centre, an icy mantle in the middle and an outer gaseous hydrogen/helium envelope.[14][72] The core is relatively small, with a mass of only 0.55 Earth masses and a radius less than 20% of Uranus'; the mantle comprises its bulk, with around 13.4 Earth masses, and the upper atmosphere is relatively insubstantial, weighing about 0.5 Earth masses and extending for the last 20% of Uranus' radius.[14][72] Uranus' core density is around 9 g/cm3, with a pressure in the centre of 8 million bars (800 GPa) and a temperature of about 5000 K.[71][72] The ice mantle is not in fact composed of ice in the conventional sense, but of a hot and dense fluid consisting of water, ammonia and other volatiles.[14][72] This fluid, which has a high electrical conductivity, is sometimes called a water–ammonia ocean.[73]
+
+ The extreme pressure and temperature deep within Uranus may break up the methane molecules, with the carbon atoms condensing into crystals of diamond that rain down through the mantle like hailstones.[74][75][76] Very-high-pressure experiments at the Lawrence Livermore National Laboratory suggest that the base of the mantle may comprise an ocean of liquid diamond, with floating solid 'diamond-bergs'.[77][78] Scientists also believe that rainfalls of solid diamonds occur on Uranus, as well as on Jupiter, Saturn, and Neptune.[79][80]
+
+ The bulk compositions of Uranus and Neptune are different from those of Jupiter and Saturn, with ice dominating over gases, hence justifying their separate classification as ice giants. There may be a layer of ionic water where the water molecules break down into a soup of hydrogen and oxygen ions, and deeper down superionic water in which the oxygen crystallises but the hydrogen ions move freely within the oxygen lattice.[81]
+
+ Although the model considered above is reasonably standard, it is not unique; other models also satisfy observations. For instance, if substantial amounts of hydrogen and rocky material are mixed in the ice mantle, the total mass of ices in the interior will be lower, and, correspondingly, the total mass of rocks and hydrogen will be higher. Presently available data does not allow a scientific determination of which model is correct.[71] The fluid interior structure of Uranus means that it has no solid surface. The gaseous atmosphere gradually transitions into the internal liquid layers.[14] For the sake of convenience, a revolving oblate spheroid set at the point at which atmospheric pressure equals 1 bar (100 kPa) is conditionally designated as a "surface". It has equatorial and polar radii of 25,559 ± 4 km (15,881.6 ± 2.5 mi) and 24,973 ± 20 km (15,518 ± 12 mi), respectively.[9] This surface is used throughout this article as a zero point for altitudes.
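+
+ The two radii just quoted imply a modest flattening, which can be computed directly:
+
+ a, c = 25_559.0, 24_973.0         # km, equatorial and polar radii at the 1 bar "surface"
+ print((a - c) / a)                # -> about 0.0229: Uranus is ~2.3% wider than it is tall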
+
+ Uranus' internal heat appears markedly lower than that of the other giant planets; in astronomical terms, it has a low thermal flux.[22][82] Why Uranus' internal temperature is so low is still not understood. Neptune, which is Uranus' near twin in size and composition, radiates 2.61 times as much energy into space as it receives from the Sun,[22] but Uranus radiates hardly any excess heat at all. The total power radiated by Uranus in the far infrared (i.e. heat) part of the spectrum is 1.06±0.08 times the solar energy absorbed in its atmosphere.[15][83] Uranus' heat flux is only 0.042±0.047 W/m2, which is lower than the internal heat flux of Earth of about 0.075 W/m2.[83] The lowest temperature recorded in Uranus' tropopause is 49 K (−224.2 °C; −371.5 °F), making Uranus the coldest planet in the Solar System.[15][83]
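+
+ For scale, the quoted heat flux corresponds to a modest total internal power output (an order-of-magnitude sketch; the volumetric mean radius of about 25,362 km is a standard value not stated in this article):
+
+ import math
+
+ flux = 0.042                      # W/m^2, internal heat flux from this paragraph
+ R = 25_362e3                      # m, volumetric mean radius (assumed standard value)
+ print(flux * 4 * math.pi * R**2)  # -> roughly 3.4e14 W of internally generated power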
+
+ One of the hypotheses for this discrepancy suggests that when Uranus was hit by a supermassive impactor, which caused it to expel most of its primordial heat, it was left with a depleted core temperature.[84] This impact hypothesis is also used in some attempts to explain the planet's axial tilt. Another hypothesis is that some form of barrier exists in Uranus' upper layers that prevents the core's heat from reaching the surface.[14] For example, convection may take place in a set of compositionally different layers, which may inhibit the upward heat transport;[15][83] perhaps double diffusive convection is a limiting factor.[14]
+
+ Although there is no well-defined solid surface within Uranus' interior, the outermost part of Uranus' gaseous envelope that is accessible to remote sensing is called its atmosphere.[15] Remote-sensing capability extends down to roughly 300 km below the 1 bar (100 kPa) level, with a corresponding pressure around 100 bar (10 MPa) and temperature of 320 K (47 °C; 116 °F).[86] The tenuous thermosphere extends over two planetary radii from the nominal surface, which is defined to lie at a pressure of 1 bar.[87] The Uranian atmosphere can be divided into three layers: the troposphere, between altitudes of −300 and 50 km (−186 and 31 mi) and pressures from 100 to 0.1 bar (10 MPa to 10 kPa); the stratosphere, spanning altitudes between 50 and 4,000 km (31 and 2,485 mi) and pressures of between 0.1 and 10−10 bar (10 kPa to 10 µPa); and the thermosphere extending from 4,000 km to as high as 50,000 km from the surface.[15] There is no mesosphere.
+
+ The composition of Uranus' atmosphere is different from its bulk, consisting mainly of molecular hydrogen and helium.[15] The helium molar fraction, i.e. the number of helium atoms per molecule of gas, is 0.15±0.03[19] in the upper troposphere, which corresponds to a mass fraction 0.26±0.05.[15][83] This value is close to the protosolar helium mass fraction of 0.275±0.01,[88] indicating that helium has not settled in its centre as it has in the gas giants.[15] The third-most-abundant component of Uranus' atmosphere is methane (CH4).[15] Methane has prominent absorption bands in the visible and near-infrared (IR), making Uranus aquamarine or cyan in colour.[15] Methane molecules account for 2.3% of the atmosphere by molar fraction below the methane cloud deck at the pressure level of 1.3 bar (130 kPa); this represents about 20 to 30 times the carbon abundance found in the Sun.[15][18][89] The mixing ratio[i] is much lower in the upper atmosphere due to its extremely low temperature, which lowers the saturation level and causes excess methane to freeze out.[90] The abundances of less volatile compounds such as ammonia, water, and hydrogen sulfide in the deep atmosphere are poorly known. They are probably also higher than solar values.[15][91] Along with methane, trace amounts of various hydrocarbons are found in the stratosphere of Uranus, which are thought to be produced from methane by photolysis induced by the solar ultraviolet (UV) radiation.[92] They include ethane (C2H6), acetylene (C2H2), methylacetylene (CH3C2H), and diacetylene (C2HC2H).[90][93][94] Spectroscopy has also uncovered traces of water vapour, carbon monoxide and carbon dioxide in the upper atmosphere, which can only originate from an external source such as infalling dust and comets.[93][94][95]
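+
+ The link between the helium molar fraction (0.15) and the mass fraction (0.26) is a simple weighted average over a hydrogen–helium mixture, neglecting methane and heavier constituents (a sketch of the conversion, not the published analysis):
+
+ x_he = 0.15                       # He atoms per molecule of gas, from this paragraph
+ m_he, m_h2 = 4.0, 2.0             # approximate molar masses in g/mol
+ y = x_he * m_he / (x_he * m_he + (1 - x_he) * m_h2)
+ print(y)                          # -> about 0.26, matching the quoted 0.26±0.05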
+
+ The troposphere is the lowest and densest part of the atmosphere and is characterised by a decrease in temperature with altitude.[15] The temperature falls from about 320 K (47 °C; 116 °F) at the base of the nominal troposphere at −300 km to 53 K (−220 °C; −364 °F) at 50 km.[86][89] The temperatures in the coldest upper region of the troposphere (the tropopause) actually vary in the range between 49 and 57 K (−224 and −216 °C; −371 and −357 °F) depending on planetary latitude.[15][82] The tropopause region is responsible for the vast majority of Uranus' thermal far infrared emissions, thus determining its effective temperature of 59.1 ± 0.3 K (−214.1 ± 0.3 °C; −353.3 ± 0.5 °F).[82][83]
+
+ The troposphere is thought to have a highly complex cloud structure; water clouds are hypothesised to lie in the pressure range of 50 to 100 bar (5 to 10 MPa), ammonium hydrosulfide clouds in the range of 20 to 40 bar (2 to 4 MPa), ammonia or hydrogen sulfide clouds at between 3 and 10 bar (0.3 and 1 MPa) and finally directly detected thin methane clouds at 1 to 2 bar (0.1 to 0.2 MPa).[15][18][86][96] The troposphere is a dynamic part of the atmosphere, exhibiting strong winds, bright clouds and seasonal changes.[22]
+
+ The middle layer of the Uranian atmosphere is the stratosphere, where temperature generally increases with altitude from 53 K (−220 °C; −364 °F) in the tropopause to between 800 and 850 K (527 and 577 °C; 980 and 1,070 °F) at the base of the thermosphere.[87] The heating of the stratosphere is caused by absorption of solar UV and IR radiation by methane and other hydrocarbons,[98] which form in this part of the atmosphere as a result of methane photolysis.[92] Heat is also conducted from the hot thermosphere.[98] The hydrocarbons occupy a relatively narrow layer at altitudes of between 100 and 300 km corresponding to a pressure range of 1000 to 10 Pa and temperatures of between 75 and 170 K (−198 and −103 °C; −325 and −154 °F).[90][93] The most abundant hydrocarbons are methane, acetylene and ethane with mixing ratios of around 10−7 relative to hydrogen. The mixing ratio of carbon monoxide is similar at these altitudes.[90][93][95] Heavier hydrocarbons and carbon dioxide have mixing ratios three orders of magnitude lower.[93] The abundance ratio of water is around 7×10−9.[94] Ethane and acetylene tend to condense in the colder lower part of the stratosphere and tropopause (below the 10 mbar level), forming haze layers,[92] which may be partly responsible for the bland appearance of Uranus. The concentration of hydrocarbons in the Uranian stratosphere above the haze is significantly lower than in the stratospheres of the other giant planets.[90][99]
+
+ The outermost layer of the Uranian atmosphere is the thermosphere and corona, which has a uniform temperature of around 800 to 850 K.[15][99] The heat sources necessary to sustain such high temperatures are not understood, as neither solar UV nor auroral activity can provide the necessary energy to maintain these temperatures. The weak cooling efficiency due to the lack of hydrocarbons in the stratosphere above the 0.1 mbar pressure level may contribute too.[87][99] In addition to molecular hydrogen, the thermosphere and corona contain many free hydrogen atoms. Their small mass and high temperatures explain why the corona extends as far as 50,000 km (31,000 mi), or two Uranian radii, from the planet's surface.[87][99] This extended corona is a unique feature of Uranus.[99] Its effects include a drag on small particles orbiting Uranus, causing a general depletion of dust in the Uranian rings.[87] The Uranian thermosphere, together with the upper part of the stratosphere, corresponds to the ionosphere of Uranus.[89] Observations show that the ionosphere occupies altitudes from 2,000 to 10,000 km (1,200 to 6,200 mi).[89] The Uranian ionosphere is denser than that of either Saturn or Neptune, which may arise from the low concentration of hydrocarbons in the stratosphere.[99][100] The ionosphere is mainly sustained by solar UV radiation, and its density depends on solar activity.[101] Auroral activity is insignificant compared to that of Jupiter and Saturn.[99][102]
+
+ Temperature profile of the Uranian troposphere and lower stratosphere. Cloud and haze layers are also indicated.
+
+ Zonal wind speeds on Uranus. Shaded areas show the southern collar and its future northern counterpart. The red curve is a symmetrical fit to the data.
+
+ Before the arrival of Voyager 2, no measurements of the Uranian magnetosphere had been taken, so its nature remained a mystery. Before 1986, scientists had expected the magnetic field of Uranus to be in line with the solar wind, because it would then align with Uranus' poles that lie in the ecliptic.[103]
+
+ Voyager's observations revealed that Uranus' magnetic field is peculiar, both because it does not originate from its geometric centre, and because it is tilted at 59° from the axis of rotation.[103][104] In fact the magnetic dipole is shifted from Uranus' centre towards the south rotational pole by as much as one third of the planetary radius.[103] This unusual geometry results in a highly asymmetric magnetosphere, where the magnetic field strength on the surface in the southern hemisphere can be as low as 0.1 gauss (10 µT), whereas in the northern hemisphere it can be as high as 1.1 gauss (110 µT).[103] The average field at the surface is 0.23 gauss (23 µT).[103] Studies of Voyager 2 data in 2017 suggest that this asymmetry causes Uranus' magnetosphere to connect with the solar wind once a Uranian day, opening the planet to the Sun's particles.[105] In comparison, the magnetic field of Earth is roughly as strong at either pole, and its "magnetic equator" is roughly parallel with its geographical equator.[104] The dipole moment of Uranus is 50 times that of Earth.[103][104] Neptune has a similarly displaced and tilted magnetic field, suggesting that this may be a common feature of ice giants.[104] One hypothesis is that, unlike the magnetic fields of the terrestrial and gas giants, which are generated within their cores, the ice giants' magnetic fields are generated by motion at relatively shallow depths, for instance, in the water–ammonia ocean.[73][106] Another possible explanation for the magnetosphere's alignment is that there are oceans of liquid diamond in Uranus' interior that would deter the magnetic field.[77]
+
+ Despite its curious alignment, in other respects the Uranian magnetosphere is like those of other planets: it has a bow shock at about 23 Uranian radii ahead of it, a magnetopause at 18 Uranian radii, a fully developed magnetotail, and radiation belts.[103][104][107] Overall, the structure of Uranus' magnetosphere is different from Jupiter's and more similar to Saturn's.[103][104] Uranus' magnetotail trails behind it into space for millions of kilometres and is twisted by its sideways rotation into a long corkscrew.[103][108]
+
+ Uranus' magnetosphere contains charged particles: mainly protons and electrons, with a small amount of H2+ ions.[104][107] Many of these particles probably derive from the thermosphere.[107] The ion and electron energies can be as high as 4 and 1.2 megaelectronvolts, respectively.[107] The density of low-energy (below 1 kiloelectronvolt) ions in the inner magnetosphere is about 2 cm−3.[109] The particle population is strongly affected by the Uranian moons, which sweep through the magnetosphere, leaving noticeable gaps.[107] The particle flux is high enough to cause darkening or space weathering of their surfaces on an astronomically rapid timescale of 100,000 years.[107] This may be the cause of the uniformly dark colouration of the Uranian satellites and rings.[110] Uranus has relatively well developed aurorae, which are seen as bright arcs around both magnetic poles.[99] Unlike Jupiter's, Uranus' aurorae seem to be insignificant for the energy balance of the planetary thermosphere.[102]
+
+ In March 2020, NASA astronomers reported the detection of a large atmospheric magnetic bubble, also known as a plasmoid, released into outer space from the planet Uranus, after reevaluating old data recorded by the Voyager 2 space probe during a flyby of the planet in 1986.[111][112]
+
+ At ultraviolet and visible wavelengths, Uranus' atmosphere is bland in comparison to the other giant planets, even to Neptune, which it otherwise closely resembles.[22] When Voyager 2 flew by Uranus in 1986, it observed a total of ten cloud features across the entire planet.[20][113] One proposed explanation for this dearth of features is that Uranus' internal heat appears markedly lower than that of the other giant planets. The lowest temperature recorded in Uranus' tropopause is 49 K (−224 °C; −371 °F), making Uranus the coldest planet in the Solar System.[15][83]
+
+ In 1986, Voyager 2 found that the visible southern hemisphere of Uranus can be subdivided into two regions: a bright polar cap and dark equatorial bands.[20] Their boundary is located at about −45° of latitude. A narrow band straddling the latitudinal range from −45 to −50° is the brightest large feature on its visible surface.[20][114] It is called a southern "collar". The cap and collar are thought to be a dense region of methane clouds located within the pressure range of 1.3 to 2 bar (see above).[115] Besides the large-scale banded structure, Voyager 2 observed ten small bright clouds, most lying several degrees north of the collar.[20] In all other respects Uranus looked like a dynamically dead planet in 1986. Voyager 2 arrived during the height of Uranus' southern summer and could not observe the northern hemisphere. At the beginning of the 21st century, when the northern polar region came into view, the Hubble Space Telescope (HST) and Keck telescope initially observed neither a collar nor a polar cap in the northern hemisphere.[114] So Uranus appeared to be asymmetric: bright near the south pole and uniformly dark in the region north of the southern collar.[114] In 2007, when Uranus passed its equinox, the southern collar almost disappeared, and a faint northern collar emerged near 45° of latitude.[116]
+
+ In the 1990s, the number of observed bright cloud features grew considerably, partly because new high-resolution imaging techniques became available.[22] Most were found in the northern hemisphere as it started to become visible.[22] An early explanation (that bright clouds are easier to identify in the planet's dark part, whereas in the southern hemisphere the bright collar masks them) was shown to be incorrect.[117][118] Nevertheless, there are differences between the clouds of each hemisphere. The northern clouds are smaller, sharper and brighter.[118] They appear to lie at a higher altitude.[118] The lifetime of clouds spans several orders of magnitude. Some small clouds live for hours; at least one southern cloud may have persisted since the Voyager 2 flyby.[22][113] Recent observations have also shown that cloud features on Uranus have a lot in common with those on Neptune.[22] For example, the dark spots common on Neptune had never been observed on Uranus before 2006, when the first such feature, dubbed Uranus Dark Spot, was imaged.[119] The speculation is that Uranus is becoming more Neptune-like during its equinoctial season.[120]
+
+ The tracking of numerous cloud features allowed determination of zonal winds blowing in the upper troposphere of Uranus.[22] At the equator winds are retrograde, which means that they blow in the reverse direction to the planetary rotation. Their speeds are from −360 to −180 km/h (−220 to −110 mph).[22][114] Wind speeds increase with the distance from the equator, reaching zero values near ±20° latitude, where the troposphere's temperature minimum is located.[22][82] Closer to the poles, the winds shift to a prograde direction, flowing with Uranus' rotation. Wind speeds continue to increase reaching maxima at ±60° latitude before falling to zero at the poles.[22] Wind speeds at −40° latitude range from 540 to 720 km/h (340 to 450 mph). Because the collar obscures all clouds below that parallel, speeds between it and the southern pole are impossible to measure.[22] In contrast, in the northern hemisphere maximum speeds as high as 860 km/h (540 mph) are observed near +50° latitude.[22][114][121]
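+
+ To make the shape of this wind profile concrete, here is a purely illustrative interpolation (not the actual fit used in the literature; the knot values are rounded from the figures quoted above):
+
+ import numpy as np
+
+ # Symmetric toy profile: retrograde at the equator, zero near +/-20 degrees,
+ # prograde maxima near +/-60 degrees, zero again at the poles.
+ lat_knots  = np.array([-90, -60, -40, -20,    0,  20,  40,  60, 90])  # degrees
+ wind_knots = np.array([  0, 700, 630,   0, -270,   0, 630, 700,  0])  # km/h
+
+ def zonal_wind(lat_deg):
+     return np.interp(lat_deg, lat_knots, wind_knots)
+
+ print(zonal_wind(-40))  # 630 km/h, inside the quoted 540-720 km/h band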
+
+ For a short period from March to May 2004, large clouds appeared in the Uranian atmosphere, giving it a Neptune-like appearance.[118][122] Observations included record-breaking wind speeds of 820 km/h (510 mph) and a persistent thunderstorm referred to as "Fourth of July fireworks".[113] On 23 August 2006, researchers at the Space Science Institute (Boulder, Colorado) and the University of Wisconsin observed a dark spot on Uranus' surface, giving scientists more insight into Uranus' atmospheric activity.[119] Why this sudden upsurge in activity occurred is not fully known, but it appears that Uranus' extreme axial tilt results in extreme seasonal variations in its weather.[62][120] Determining the nature of this seasonal variation is difficult because good data on Uranus' atmosphere have existed for less than 84 years, or one full Uranian year. Photometry over the course of half a Uranian year (beginning in the 1950s) has shown regular variation in brightness in two spectral bands, with maxima occurring at the solstices and minima occurring at the equinoxes.[123] A similar periodic variation, with maxima at the solstices, has been noted in microwave measurements of the deep troposphere begun in the 1960s.[124] Stratospheric temperature measurements beginning in the 1970s also showed maximum values near the 1986 solstice.[98] The majority of this variability is thought to occur owing to changes in the viewing geometry.[117]
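+
+ Schematically (an illustrative toy model added here, not a fit to the photometry; the 5% amplitude is arbitrary), brightness maxima at both solstices and minima at both equinoxes mean two cycles per 84-year orbit:
+
+ import math
+
+ # Relative brightness with a 42-year period (two maxima per Uranian year).
+ def relative_brightness(years_since_solstice, amplitude=0.05):
+     return 1.0 + amplitude * math.cos(2 * math.pi * years_since_solstice / 42.0)
+
+ print(relative_brightness(0.0))   # 1.05 at a solstice (maximum)
+ print(relative_brightness(21.0))  # 0.95 at an equinox (minimum)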
+
+ There are some indications that physical seasonal changes are happening on Uranus. Although Uranus is known to have a bright south polar region, the north pole is fairly dim, which is incompatible with the model of the seasonal change outlined above.[120] During its previous northern solstice in 1944, Uranus displayed elevated levels of brightness, which suggests that the north pole was not always so dim.[123] This information implies that the visible pole brightens some time before the solstice and darkens after the equinox.[120] Detailed analysis of the visible and microwave data revealed that the periodic changes of brightness are not completely symmetrical around the solstices, which also indicates a change in the meridional albedo patterns.[120] In the 1990s, as Uranus moved away from its solstice, Hubble and ground-based telescopes revealed that the south polar cap darkened noticeably (except the southern collar, which remained bright),[115] whereas the northern hemisphere demonstrated increasing activity,[113] such as cloud formations and stronger winds, bolstering expectations that it should brighten soon.[118] This indeed happened in 2007 when it passed an equinox: a faint northern polar collar arose, and the southern collar became nearly invisible, although the zonal wind profile remained slightly asymmetric, with northern winds being somewhat slower than southern.[116]
+
+ The mechanism of these physical changes is still not clear.[120] Near the summer and winter solstices, Uranus' hemispheres lie alternately either in full glare of the Sun's rays or facing deep space. The brightening of the sunlit hemisphere is thought to result from the local thickening of the methane clouds and haze layers located in the troposphere.[115] The bright collar at −45° latitude is also connected with methane clouds.[115] Other changes in the southern polar region can be explained by changes in the lower cloud layers.[115] The variation of the microwave emission from Uranus is probably caused by changes in the deep tropospheric circulation, because thick polar clouds and haze may inhibit convection.[125] Now that the spring and autumn equinoxes are arriving on Uranus, the dynamics are changing and convection can occur again.[113][125]
+
+ Many argue that the differences between the ice giants and the gas giants extend to their formation.[126][127] The Solar System is hypothesised to have formed from a giant rotating ball of gas and dust known as the presolar nebula. Much of the nebula's gas, primarily hydrogen and helium, formed the Sun, and the dust grains collected together to form the first protoplanets. As the planets grew, some of them eventually accreted enough matter for their gravity to hold on to the nebula's leftover gas.[126][127] The more gas they held onto, the larger they became; the larger they became, the more gas they held onto until a critical point was reached, and their size began to increase exponentially. The ice giants, with only a few Earth masses of nebular gas, never reached that critical point.[126][127][128] Recent simulations of planetary migration have suggested that both ice giants formed closer to the Sun than their present positions, and moved outwards after formation (the Nice model).[126]
+
+ Uranus has 27 known natural satellites.[128] The names of these satellites are chosen from characters in the works of Shakespeare and Alexander Pope.[72][129] The five main satellites are Miranda, Ariel, Umbriel, Titania, and Oberon.[72] The Uranian satellite system is the least massive among those of the giant planets; the combined mass of the five major satellites would be less than half that of Triton (the largest moon of Neptune) alone.[10] The largest of Uranus' satellites, Titania, has a radius of only 788.9 km (490.2 mi), or less than half that of the Moon, but slightly more than Rhea, the second-largest satellite of Saturn, making Titania the eighth-largest moon in the Solar System. Uranus' satellites have relatively low albedos, ranging from 0.20 for Umbriel to 0.35 for Ariel (in green light).[20] They are ice–rock conglomerates composed of roughly 50% ice and 50% rock. The ice may include ammonia and carbon dioxide.[110][130]
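+
+ The size comparisons are easy to verify (the radii of the Moon and Rhea are standard values supplied here, not taken from this article):
+
+ # Radii in km.
+ titania, moon, rhea = 788.9, 1737.4, 763.8
+ print(titania / moon)   # ~0.45, i.e. less than half the Moon's radius
+ print(titania > rhea)   # True: slightly larger than Rhea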
+
+ Among the Uranian satellites, Ariel appears to have the youngest surface with the fewest impact craters and Umbriel's the oldest.[20][110] Miranda has fault canyons 20 km (12 mi) deep, terraced layers, and a chaotic variation in surface ages and features.[20] Miranda's past geologic activity is thought to have been driven by tidal heating at a time when its orbit was more eccentric than currently, probably as a result of a former 3:1 orbital resonance with Umbriel.[131] Extensional processes associated with upwelling diapirs are the likely origin of Miranda's 'racetrack'-like coronae.[132][133] Ariel is thought to have once been held in a 4:1 resonance with Titania.[134]
+
+ Uranus has at least one horseshoe orbiter, 83982 Crantor, which occupies the Sun–Uranus L3 Lagrangian point, a gravitationally unstable region at 180° along its orbit.[135][136] Crantor moves inside Uranus' co-orbital region on a complex, temporary horseshoe orbit.
+ 2010 EU65 is also a promising Uranus horseshoe librator candidate.[136]
+
+ The Uranian rings are composed of extremely dark particles, which vary in size from micrometres to a fraction of a metre.[20] Thirteen distinct rings are presently known, the brightest being the ε ring. All except two rings of Uranus are extremely narrow – they are usually a few kilometres wide. The rings are probably quite young; dynamical considerations indicate that they did not form with Uranus. The matter in the rings may once have been part of a moon (or moons) that was shattered by high-speed impacts. Of the numerous pieces of debris that formed as a result of those impacts, only a few particles survived, in stable zones corresponding to the locations of the present rings.[110][137]
+
+ William Herschel described a possible ring around Uranus in 1789. This sighting is generally considered doubtful, because the rings are quite faint, and in the two following centuries none were noted by other observers. Still, Herschel made an accurate description of the epsilon ring's size, its angle relative to Earth, its red colour, and its apparent changes as Uranus travelled around the Sun.[138][139] The ring system was definitively discovered on 10 March 1977 by James L. Elliot, Edward W. Dunham, and Jessica Mink using the Kuiper Airborne Observatory. The discovery was serendipitous; they planned to use the occultation of the star SAO 158687 (also known as HD 128598) by Uranus to study its atmosphere. When their observations were analysed, they found that the star had disappeared briefly from view five times both before and after it disappeared behind Uranus. They concluded that there must be a ring system around Uranus.[140] Later they detected four additional rings.[140] The rings were directly imaged when Voyager 2 passed Uranus in 1986.[20] Voyager 2 also discovered two additional faint rings, bringing the total number to eleven.[20]
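+
+ The logic of the detection can be sketched with simple sky-plane geometry (a simplified model with made-up numbers, not the actual 1977 reduction): if the star's track passes closest to the planet's centre at time t0 with impact parameter b, a brightness dip at time t samples a ring of radius r = sqrt(b^2 + (v*(t - t0))^2), so matching dips before and after the occultation imply rings centred on the planet:
+
+ import math
+
+ # Ring radius sampled by a brightness dip at time t (seconds from t0).
+ def ring_radius(t, t0, v_km_s, b_km):
+     return math.hypot(b_km, v_km_s * (t - t0))
+
+ for t in (-2000.0, 2000.0):                           # symmetric dips around t0 = 0
+     print(round(ring_radius(t, 0.0, 20.0, 30000.0)))  # 50000 km both times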
+
+ In December 2005, the Hubble Space Telescope detected a pair of previously unknown rings. The largest is located twice as far from Uranus as the previously known rings. These new rings are so far from Uranus that they are called the "outer" ring system. Hubble also spotted two small satellites, one of which, Mab, shares its orbit with the outermost newly discovered ring. The new rings bring the total number of Uranian rings to 13.[141] In April 2006, images of the new rings from the Keck Observatory yielded the colours of the outer rings: the outermost is blue and the other one red.[142][143]
+ One hypothesis concerning the outer ring's blue colour is that it is composed of minute particles of water ice from the surface of Mab that are small enough to scatter blue light.[142][144] In contrast, Uranus' inner rings appear grey.[142]
+
+ Animation of the 1977 discovery occultation.
+
+ Uranus has a complicated planetary ring system, which was the second such system to be discovered in the Solar System after Saturn's.[137]
+
+ Uranus' aurorae against its equatorial rings, imaged by the Hubble telescope. Unlike the aurorae of Earth and Jupiter, those of Uranus are not in line with its poles, due to its lopsided magnetic field.
+
+ In 1986, NASA's Voyager 2 interplanetary probe encountered Uranus. This flyby remains the only investigation of Uranus carried out from a short distance and no other visits are planned. Launched in 1977, Voyager 2 made its closest approach to Uranus on 24 January 1986, coming within 81,500 km (50,600 mi) of the cloudtops, before continuing its journey to Neptune. The spacecraft studied the structure and chemical composition of Uranus' atmosphere,[89] including its unique weather, caused by its axial tilt of 97.77°. It made the first detailed investigations of its five largest moons and discovered 10 new ones. It examined all nine of the system's known rings and discovered two more.[20][110][145] It also studied the magnetic field, its irregular structure, its tilt and its unique corkscrew magnetotail caused by Uranus' sideways orientation.[103]
+
+ Voyager 1 was unable to visit Uranus because investigation of Saturn's moon Titan was considered a priority. The trajectory required for the Titan flyby took Voyager 1 out of the plane of the ecliptic, ending its planetary science mission.[146]:118
+
+ The possibility of sending the Cassini spacecraft from Saturn to Uranus was evaluated during a mission extension planning phase in 2009, but was ultimately rejected in favour of destroying it in the Saturnian atmosphere.[147] It would have taken about twenty years to get to the Uranian system after departing Saturn.[147] A Uranus orbiter and probe was recommended by the 2013–2022 Planetary Science Decadal Survey published in 2011; the proposal envisages launch during 2020–2023 and a 13-year cruise to Uranus.[148] A Uranus entry probe could use Pioneer Venus Multiprobe heritage and descend to 1–5 atmospheres.[148] The ESA evaluated a "medium-class" mission called Uranus Pathfinder.[149] A New Frontiers Uranus Orbiter has been evaluated and recommended in the study The Case for a Uranus Orbiter.[150] Such a mission is aided by the ease with which a relatively large mass can be sent to the system: over 1,500 kg with an Atlas 521 launcher and a 12-year journey.[151] For more concepts, see Proposed Uranus missions.
+
+ In astrology, the planet Uranus is the ruling planet of Aquarius. Because Uranus is cyan in colour and is associated with electricity, the colour electric blue, which is close to cyan, is associated with the sign Aquarius[152] (see Uranus in astrology).
+
+ The chemical element uranium, discovered in 1789 by the German chemist Martin Heinrich Klaproth, was named after the newly discovered planet Uranus.[153]
+
+ "Uranus, the Magician" is a movement in Gustav Holst's orchestral suite The Planets, written between 1914 and 1916.
+
+ Operation Uranus was a successful World War II military operation by the Red Army to retake Stalingrad; it marked the turning point in the land war against the Wehrmacht.
+
+ The lines "Then felt I like some watcher of the skies/When a new planet swims into his ken", from John Keats's "On First Looking into Chapman's Homer", are a reference to Herschel's discovery of Uranus.[154]
+
+ Many references to Uranus in English language popular culture and news involve humour about one pronunciation of its name resembling that of the phrase "your anus".[155]
+
+ Bereits in der am 12ten März 1782 bei der hiesigen naturforschenden Gesellschaft vorgelesenen Abhandlung, habe ich den Namen des Vaters vom Saturn, nemlich Uranos, oder wie er mit der lateinischen Endung gewöhnlicher ist, Uranus vorgeschlagen, und habe seit dem das Vergnügen gehabt, daß verschiedene Astronomen und Mathematiker in ihren Schriften oder in Briefen an mich, diese Benennung aufgenommen oder gebilligt. Meines Erachtens muß man bei dieser Wahl die Mythologie befolgen, aus welcher die uralten Namen der übrigen Planeten entlehnen worden; denn in der Reihe der bisher bekannten, würde der von einer merkwürdigen Person oder Begebenheit der neuern Zeit wahrgenommene Name eines Planeten sehr auffallen. Diodor von Cicilien erzahlt die Geschichte der Atlanten, eines uralten Volks, welches eine der fruchtbarsten Gegenden in Africa bewohnte, und die Meeresküsten seines Landes als das Vaterland der Götter ansah. Uranus war ihr, erster König, Stifter ihres gesitteter Lebens und Erfinder vieler nützlichen Künste. Zugleich wird er auch als ein fleißiger und geschickter Himmelsforscher des Alterthums beschrieben... Noch mehr: Uranus war der Vater des Saturns und des Atlas, so wie der erstere der Vater des Jupiters.
+
+ Already in the treatise read before the local Natural History Society on 12 March 1782, I proposed the name of the father of Saturn, namely Uranos, or, as it is more usual with the Latin ending, Uranus; and I have since had the pleasure that various astronomers and mathematicians have adopted or approved this designation in their writings or in letters to me. In my view, this choice must follow the mythology from which the ancient names of the other planets were borrowed; for in the series of those previously known, the name of a planet taken from a remarkable person or event of modern times would stand out very noticeably. Diodorus of Sicily tells the story of the Atlantes, an ancient people that inhabited one of the most fertile regions of Africa and regarded the sea coasts of their country as the homeland of the gods. Uranus was their first king, founder of their civilised life and inventor of many useful arts. At the same time he is also described as a diligent and skilful astronomer of antiquity ... even more: Uranus was the father of Saturn and of Atlas, just as the former is the father of Jupiter.
+
+ Solar System → Local Interstellar Cloud → Local Bubble → Gould Belt → Orion Arm → Milky Way → Milky Way subgroup → Local Group → Local Sheet → Virgo Supercluster → Laniakea Supercluster → Observable universe → Universe. Each arrow (→) may be read as "within" or "part of".
en/5875.html.txt ADDED
@@ -0,0 +1,154 @@
 
 
+
+
+
+ Uranus is the seventh planet from the Sun. The name "Uranus" is a reference to the Greek god of the sky, Uranus. According to Greek mythology, Uranus was the grandfather of Zeus (Jupiter) and father of Cronus (Saturn). It has the third-largest planetary radius and fourth-largest planetary mass in the Solar System. Uranus is similar in composition to Neptune, and both have bulk chemical compositions which differ from that of the larger gas giants Jupiter and Saturn. For this reason, scientists often classify Uranus and Neptune as "ice giants" to distinguish them from the gas giants. Uranus' atmosphere is similar to Jupiter's and Saturn's in its primary composition of hydrogen and helium, but it contains more "ices" such as water, ammonia, and methane, along with traces of other hydrocarbons.[15] It has the coldest planetary atmosphere in the Solar System, with a minimum temperature of 49 K (−224 °C; −371 °F), and has a complex, layered cloud structure with water thought to make up the lowest clouds and methane the uppermost layer of clouds.[15] The interior of Uranus is mainly composed of ices and rock.[14]
+
+ Like the other giant planets, Uranus has a ring system, a magnetosphere, and numerous moons. The Uranian system has a unique configuration because its axis of rotation is tilted sideways, nearly into the plane of its solar orbit. Its north and south poles, therefore, lie where most other planets have their equators.[20] In 1986, images from Voyager 2 showed Uranus as an almost featureless planet in visible light, without the cloud bands or storms associated with the other giant planets.[20] Voyager 2 remains the only spacecraft to visit the planet.[21] Observations from Earth have shown seasonal change and increased weather activity as Uranus approached its equinox in 2007. Wind speeds can reach 250 metres per second (900 km/h; 560 mph).[22]
+
+ Like the classical planets, Uranus is visible to the naked eye, but it was never recognised as a planet by ancient observers because of its dimness and slow orbit.[23] Sir William Herschel first observed Uranus on 13 March 1781, leading to its discovery as a planet, expanding the known boundaries of the Solar System for the first time in history and making Uranus the first planet classified as such with the aid of a telescope.
+
+
+
+ Uranus had been observed on many occasions before its recognition as a planet, but it was generally mistaken for a star. Possibly the earliest known observation was by Hipparchos, who in 128 BC might have recorded it as a star for his star catalogue that was later incorporated into Ptolemy's Almagest.[24] The earliest definite sighting was in 1690, when John Flamsteed observed it at least six times, cataloguing it as 34 Tauri. The French astronomer Pierre Charles Le Monnier observed Uranus at least twelve times between 1750 and 1769,[25] including on four consecutive nights.
+
+ Sir William Herschel observed Uranus on 13 March 1781 from the garden of his house at 19 New King Street in Bath, Somerset, England (now the Herschel Museum of Astronomy),[26] and initially reported it (on 26 April 1781) as a comet.[27] With a telescope, Herschel "engaged in a series of observations on the parallax of the fixed stars."[28]
+
+ Herschel recorded in his journal: "In the quartile near ζ Tauri ... either [a] Nebulous star or perhaps a comet."[29] On 17 March he noted: "I looked for the Comet or Nebulous Star and found that it is a Comet, for it has changed its place."[30] When he presented his discovery to the Royal Society, he continued to assert that he had found a comet, but also implicitly compared it to a planet:[28]
+
+ The power I had on when I first saw the comet was 227. From experience I know that the diameters of the fixed stars are not proportionally magnified with higher powers, as planets are; therefore I now put the powers at 460 and 932, and found that the diameter of the comet increased in proportion to the power, as it ought to be, on the supposition of its not being a fixed star, while the diameters of the stars to which I compared it were not increased in the same ratio. Moreover, the comet being magnified much beyond what its light would admit of, appeared hazy and ill-defined with these great powers, while the stars preserved that lustre and distinctness which from many thousand observations I knew they would retain. The sequel has shown that my surmises were well-founded, this proving to be the Comet we have lately observed.[28]
+
+ Herschel notified the Astronomer Royal Nevil Maskelyne of his discovery and received this flummoxed reply from him on 23 April 1781: "I don't know what to call it. It is as likely to be a regular planet moving in an orbit nearly circular to the sun as a Comet moving in a very eccentric ellipsis. I have not yet seen any coma or tail to it."[31]
+
+ Although Herschel continued to describe his new object as a comet, other astronomers had already begun to suspect otherwise. Finnish-Swedish astronomer Anders Johan Lexell, working in Russia, was the first to compute the orbit of the new object.[32] Its nearly circular orbit led him to the conclusion that it was a planet rather than a comet. Berlin astronomer Johann Elert Bode described Herschel's discovery as "a moving star that can be deemed a hitherto unknown planet-like object circulating beyond the orbit of Saturn".[33] Bode concluded that its near-circular orbit was more like a planet's than a comet's.[34]
+
+ The object was soon universally accepted as a new planet. By 1783, Herschel acknowledged this to Royal Society president Joseph Banks: "By the observation of the most eminent Astronomers in Europe it appears that the new star, which I had the honour of pointing out to them in March 1781, is a Primary Planet of our Solar System."[35] In recognition of his achievement, King George III gave Herschel an annual stipend of £200 (equivalent to £24,000 in 2019) on condition that he move to Windsor so that the Royal Family could look through his telescopes.[36][37]
+
+ The name of Uranus references the ancient Greek deity of the sky Uranus (Ancient Greek: Οὐρανός), the father of Cronus (Saturn) and grandfather of Zeus (Jupiter), which in Latin became Ūranus (IPA: [ˈuːranʊs]).[1] It is the only planet whose English name is derived directly from a figure of Greek mythology. The adjectival form of Uranus is "Uranian".[38] The pronunciation of the name Uranus preferred among astronomers is /ˈjʊərənəs/,[2] with stress on the first syllable as in Latin Ūranus, in contrast to /jʊˈreɪnəs/, with stress on the second syllable and a long a, though both are considered acceptable.[f]
+
+ Consensus on the name was not reached until almost 70 years after the planet's discovery. During the original discussions following discovery, Maskelyne asked Herschel to "do the astronomical world the faver [sic] to give a name to your planet, which is entirely your own, [and] which we are so much obliged to you for the discovery of".[40] In response to Maskelyne's request, Herschel decided to name the object Georgium Sidus (George's Star), or the "Georgian Planet" in honour of his new patron, King George III.[41] He explained this decision in a letter to Joseph Banks:[35]
+
+ In the fabulous ages of ancient times the appellations of Mercury, Venus, Mars, Jupiter and Saturn were given to the Planets, as being the names of their principal heroes and divinities. In the present more philosophical era it would hardly be allowable to have recourse to the same method and call it Juno, Pallas, Apollo or Minerva, for a name to our new heavenly body. The first consideration of any particular event, or remarkable incident, seems to be its chronology: if in any future age it should be asked, when this last-found Planet was discovered? It would be a very satisfactory answer to say, 'In the reign of King George the Third'.
+
+ Herschel's proposed name was not popular outside Britain, and alternatives were soon proposed. Astronomer Jérôme Lalande proposed that it be named Herschel in honour of its discoverer.[42] Swedish astronomer Erik Prosperin proposed the name Neptune, which was supported by other astronomers who liked the idea of commemorating the victories of the British Royal Navy in the course of the American Revolutionary War by calling the new planet Neptune George III or even Neptune Great Britain.[32]
+
+ In a March 1782 treatise, Bode proposed Uranus, the Latinised version of the Greek god of the sky, Ouranos.[43] Bode argued that the name should follow the mythology so as not to stand out as different from the other planets, and that Uranus was an appropriate name as the father of the first generation of the Titans.[43] He also noted the elegance of the name: just as Saturn was the father of Jupiter, the new planet should be named after the father of Saturn.[37][43][44][45] In 1789, Bode's Royal Academy colleague Martin Klaproth named his newly discovered element uranium in support of Bode's choice.[46] Ultimately, Bode's suggestion became the most widely used, and became universal in 1850 when HM Nautical Almanac Office, the final holdout, switched from using Georgium Sidus to Uranus.[44]
+
+ Uranus has two astronomical symbols. The first to be proposed, ♅,[g] was suggested by Lalande in 1784. In a letter to Herschel, Lalande described it as "un globe surmonté par la première lettre de votre nom" ("a globe surmounted by the first letter of your surname").[42] A later proposal, ⛢,[h] is a hybrid of the symbols for Mars and the Sun because Uranus was the Sky in Greek mythology, which was thought to be dominated by the combined powers of the Sun and Mars.[47]
+
+ Uranus is called by a variety of translations in other languages. In Chinese, Japanese, Korean, and Vietnamese, its name is literally translated as the "sky king star" (天王星).[48][49][50][51] In Thai, its official name is Dao Yurenat (ดาวยูเรนัส), as in English. Its other name in Thai is Dao Maritayu (ดาวมฤตยู, Star of Mṛtyu), after the Sanskrit word for 'death', Mrtyu (मृत्यु). In Mongolian, its name is Tengeriin Van (Тэнгэрийн ван), translated as 'King of the Sky', reflecting its namesake god's role as the ruler of the heavens. In Hawaiian, its name is Heleʻekala, a loanword for the discoverer Herschel.[52] In Māori, its name is Whērangi.[53][54]
+
+ Uranus orbits the Sun once every 84 years, taking an average of seven years to pass through each constellation of the zodiac. In 2033, the planet will have made its third complete orbit around the Sun since being discovered in 1781. The planet has returned to the point of its discovery northeast of Zeta Tauri twice since then, in 1862 and 1943, one day later each time as the precession of the equinoxes has shifted it 1° west every 72 years. Uranus will return to this location again in 2030-31. Its average distance from the Sun is roughly 20 AU (3 billion km; 2 billion mi). The difference between its minimum and maximum distance from the Sun is 1.8 AU, larger than that of any other planet, though not as large as that of dwarf planet Pluto.[55] The intensity of sunlight varies inversely with the square of distance, and so on Uranus (at about 20 times the distance from the Sun compared to Earth) it is about 1/400 the intensity of light on Earth.[56] Its orbital elements were first calculated in 1783 by Pierre-Simon Laplace.[57] With time, discrepancies began to appear between the predicted and observed orbits, and in 1841, John Couch Adams first proposed that the differences might be due to the gravitational tug of an unseen planet. In 1845, Urbain Le Verrier began his own independent research into Uranus' orbit. On 23 September 1846, Johann Gottfried Galle located a new planet, later named Neptune, at nearly the position predicted by Le Verrier.[58]
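+
+ The 1/400 figure is just the inverse-square law (a one-line check added here for concreteness):
+
+ # Sunlight intensity relative to Earth at ~20 times the Earth-Sun distance.
+ print(1.0 / 20.0**2)  # 0.0025, i.e. 1/400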
+
+ The rotational period of the interior of Uranus is 17 hours, 14 minutes. As on all the giant planets, its upper atmosphere experiences strong winds in the direction of rotation. At some latitudes, such as about 60 degrees south, visible features of the atmosphere move much faster, making a full rotation in as little as 14 hours.[59]
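+
+ The gap between the 14-hour cloud rotation and the 17 h 14 min interior rotation implies a strong zonal wind. A rough estimate follows (a sketch using the 1-bar equatorial radius quoted later in the article and ignoring oblateness, so order-of-magnitude only):
+
+ import math
+
+ # Eastward wind at 60 degrees south implied by the two rotation periods.
+ radius_km = 25559.0
+ circumference_km = 2 * math.pi * radius_km * math.cos(math.radians(60))
+ wind_km_h = circumference_km / 14.0 - circumference_km / (17.0 + 14.0 / 60.0)
+ print(round(wind_km_h))  # ~1100 km/h relative to the interior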
+
+ The Uranian axis of rotation is approximately parallel with the plane of the Solar System, with an axial tilt of 97.77° (as defined by prograde rotation). This gives it seasonal changes completely unlike those of the other planets. Near the solstice, one pole faces the Sun continuously and the other faces away. Only a narrow strip around the equator experiences a rapid day–night cycle, but with the Sun low over the horizon. At the other side of Uranus' orbit the orientation of the poles towards the Sun is reversed. Each pole gets around 42 years of continuous sunlight, followed by 42 years of darkness.[60] Near the time of the equinoxes, the Sun faces the equator of Uranus giving a period of day–night cycles similar to those seen on most of the other planets.
+
+ Uranus reached its most recent equinox on 7 December 2007.[61][62]
+
+ One result of this axis orientation is that, averaged over the Uranian year, the polar regions of Uranus receive a greater energy input from the Sun than its equatorial regions. Nevertheless, Uranus is hotter at its equator than at its poles. The underlying mechanism that causes this is unknown. The reason for Uranus' unusual axial tilt is also not known with certainty, but the usual speculation is that during the formation of the Solar System, an Earth-sized protoplanet collided with Uranus, causing the skewed orientation.[63] Research by Jacob Kegerreis of Durham University suggests that the tilt resulted from a rock larger than the Earth crashing into the planet 3 to 4 billion years ago.[64]
+ Uranus' south pole was pointed almost directly at the Sun at the time of Voyager 2's flyby in 1986. The labelling of this pole as "south" uses the definition currently endorsed by the International Astronomical Union, namely that the north pole of a planet or satellite is the pole that points above the invariable plane of the Solar System, regardless of the direction the planet is spinning.[65][66] A different convention is sometimes used, in which a body's north and south poles are defined according to the right-hand rule in relation to the direction of rotation.[67]
+
+ The mean apparent magnitude of Uranus is 5.68 with a standard deviation of 0.17, while the extremes are 5.38 and 6.03.[16] This range of brightness is near the limit of naked eye visibility. Much of the variability depends upon which planetary latitudes are illuminated by the Sun and viewed from the Earth.[68] Its angular diameter is between 3.4 and 3.7 arcseconds, compared with 16 to 20 arcseconds for Saturn and 32 to 45 arcseconds for Jupiter.[69] At opposition, Uranus is visible to the naked eye in dark skies, and becomes an easy target even in urban conditions with binoculars.[6] In larger amateur telescopes with an objective diameter of between 15 and 23 cm, Uranus appears as a pale cyan disk with distinct limb darkening. With a large telescope of 25 cm or wider, cloud patterns, as well as some of the larger satellites, such as Titania and Oberon, may be visible.[70]
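+
+ The quoted angular size follows from the planet's radius and distance (a sanity check assuming a round 19 AU viewing distance, a figure supplied here):
+
+ import math
+
+ radius_km = 25559.0
+ distance_km = 19.0 * 1.495979e8              # 19 AU in km
+ theta_arcsec = 2 * radius_km / distance_km * (180 / math.pi) * 3600
+ print(theta_arcsec)                          # ~3.7 arcseconds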
+
+ Uranus' mass is roughly 14.5 times that of Earth, making it the least massive of the giant planets. Its diameter is slightly larger than Neptune's at roughly four times that of Earth. A resulting density of 1.27 g/cm3 makes Uranus the second least dense planet, after Saturn.[9][10] This value indicates that it is made primarily of various ices, such as water, ammonia, and methane.[14] The total mass of ice in Uranus' interior is not precisely known, because different figures emerge depending on the model chosen; it must be between 9.3 and 13.5 Earth masses.[14][71] Hydrogen and helium constitute only a small part of the total, with between 0.5 and 1.5 Earth masses.[14] The remainder of the non-ice mass (0.5 to 3.7 Earth masses) is accounted for by rocky material.[14]
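+
+ The quoted density is consistent with the mass and size figures (a quick check; Earth's mass and Uranus' volumetric mean radius are standard values supplied here):
+
+ import math
+
+ mass_kg = 14.5 * 5.972e24                    # 14.5 Earth masses
+ radius_m = 2.5362e7                          # volumetric mean radius in metres
+ volume_m3 = 4.0 / 3.0 * math.pi * radius_m**3
+ print(mass_kg / volume_m3)                   # ~1.27e3 kg/m^3 = 1.27 g/cm^3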
+
+ The standard model of Uranus' structure is that it consists of three layers: a rocky (silicate/iron–nickel) core in the centre, an icy mantle in the middle and an outer gaseous hydrogen/helium envelope.[14][72] The core is relatively small, with a mass of only 0.55 Earth masses and a radius less than 20% of Uranus'; the mantle comprises its bulk, with around 13.4 Earth masses, and the upper atmosphere is relatively insubstantial, weighing about 0.5 Earth masses and extending for the last 20% of Uranus' radius.[14][72] Uranus' core density is around 9 g/cm3, with a pressure in the centre of 8 million bars (800 GPa) and a temperature of about 5000 K.[71][72] The ice mantle is not in fact composed of ice in the conventional sense, but of a hot and dense fluid consisting of water, ammonia and other volatiles.[14][72] This fluid, which has a high electrical conductivity, is sometimes called a water–ammonia ocean.[73]
+
+ The extreme pressure and temperature deep within Uranus may break up the methane molecules, with the carbon atoms condensing into crystals of diamond that rain down through the mantle like hailstones.[74][75][76] Very-high-pressure experiments at the Lawrence Livermore National Laboratory suggest that the base of the mantle may comprise an ocean of liquid diamond, with floating solid 'diamond-bergs'.[77][78] Scientists also believe that rainfalls of solid diamonds occur on Uranus, as well as on Jupiter, Saturn, and Neptune.[79][80]
+
+ The bulk compositions of Uranus and Neptune are different from those of Jupiter and Saturn, with ice dominating over gases, hence justifying their separate classification as ice giants. There may be a layer of ionic water where the water molecules break down into a soup of hydrogen and oxygen ions, and deeper down superionic water in which the oxygen crystallises but the hydrogen ions move freely within the oxygen lattice.[81]
+
+ Although the model considered above is reasonably standard, it is not unique; other models also satisfy observations. For instance, if substantial amounts of hydrogen and rocky material are mixed in the ice mantle, the total mass of ices in the interior will be lower, and, correspondingly, the total mass of rocks and hydrogen will be higher. Presently available data does not allow a scientific determination of which model is correct.[71] The fluid interior structure of Uranus means that it has no solid surface. The gaseous atmosphere gradually transitions into the internal liquid layers.[14] For the sake of convenience, a revolving oblate spheroid set at the point at which atmospheric pressure equals 1 bar (100 kPa) is conditionally designated as a "surface". It has equatorial and polar radii of 25,559 ± 4 km (15,881.6 ± 2.5 mi) and 24,973 ± 20 km (15,518 ± 12 mi), respectively.[9] This surface is used throughout this article as a zero point for altitudes.
+
+ Uranus' internal heat appears markedly lower than that of the other giant planets; in astronomical terms, it has a low thermal flux.[22][82] Why Uranus' internal temperature is so low is still not understood. Neptune, which is Uranus' near twin in size and composition, radiates 2.61 times as much energy into space as it receives from the Sun,[22] but Uranus radiates hardly any excess heat at all. The total power radiated by Uranus in the far infrared (i.e. heat) part of the spectrum is 1.06±0.08 times the solar energy absorbed in its atmosphere.[15][83] Uranus' heat flux is only 0.042±0.047 W/m2, which is lower than the internal heat flux of Earth of about 0.075 W/m2.[83] The lowest temperature recorded in Uranus' tropopause is 49 K (−224.2 °C; −371.5 °F), making Uranus the coldest planet in the Solar System.[15][83]
+
+ One of the hypotheses for this discrepancy is that when Uranus was hit by a supermassive impactor, it expelled most of its primordial heat and was left with a depleted core temperature.[84] This impact hypothesis is also used in some attempts to explain the planet's axial tilt. Another hypothesis is that some form of barrier exists in Uranus' upper layers that prevents the core's heat from reaching the surface.[14] For example, convection may take place in a set of compositionally different layers, which may inhibit the upward heat transport;[15][83] perhaps double-diffusive convection is a limiting factor.[14]
+
+ Although there is no well-defined solid surface within Uranus' interior, the outermost part of Uranus' gaseous envelope that is accessible to remote sensing is called its atmosphere.[15] Remote-sensing capability extends down to roughly 300 km below the 1 bar (100 kPa) level, with a corresponding pressure around 100 bar (10 MPa) and temperature of 320 K (47 °C; 116 °F).[86] The tenuous thermosphere extends over two planetary radii from the nominal surface, which is defined to lie at a pressure of 1 bar.[87] The Uranian atmosphere can be divided into three layers: the troposphere, between altitudes of −300 and 50 km (−186 and 31 mi) and pressures from 100 to 0.1 bar (10 MPa to 10 kPa); the stratosphere, spanning altitudes between 50 and 4,000 km (31 and 2,485 mi) and pressures of between 0.1 and 10−10 bar (10 kPa to 10 µPa); and the thermosphere extending from 4,000 km to as high as 50,000 km from the surface.[15] There is no mesosphere.
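+
+ The quoted boundaries lend themselves to a simple lookup (altitudes in km relative to the 1 bar level; a convenience sketch, not a physical model):
+
+ def uranus_layer(altitude_km):
+     if -300 <= altitude_km < 50:
+         return "troposphere"
+     if 50 <= altitude_km < 4000:
+         return "stratosphere"
+     if 4000 <= altitude_km <= 50000:
+         return "thermosphere/corona"
+     return "outside the quoted range"
+
+ print(uranus_layer(25))    # troposphere
+ print(uranus_layer(300))   # stratosphere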
+
+ The composition of Uranus' atmosphere is different from its bulk, consisting mainly of molecular hydrogen and helium.[15] The helium molar fraction, i.e. the number of helium atoms per molecule of gas, is 0.15±0.03[19] in the upper troposphere, which corresponds to a mass fraction 0.26±0.05.[15][83] This value is close to the protosolar helium mass fraction of 0.275±0.01,[88] indicating that helium has not settled in its centre as it has in the gas giants.[15] The third-most-abundant component of Uranus' atmosphere is methane (CH4).[15] Methane has prominent absorption bands in the visible and near-infrared (IR), making Uranus aquamarine or cyan in colour.[15] Methane molecules account for 2.3% of the atmosphere by molar fraction below the methane cloud deck at the pressure level of 1.3 bar (130 kPa); this represents about 20 to 30 times the carbon abundance found in the Sun.[15][18][89] The mixing ratio[i] is much lower in the upper atmosphere due to its extremely low temperature, which lowers the saturation level and causes excess methane to freeze out.[90] The abundances of less volatile compounds such as ammonia, water, and hydrogen sulfide in the deep atmosphere are poorly known. They are probably also higher than solar values.[15][91] Along with methane, trace amounts of various hydrocarbons are found in the stratosphere of Uranus, which are thought to be produced from methane by photolysis induced by the solar ultraviolet (UV) radiation.[92] They include ethane (C2H6), acetylene (C2H2), methylacetylene (CH3C2H), and diacetylene (C2HC2H).[90][93][94] Spectroscopy has also uncovered traces of water vapour, carbon monoxide and carbon dioxide in the upper atmosphere, which can only originate from an external source such as infalling dust and comets.[93][94][95]
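+
+ The molar-to-mass-fraction conversion is straightforward to reproduce (assuming, for this sketch, that the remaining gas is essentially molecular hydrogen):
+
+ # Molar fraction of helium and molar masses in g/mol.
+ x_he, m_he, m_h2 = 0.15, 4.003, 2.016
+ mass_fraction = x_he * m_he / (x_he * m_he + (1 - x_he) * m_h2)
+ print(mass_fraction)  # ~0.26, matching the quoted 0.26 +/- 0.05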
+
+ The troposphere is the lowest and densest part of the atmosphere and is characterised by a decrease in temperature with altitude.[15] The temperature falls from about 320 K (47 °C; 116 °F) at the base of the nominal troposphere at −300 km to 53 K (−220 °C; −364 °F) at 50 km.[86][89] The temperatures in the coldest upper region of the troposphere (the tropopause) actually vary in the range between 49 and 57 K (−224 and −216 °C; −371 and −357 °F) depending on planetary latitude.[15][82] The tropopause region is responsible for the vast majority of Uranus' thermal far infrared emissions, thus determining its effective temperature of 59.1 ± 0.3 K (−214.1 ± 0.3 °C; −353.3 ± 0.5 °F).[82][83]
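+
+ The effective temperature can be estimated from energy balance (a sketch: the solar constant and a Bond albedo of ~0.30 are standard values supplied here; the 1.06 factor is the emitted-to-absorbed ratio quoted earlier):
+
+ sigma = 5.670e-8                              # Stefan-Boltzmann, W m^-2 K^-4
+ absorbed = 1361.0 / 19.2**2 * (1 - 0.30) / 4  # mean absorbed flux, W/m^2
+ print((1.06 * absorbed / sigma) ** 0.25)      # ~59 K, close to the quoted 59.1 K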
+
+ The troposphere is thought to have a highly complex cloud structure; water clouds are hypothesised to lie in the pressure range of 50 to 100 bar (5 to 10 MPa), ammonium hydrosulfide clouds in the range of 20 to 40 bar (2 to 4 MPa), ammonia or hydrogen sulfide clouds at between 3 and 10 bar (0.3 and 1 MPa) and finally directly detected thin methane clouds at 1 to 2 bar (0.1 to 0.2 MPa).[15][18][86][96] The troposphere is a dynamic part of the atmosphere, exhibiting strong winds, bright clouds and seasonal changes.[22]
+
+ The middle layer of the Uranian atmosphere is the stratosphere, where temperature generally increases with altitude from 53 K (−220 °C; −364 °F) in the tropopause to between 800 and 850 K (527 and 577 °C; 980 and 1,070 °F) at the base of the thermosphere.[87] The heating of the stratosphere is caused by absorption of solar UV and IR radiation by methane and other hydrocarbons,[98] which form in this part of the atmosphere as a result of methane photolysis.[92] Heat is also conducted from the hot thermosphere.[98] The hydrocarbons occupy a relatively narrow layer at altitudes of between 100 and 300 km, corresponding to a pressure range of 1000 to 10 Pa and temperatures of between 75 and 170 K (−198 and −103 °C; −325 and −154 °F).[90][93] The most abundant hydrocarbons are methane, acetylene and ethane, with mixing ratios of around 10−7 relative to hydrogen. The mixing ratio of carbon monoxide is similar at these altitudes.[90][93][95] Heavier hydrocarbons and carbon dioxide have mixing ratios three orders of magnitude lower.[93] The abundance ratio of water is around 7×10−9.[94] Ethane and acetylene tend to condense in the colder lower part of the stratosphere and tropopause (below the 10 mbar level), forming haze layers,[92] which may be partly responsible for the bland appearance of Uranus. The concentration of hydrocarbons in the Uranian stratosphere above the haze is significantly lower than in the stratospheres of the other giant planets.[90][99]
+
+ The outermost layer of the Uranian atmosphere is the thermosphere and corona, which has a uniform temperature of around 800 to 850 K.[15][99] The heat sources necessary to sustain such high temperatures are not understood, as neither the solar UV nor the auroral activity can provide the necessary energy to maintain these temperatures. The weak cooling efficiency due to the lack of hydrocarbons in the stratosphere above the 0.1 mbar pressure level may also contribute.[87][99] In addition to molecular hydrogen, the thermosphere-corona contains many free hydrogen atoms. Their small mass and high temperatures explain why the corona extends as far as 50,000 km (31,000 mi), or two Uranian radii, from its surface.[87][99] This extended corona is a unique feature of Uranus.[99] Its effects include a drag on small particles orbiting Uranus, causing a general depletion of dust in the Uranian rings.[87] The Uranian thermosphere, together with the upper part of the stratosphere, corresponds to the ionosphere of Uranus.[89] Observations show that the ionosphere occupies altitudes from 2,000 to 10,000 km (1,200 to 6,200 mi).[89] The Uranian ionosphere is denser than that of either Saturn or Neptune, which may arise from the low concentration of hydrocarbons in the stratosphere.[99][100] The ionosphere is mainly sustained by solar UV radiation and its density depends on the solar activity.[101] Auroral activity is insignificant compared to that of Jupiter and Saturn.[99][102]
+
+ Temperature profile of the Uranian troposphere and lower stratosphere. Cloud and haze layers are also indicated.
+
+ Zonal wind speeds on Uranus. Shaded areas show the southern collar and its future northern counterpart. The red curve is a symmetrical fit to the data.
+
+ Before the arrival of Voyager 2, no measurements of the Uranian magnetosphere had been taken, so its nature remained a mystery. Before 1986, scientists had expected the magnetic field of Uranus to be in line with the solar wind, because it would then align with Uranus' poles that lie in the ecliptic.[103]
+
+ Voyager's observations revealed that Uranus' magnetic field is peculiar, both because it does not originate from its geometric centre, and because it is tilted at 59° from the axis of rotation.[103][104] In fact the magnetic dipole is shifted from Uranus' centre towards the south rotational pole by as much as one third of the planetary radius.[103] This unusual geometry results in a highly asymmetric magnetosphere, where the magnetic field strength on the surface in the southern hemisphere can be as low as 0.1 gauss (10 µT), whereas in the northern hemisphere it can be as high as 1.1 gauss (110 µT).[103] The average field at the surface is 0.23 gauss (23 µT).[103] Studies of Voyager 2 data in 2017 suggest that this asymmetry causes Uranus' magnetosphere to connect with the solar wind once a Uranian day, opening the planet to the Sun's particles.[105] In comparison, the magnetic field of Earth is roughly as strong at either pole, and its "magnetic equator" is roughly parallel with its geographical equator.[104] The dipole moment of Uranus is 50 times that of Earth.[103][104] Neptune has a similarly displaced and tilted magnetic field, suggesting that this may be a common feature of ice giants.[104] One hypothesis is that, unlike the magnetic fields of the terrestrial and gas giants, which are generated within their cores, the ice giants' magnetic fields are generated by motion at relatively shallow depths, for instance, in the water–ammonia ocean.[73][106] Another possible explanation for the magnetosphere's alignment is that there are oceans of liquid diamond in Uranus' interior that would deter the magnetic field.[77]
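+
+ The hemispheric field asymmetry is roughly what a dipole shifted by a third of the radius would produce by itself, since a dipole field falls off as 1/r^3 (a back-of-envelope check added here that ignores the 59° tilt):
+
+ offset = 1.0 / 3.0
+ print(((1 + offset) / (1 - offset)) ** 3)  # 8.0: predicted pole-to-pole ratio
+ print(1.1 / 0.1)                           # ~11: observed ratio of surface fields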
+
+ Despite its curious alignment, in other respects the Uranian magnetosphere is like those of other planets: it has a bow shock at about 23 Uranian radii ahead of it, a magnetopause at 18 Uranian radii, a fully developed magnetotail, and radiation belts.[103][104][107] Overall, the structure of Uranus' magnetosphere is different from Jupiter's and more similar to Saturn's.[103][104] Uranus' magnetotail trails behind it into space for millions of kilometres and is twisted by its sideways rotation into a long corkscrew.[103][108]
+
+ Uranus' magnetosphere contains charged particles: mainly protons and electrons, with a small amount of H2+ ions.[104][107] Many of these particles probably derive from the thermosphere.[107] The ion and electron energies can be as high as 4 and 1.2 megaelectronvolts, respectively.[107] The density of low-energy (below 1 kiloelectronvolt) ions in the inner magnetosphere is about 2 cm−3.[109] The particle population is strongly affected by the Uranian moons, which sweep through the magnetosphere, leaving noticeable gaps.[107] The particle flux is high enough to cause darkening or space weathering of their surfaces on an astronomically rapid timescale of 100,000 years.[107] This may be the cause of the uniformly dark colouration of the Uranian satellites and rings.[110] Uranus has relatively well developed aurorae, which are seen as bright arcs around both magnetic poles.[99] Unlike Jupiter's, Uranus' aurorae seem to be insignificant for the energy balance of the planetary thermosphere.[102]
+
+ In March 2020, NASA astronomers reported the detection of a large atmospheric magnetic bubble, also known as a plasmoid, released into outer space from the planet Uranus, after reevaluating old data recorded by the Voyager 2 space probe during a flyby of the planet in 1986.[111][112]
+
+ At ultraviolet and visible wavelengths, Uranus' atmosphere is bland in comparison to the other giant planets, even to Neptune, which it otherwise closely resembles.[22] When Voyager 2 flew by Uranus in 1986, it observed a total of ten cloud features across the entire planet.[20][113] One proposed explanation for this dearth of features is that Uranus' internal heat appears markedly lower than that of the other giant planets. The lowest temperature recorded in Uranus' tropopause is 49 K (−224 °C; −371 °F), making Uranus the coldest planet in the Solar System.[15][83]
+
+ In 1986, Voyager 2 found that the visible southern hemisphere of Uranus can be subdivided into two regions: a bright polar cap and dark equatorial bands.[20] Their boundary is located at about −45° of latitude. A narrow band straddling the latitudinal range from −45 to −50° is the brightest large feature on its visible surface.[20][114] It is called a southern "collar". The cap and collar are thought to be a dense region of methane clouds located within the pressure range of 1.3 to 2 bar (see above).[115] Besides the large-scale banded structure, Voyager 2 observed ten small bright clouds, most lying several degrees to the north of the collar.[20] In all other respects Uranus looked like a dynamically dead planet in 1986. Voyager 2 arrived during the height of Uranus' southern summer and could not observe the northern hemisphere. At the beginning of the 21st century, when the northern polar region came into view, the Hubble Space Telescope (HST) and Keck telescope initially observed neither a collar nor a polar cap in the northern hemisphere.[114] So Uranus appeared to be asymmetric: bright near the south pole and uniformly dark in the region north of the southern collar.[114] In 2007, when Uranus passed its equinox, the southern collar almost disappeared, and a faint northern collar emerged near 45° of latitude.[116]
+
+ In the 1990s, the number of observed bright cloud features grew considerably, partly because new high-resolution imaging techniques became available.[22] Most were found in the northern hemisphere as it started to become visible.[22] An early explanation, that bright clouds are easier to identify in the dark northern hemisphere whereas in the southern hemisphere the bright collar masks them, was shown to be incorrect.[117][118] Nevertheless, there are differences between the clouds of each hemisphere. The northern clouds are smaller, sharper and brighter.[118] They appear to lie at a higher altitude.[118] The lifetime of clouds spans several orders of magnitude: some small clouds live for hours, while at least one southern cloud may have persisted since the Voyager 2 flyby.[22][113] Recent observations have also shown that cloud features on Uranus have a lot in common with those on Neptune.[22] For example, the dark spots common on Neptune had never been observed on Uranus before 2006, when the first such feature, dubbed Uranus Dark Spot, was imaged.[119] The speculation is that Uranus is becoming more Neptune-like during its equinoctial season.[120]
+
+ The tracking of numerous cloud features allowed determination of zonal winds blowing in the upper troposphere of Uranus.[22] At the equator winds are retrograde, which means that they blow in the reverse direction to the planetary rotation. Their speeds are from −360 to −180 km/h (−220 to −110 mph).[22][114] Wind speeds increase with the distance from the equator, reaching zero values near ±20° latitude, where the troposphere's temperature minimum is located.[22][82] Closer to the poles, the winds shift to a prograde direction, flowing with Uranus' rotation. Wind speeds continue to increase reaching maxima at ±60° latitude before falling to zero at the poles.[22] Wind speeds at −40° latitude range from 540 to 720 km/h (340 to 450 mph). Because the collar obscures all clouds below that parallel, speeds between it and the southern pole are impossible to measure.[22] In contrast, in the northern hemisphere maximum speeds as high as 860 km/h (540 mph) are observed near +50° latitude.[22][114][121]
+
+ For a short period from March to May 2004, large clouds appeared in the Uranian atmosphere, giving it a Neptune-like appearance.[118][122] Observations included record-breaking wind speeds of 820 km/h (510 mph) and a persistent thunderstorm referred to as "Fourth of July fireworks".[113] On 23 August 2006, researchers at the Space Science Institute (Boulder, Colorado) and the University of Wisconsin observed a dark spot on Uranus' surface, giving scientists more insight into Uranus' atmospheric activity.[119] Why this sudden upsurge in activity occurred is not fully known, but it appears that Uranus' extreme axial tilt results in extreme seasonal variations in its weather.[62][120] Determining the nature of this seasonal variation is difficult because good data on Uranus' atmosphere have existed for less than 84 years, or one full Uranian year. Photometry over the course of half a Uranian year (beginning in the 1950s) has shown regular variation in brightness in two spectral bands, with maxima occurring at the solstices and minima occurring at the equinoxes.[123] A similar periodic variation, with maxima at the solstices, has been noted in microwave measurements of the deep troposphere begun in the 1960s.[124] Stratospheric temperature measurements beginning in the 1970s also showed maximum values near the 1986 solstice.[98] The majority of this variability is thought to occur owing to changes in the viewing geometry.[117]
+
+ There are some indications that physical seasonal changes are happening on Uranus. Although Uranus is known to have a bright south polar region, the north pole is fairly dim, which is incompatible with the model of the seasonal change outlined above.[120] During its previous northern solstice in 1944, Uranus displayed elevated levels of brightness, which suggests that the north pole was not always so dim.[123] This information implies that the visible pole brightens some time before the solstice and darkens after the equinox.[120] Detailed analysis of the visible and microwave data revealed that the periodic changes of brightness are not completely symmetrical around the solstices, which also indicates a change in the meridional albedo patterns.[120] In the 1990s, as Uranus moved away from its solstice, Hubble and ground-based telescopes revealed that the south polar cap darkened noticeably (except the southern collar, which remained bright),[115] whereas the northern hemisphere demonstrated increasing activity,[113] such as cloud formations and stronger winds, bolstering expectations that it should brighten soon.[118] This indeed happened in 2007 when it passed an equinox: a faint northern polar collar arose, and the southern collar became nearly invisible, although the zonal wind profile remained slightly asymmetric, with northern winds being somewhat slower than southern.[116]
+
+ The mechanism of these physical changes is still not clear.[120] Near the summer and winter solstices, Uranus' hemispheres lie alternately either in full glare of the Sun's rays or facing deep space. The brightening of the sunlit hemisphere is thought to result from the local thickening of the methane clouds and haze layers located in the troposphere.[115] The bright collar at −45° latitude is also connected with methane clouds.[115] Other changes in the southern polar region can be explained by changes in the lower cloud layers.[115] The variation of the microwave emission from Uranus is probably caused by changes in the deep tropospheric circulation, because thick polar clouds and haze may inhibit convection.[125] Now that the spring and autumn equinoxes are arriving on Uranus, the dynamics are changing and convection can occur again.[113][125]
+
+ Many argue that the differences between the ice giants and the gas giants extend to their formation.[126][127] The Solar System is hypothesised to have formed from a giant rotating ball of gas and dust known as the presolar nebula. Much of the nebula's gas, primarily hydrogen and helium, formed the Sun, and the dust grains collected together to form the first protoplanets. As the planets grew, some of them eventually accreted enough matter for their gravity to hold on to the nebula's leftover gas.[126][127] The more gas they held onto, the larger they became; the larger they became, the more gas they held onto until a critical point was reached, and their size began to increase exponentially. The ice giants, with only a few Earth masses of nebular gas, never reached that critical point.[126][127][128] Recent simulations of planetary migration have suggested that both ice giants formed closer to the Sun than their present positions, and moved outwards after formation (the Nice model).[126]
+
+ Uranus has 27 known natural satellites.[128] The names of these satellites are chosen from characters in the works of Shakespeare and Alexander Pope.[72][129] The five main satellites are Miranda, Ariel, Umbriel, Titania, and Oberon.[72] The Uranian satellite system is the least massive among those of the giant planets; the combined mass of the five major satellites would be less than half that of Triton (largest moon of Neptune) alone.[10] The largest of Uranus' satellites, Titania, has a radius of only 788.9 km (490.2 mi), less than half that of the Moon but slightly more than that of Rhea, the second-largest satellite of Saturn, making Titania the eighth-largest moon in the Solar System. Uranus' satellites have relatively low albedos, ranging from 0.20 for Umbriel to 0.35 for Ariel (in green light).[20] They are ice–rock conglomerates composed of roughly 50% ice and 50% rock. The ice may include ammonia and carbon dioxide.[110][130]
+
+ Among the Uranian satellites, Ariel appears to have the youngest surface with the fewest impact craters and Umbriel's the oldest.[20][110] Miranda has fault canyons 20 km (12 mi) deep, terraced layers, and a chaotic variation in surface ages and features.[20] Miranda's past geologic activity is thought to have been driven by tidal heating at a time when its orbit was more eccentric than currently, probably as a result of a former 3:1 orbital resonance with Umbriel.[131] Extensional processes associated with upwelling diapirs are the likely origin of Miranda's 'racetrack'-like coronae.[132][133] Ariel is thought to have once been held in a 4:1 resonance with Titania.[134]
+
+ Uranus has at least one horseshoe orbiter, 83982 Crantor, which occupies the Sun–Uranus L3 Lagrangian point, a gravitationally unstable region at 180° in its orbit.[135][136] Crantor moves inside Uranus' co-orbital region on a complex, temporary horseshoe orbit.
+ 2010 EU65 is also a promising Uranus horseshoe librator candidate.[136]
+
+ The Uranian rings are composed of extremely dark particles, which vary in size from micrometres to a fraction of a metre.[20] Thirteen distinct rings are presently known, the brightest being the ε ring. All except two rings of Uranus are extremely narrow – they are usually a few kilometres wide. The rings are probably quite young; dynamical considerations indicate that they did not form with Uranus. The matter in the rings may once have been part of a moon (or moons) that was shattered by high-speed impacts. Of the numerous pieces of debris that formed as a result of those impacts, only a few particles survived, in stable zones corresponding to the locations of the present rings.[110][137]
+
+ William Herschel described a possible ring around Uranus in 1789. This sighting is generally considered doubtful, because the rings are quite faint, and in the two following centuries none were noted by other observers. Still, Herschel made an accurate description of the epsilon ring's size, its angle relative to Earth, its red colour, and its apparent changes as Uranus travelled around the Sun.[138][139] The ring system was definitively discovered on 10 March 1977 by James L. Elliot, Edward W. Dunham, and Jessica Mink using the Kuiper Airborne Observatory. The discovery was serendipitous; they planned to use the occultation of the star SAO 158687 (also known as HD 128598) by Uranus to study its atmosphere. When their observations were analysed, they found that the star had disappeared briefly from view five times both before and after it disappeared behind Uranus. They concluded that there must be a ring system around Uranus.[140] Later they detected four additional rings.[140] The rings were directly imaged when Voyager 2 passed Uranus in 1986.[20] Voyager 2 also discovered two additional faint rings, bringing the total number to eleven.[20]
+
+ In December 2005, the Hubble Space Telescope detected a pair of previously unknown rings. The largest is located twice as far from Uranus as the previously known rings. These new rings are so far from Uranus that they are called the "outer" ring system. Hubble also spotted two small satellites, one of which, Mab, shares its orbit with the outermost newly discovered ring. The new rings bring the total number of Uranian rings to 13.[141] In April 2006, images of the new rings from the Keck Observatory yielded the colours of the outer rings: the outermost is blue and the other one red.[142][143]
+ One hypothesis concerning the outer ring's blue colour is that it is composed of minute particles of water ice from the surface of Mab that are small enough to scatter blue light.[142][144] In contrast, Uranus' inner rings appear grey.[142]
+
+ Animation of the stellar occultation in 1977 that led to the discovery of the rings.
+
+ Uranus has a complicated planetary ring system, which was the second such system to be discovered in the Solar System after Saturn's.[137]
+
+ Uranus' aurorae against its equatorial rings, imaged by the Hubble telescope. Unlike the aurorae of Earth and Jupiter, those of Uranus are not in line with its poles, due to its lopsided magnetic field.
+
+ In 1986, NASA's Voyager 2 interplanetary probe encountered Uranus. This flyby remains the only investigation of Uranus carried out from a short distance and no other visits are planned. Launched in 1977, Voyager 2 made its closest approach to Uranus on 24 January 1986, coming within 81,500 km (50,600 mi) of the cloudtops, before continuing its journey to Neptune. The spacecraft studied the structure and chemical composition of Uranus' atmosphere,[89] including its unique weather, caused by its axial tilt of 97.77°. It made the first detailed investigations of its five largest moons and discovered 10 new ones. It examined all nine of the system's known rings and discovered two more.[20][110][145] It also studied the magnetic field, its irregular structure, its tilt and its unique corkscrew magnetotail caused by Uranus' sideways orientation.[103]
+
+ Voyager 1 was unable to visit Uranus because investigation of Saturn's moon Titan was considered a priority. The Titan flyby took Voyager 1 out of the plane of the ecliptic, ending its planetary science mission.[146]:118
+
+ The possibility of sending the Cassini spacecraft from Saturn to Uranus was evaluated during a mission extension planning phase in 2009, but was ultimately rejected in favour of destroying it in the Saturnian atmosphere.[147] It would have taken about twenty years to get to the Uranian system after departing Saturn.[147] A Uranus orbiter and probe was recommended by the 2013–2022 Planetary Science Decadal Survey published in 2011; the proposal envisages launch during 2020–2023 and a 13-year cruise to Uranus.[148] A Uranus entry probe could use Pioneer Venus Multiprobe heritage and descend to 1–5 atmospheres.[148] The ESA evaluated a "medium-class" mission called Uranus Pathfinder.[149] A New Frontiers Uranus Orbiter has been evaluated and recommended in the study The Case for a Uranus Orbiter.[150] Such a mission is aided by the ease with which a relatively large mass can be sent to the system: over 1,500 kg with an Atlas 521 on a 12-year journey.[151] For more concepts see Proposed Uranus missions.
+
+ In astrology, the planet Uranus is the ruling planet of Aquarius. Because Uranus is cyan and is associated with electricity, the colour electric blue, which is close to cyan, is associated with the sign Aquarius[152] (see Uranus in astrology).
+
+ The chemical element uranium, discovered in 1789 by the German chemist Martin Heinrich Klaproth, was named after the newly discovered planet Uranus.[153]
+
+ "Uranus, the Magician" is a movement in Gustav Holst's orchestral suite The Planets, written between 1914 and 1916.
+
+ Operation Uranus was the successful military operation in World War II by the Red Army to take back Stalingrad and marked the turning point in the land war against the Wehrmacht.
+
+ The lines "Then felt I like some watcher of the skies/When a new planet swims into his ken", from John Keats's "On First Looking into Chapman's Homer", are a reference to Herschel's discovery of Uranus.[154]
+
+ Many references to Uranus in English language popular culture and news involve humour about one pronunciation of its name resembling that of the phrase "your anus".[155]
+
+ Already in the treatise read before the local Natural History Society on 12 March 1782, I proposed the name of Saturn's father, namely Uranos, or Uranus as it is more usual with the Latin ending, and have since had the pleasure that various astronomers and mathematicians have adopted or approved this designation in their writings or in letters to me. In my view, this choice must follow the mythology from which the ancient names of the other planets were borrowed; for in the series of those known so far, a planet named after a remarkable person or event of modern times would stand out conspicuously. Diodorus of Sicily tells the story of the Atlantes, an ancient people who inhabited one of the most fertile regions of Africa and regarded the coasts of their country as the homeland of the gods. Uranus was their first king, the founder of their civilized life and the inventor of many useful arts. At the same time he is also described as a diligent and skilful astronomer of antiquity... Even more: Uranus was the father of Saturn and of Atlas, just as the former was the father of Jupiter.
+
+ Solar System → Local Interstellar Cloud → Local Bubble → Gould Belt → Orion Arm → Milky Way → Milky Way subgroup → Local Group → Local Sheet → Virgo Supercluster → Laniakea Supercluster → Observable universe → Universe. Each arrow (→) may be read as "within" or "part of".
en/5876.html.txt ADDED
@@ -0,0 +1,96 @@
+
+
+ Urine is a liquid by-product of metabolism in humans and in many other animals. Urine flows from the kidneys through the ureters to the urinary bladder. Urination results in urine being excreted from the body through the urethra.
+
+ Cellular metabolism generates many by-products that are rich in nitrogen and must be cleared from the bloodstream, such as urea, uric acid, and creatinine. These by-products are expelled from the body during urination, which is the primary method for excreting water-soluble chemicals from the body. A urinalysis can detect nitrogenous wastes of the mammalian body.
+
+ Urine has a role in the earth's nitrogen cycle. In balanced ecosystems, urine fertilizes the soil and thus helps plants to grow. Therefore, urine can be used as a fertilizer. Some animals use it to mark their territories. Historically, aged or fermented urine (known as lant) was also used for gunpowder production, household cleaning, tanning of leather and dyeing of textiles.
+
+ Human urine and feces are collectively referred to as human waste or human excreta, and are managed via sanitation systems. Livestock urine and feces also require proper management if the livestock population density is high.
+
+ Most animals have excretory systems for elimination of soluble toxic wastes. In humans, soluble wastes are excreted primarily by the urinary system and, to a lesser extent in terms of urea, removed by perspiration.[1] The urinary system consists of the kidneys, ureters, urinary bladder, and urethra. The system produces urine by a process of filtration, reabsorption, and tubular secretion. The kidneys extract the soluble wastes from the bloodstream, as well as excess water, sugars, and a variety of other compounds. The resulting urine contains high concentrations of urea and other substances, including toxins. Urine flows from the kidneys through the ureter, bladder, and finally the urethra before passing from the body.
+
+ Research looking at the duration of urination in a range of mammal species found that nine larger species urinated for 21 ± 13 seconds irrespective of body size.[2] Smaller species, including rodents and bats, cannot produce steady streams and instead urinate with a series of drops.[2]
+
+ Average urine production in adult humans is around 1.4 L per person per day, with a normal range of 0.6 to 2.6 L per person per day, produced in around 6 to 8 urinations per day depending on state of hydration, activity level, environmental factors, weight, and the individual's health.[3] Producing too much or too little urine requires medical attention: polyuria is excessive production of urine (> 2.5 L/day), oliguria is production of < 400 mL/day, and anuria < 100 mL/day.
+
+ About 91-96% of urine consists of water.[3] Urine also contains an assortment of inorganic salts and organic compounds, including proteins, hormones, and a wide range of metabolites, varying by what is introduced into the body.
+
+ The total solids in urine are on average 59 g per person per day. Organic matter makes up between 65% and 85% of urine dry solids, with volatile solids comprising 75–85% of total solids. Urea is the largest constituent of the solids, constituting more than 50% of the total. On an elemental level, human urine contains 6.87 g/L carbon, 8.12 g/L nitrogen, 8.25 g/L oxygen, and 1.51 g/L hydrogen. The exact proportions vary with individuals and with factors such as diet and health.[3] In healthy persons, urine contains very little protein and an excess is suggestive of illness.
+
+ Urine varies in appearance, depending principally upon a body's level of hydration, as well as other factors. Normal urine is a transparent solution ranging from colorless to amber but is usually a pale yellow. In the urine of a healthy individual, the color comes primarily from the presence of urobilin. Urobilin is a final waste product resulting from the breakdown of heme from hemoglobin during the destruction of aging blood cells.
+
+ Colorless urine indicates over-hydration, generally preferable to dehydration (though it can remove essential salts from the body). Colorless urine in drug tests can suggest an attempt to avoid detection of illicit drugs in the bloodstream through over-hydration.
+
+ Dark urine due to low fluid intake.
+
+ Dark red urine due to blood (hematuria).
+
+ Dark red urine due to choluria.
+
+ Pinkish urine due to consumption of beetroots.
+
+ Green urine during long term infusion of the sedative propofol.
+
+ Sometime after leaving the body, urine may acquire a strong "fish-like" odor because of contamination with bacteria that break down urea into ammonia. This odor is not present in fresh urine of healthy individuals; its presence may be a sign of a urinary tract infection.[citation needed]
+
+ The odor of normal human urine can reflect what has been consumed or specific diseases. For example, an individual with diabetes mellitus may present a sweet urine odor. This can also be due to kidney diseases, such as kidney stones.
+
+ Eating asparagus can cause a strong odor reminiscent of the vegetable caused by the body's breakdown of asparagusic acid.[4] Likewise consumption of saffron, alcohol, coffee, tuna fish, and onion can result in telltale scents.[citation needed] Particularly spicy foods can have a similar effect, as their compounds pass through the kidneys without being fully broken down before exiting the body.[5][6]
+
+ Turbid (cloudy) urine may be a symptom of a bacterial infection, but can also be caused by crystallization of salts such as calcium phosphate.[citation needed]
+
+ The pH normally is within the range of 5.5 to 7 with an average of 6.2.[3] In persons with hyperuricosuria, acidic urine can contribute to the formation of stones of uric acid in the kidneys, ureters, or bladder.[7] Urine pH can be monitored by a physician[8] or at home.
+
+ A diet high in protein from meat and dairy, as well as alcohol consumption, can reduce urine pH, whilst potassium and organic acids, such as from diets high in fruit and vegetables, can increase the pH and make it more alkaline.[3] Some drugs can also increase urine pH, including acetazolamide, potassium citrate, and sodium bicarbonate.[citation needed]
+
+ Cranberries, popularly thought to decrease the pH of urine, have actually been shown not to acidify urine.[9] Drugs that can decrease urine pH include ammonium chloride, chlorothiazide diuretics, and methenamine mandelate.[10][11]
+
+ Human urine has a specific gravity of 1.003–1.035.[3] Any deviations may be associated with urinary disorders.
+
+ Healthy urine is not toxic.[12] However, it contains compounds eliminated by the body as undesirable, and can be irritating to skin and eyes. With suitable processing, it is possible to extract potable water from urine.[citation needed]
+
+ Urine is not sterile, not even in the bladder.[13][14] Earlier studies, with less sophisticated analytical techniques, had found that urine was sterile until it reached the urethra. In the urethra, epithelial cells lining the urethra are colonized by facultatively anaerobic Gram-negative rod and cocci bacteria.[15]
+
+ Many physicians in ancient history resorted to the inspection and examination of the urine of their patients. Hermogenes wrote about the color and other attributes of urine as indicators of certain diseases. Abdul Malik Ibn Habib of Andalusia (d. 862 AD) mentions numerous reports of urine examination throughout the Umayyad empire.[16] Diabetes mellitus got its name because the urine is plentiful and sweet. The name uroscopy refers to any visual examination of the urine, including microscopy, although it often refers to the aforementioned prescientific or protoscientific forms of urine examination. Clinical urine tests today duly note the gross color, turbidity, and odor of urine but also include urinalysis, which chemically analyzes the urine and quantifies its constituents. A culture of the urine is performed when a urinary tract infection is suspected, as bacteria in the urine are unusual otherwise. A microscopic examination of the urine may be helpful to identify organic or inorganic substrates and help in the diagnosis.
+
+ The color and volume of urine can be reliable indicators of hydration level. Clear and copious urine is generally a sign of adequate hydration. Dark urine is a sign of dehydration. The exception occurs when diuretics are consumed, in which case urine can be clear and copious and the person still be dehydrated.
+
+ Urine contains proteins and other substances that are useful for medical therapy and are ingredients in many prescription drugs (e.g., Ureacin, Urecholine, Urowave).[citation needed] Urine from postmenopausal women is rich in gonadotropins that can yield follicle stimulating hormone and luteinizing hormone for fertility therapy.[17] One such commercial product is Pergonal.[18]
+
+ Urine from pregnant women contains enough human chorionic gonadotropins for commercial extraction and purification to produce hCG medication. Pregnant mare urine is the source of estrogens, namely Premarin.[17] Urine also contains antibodies, which can be used in diagnostic antibody tests for a range of pathogens, including HIV-1.[19]
+
+ Urine can also be used to produce urokinase, which is used clinically as a thrombolytic agent.[citation needed]
+
+ Urine contains large quantities of nitrogen (mostly as urea), as well as reasonable quantities of dissolved potassium. The exact composition of nutrients in urine varies with diet; in particular, the nitrogen content of urine is related to the quantity of protein in the diet: a high-protein diet results in high urea levels in urine.
+
+ Urine is very high in nitrogen (can be over 10% in a high-protein diet), low in phosphorus (1%), and moderate in potassium (2-3%). Urine typically contributes 70% of the nitrogen and more than half of the potassium found in urban wastewater flows, while making up less than 1% of the overall volume. If urine is to be separated and collected for use as a fertiliser in agriculture, then the easiest method of doing so is with sanitation systems that utilise waterless urinals, urine-diverting dry toilets (UDDTs) or urine diversion flush toilets.[20]
+
+ Undiluted urine can chemically burn the leaves or roots of some plants, particularly if the soil moisture content is low, therefore it is usually applied diluted with water.
+
+ When diluted with water (at a 1:5 ratio for container-grown annual crops with fresh growing medium each season or a 1:8 ratio for more general use), it can be applied directly to soil as a fertilizer.[21][22] The fertilization effect of urine has been found to be comparable to that of commercial nitrogen fertilizers.[23] Concentrations of heavy metals such as lead, mercury, and cadmium, commonly found in sewage sludge, are much lower in urine.[24]
+
+ Urine can also be used safely as a source of nitrogen in carbon-rich compost.[22] The health risks of using urine as a natural source of agricultural fertilizer are generally regarded as negligible, especially when dispersed in the soil rather than on the part of the plant that is consumed. Urine can even be distributed via perforated hoses buried some 10 cm under the surface of the soil among the crop plants, thus minimizing the risk of odors, loss of nutrients, or transmission of pathogens.[25]
+
+ Given that the urea in urine breaks down into ammonia, urine has been used for cleaning. In pre-industrial times, urine was used – in the form of lant or aged urine – as a cleaning fluid.[26]
+ Urine was also used for whitening teeth in Ancient Rome.
+
+ Urine was used before the development of a chemical industry in the manufacture of gunpowder. Urine, a nitrogen source, was used to moisten straw or other organic material, which was kept moist and allowed to rot for several months to over a year. The resulting salts were washed from the heap with water, which was evaporated to allow collection of crude saltpeter crystals, which were usually refined before being used in making gunpowder.[27]
+
+ The US Army Field Manual[28] advises against drinking urine for survival. These guides explain that drinking urine tends to worsen rather than relieve dehydration due to the salts in it, and that urine should not be consumed in a survival situation, even when there is no other fluid available. In hot weather survival situations, where other sources of water are not available, soaking cloth (a shirt for example) in urine and putting it on the head can help cool the body.
+
+ During World War I, Germans experimented with numerous poisonous gases as weapons. After the first German chlorine gas attacks, Allied troops were supplied with masks of cotton pads that had been soaked in urine. It was believed that the ammonia in the pad neutralized the chlorine. These pads were held over the face until the soldiers could escape from the poisonous fumes. The Vickers machine gun, used by the British Army during World War I, required water for cooling when fired so soldiers would resort to urine if water was unavailable.[29]
+
+ Urban legend states that urine works well against jellyfish stings. This scenario has appeared many times in popular culture including in the Friends episode "The One With the Jellyfish", an early episode of Survivor, as well as the films The Real Cancun (2003), The Heartbreak Kid (2007) and The Paperboy (2012). However, at best it is ineffective, and in some cases this treatment may make the injury worse.[30][31][32]
+
+ Urine has often been used as a mordant to help prepare textiles, especially wool, for dyeing. In the Scottish Highlands and Hebrides, the process of "waulking" (fulling) woven wool is preceded by soaking in urine, preferably infantile.[33]
+
+ Ancient Romans used fermented human urine (in the form of lant) to cleanse grease stains from clothing.[34] The emperor Nero instituted a tax (Latin: vectigal urinae) on the urine industry, continued by his successor, Vespasian. The Latin saying Pecunia non olet (money doesn't smell) is attributed to Vespasian – said to have been his reply to a complaint from his son about the unpleasant nature of the tax. Vespasian's name is still attached to public urinals in France (vespasiennes), Italy (vespasiani), and Romania (vespasiene).
+
+ Alchemists spent much time trying to extract gold from urine, which led to discoveries such as white phosphorus by German alchemist Hennig Brand when distilling fermented urine in 1669. In 1773 the French chemist Hilaire Rouelle discovered the organic compound urea by boiling urine dry.
+
+ The English word urine (/ˈjuːrɪn/, /ˈjɜːrɪn/) comes from the Latin urina (-ae, f.), which is cognate with ancient words in various Indo-European languages that concern water, liquid, diving, rain, and urination. The onomatopoetic term piss was the usual word for urination before the 14th century and is now considered vulgar. Urinate was at first used mostly in medical contexts. Piss is also used in such colloquialisms as to piss off, piss poor, and the slang expression pissing down to mean heavy rain. Euphemisms and expressions used between parents and children (such as wee, pee, and many others) have long existed.
+
+ Lant is a word for aged urine, originating from the Old English word hland referring to urine in general.
en/5877.html.txt ADDED
@@ -0,0 +1,193 @@
+
+
+
+
+ Amphicynodontinae
+ Hemicyoninae
+ Ursavinae
+ Agriotheriinae
+ Ailuropodinae
+ Tremarctinae
+ Ursinae
+
+ Bears are carnivoran mammals of the family Ursidae. They are classified as caniforms, or doglike carnivorans. There are eight species in existence: Asiatic black bears (also called moon bears), brown bears (which include grizzly bears), giant pandas, North American black bears, polar bears, sloth bears, spectacled bears (also called Andean bears), and sun bears[1]. Although only eight species of bears are extant, they are widespread, appearing in a wide variety of habitats throughout the Northern Hemisphere and partially in the Southern Hemisphere. Bears are found on the continents of North America, South America, Europe, and Asia. Common characteristics of modern bears include large bodies with stocky legs, long snouts, small rounded ears, shaggy hair, plantigrade paws with five nonretractile claws, and short tails.
+
+ While the polar bear is mostly carnivorous, and the giant panda feeds almost entirely on bamboo, the remaining six species are omnivorous with varied diets. With the exception of courting individuals and mothers with their young, bears are typically solitary animals. They may be diurnal or nocturnal and have an excellent sense of smell. Despite their heavy build and awkward gait, they are adept runners, climbers, and swimmers. Bears use shelters, such as caves and logs, as their dens; most species occupy their dens during the winter for a long period of hibernation, up to 100 days.
+
+ Bears have been hunted since prehistoric times for their meat and fur; they have been used for bear-baiting and other forms of entertainment, such as being made to dance. With their powerful physical presence, they play a prominent role in the arts, mythology, and other cultural aspects of various human societies. In modern times, bears have come under pressure through encroachment on their habitats and illegal trade in bear parts, including the Asian bile bear market. The IUCN lists six bear species as vulnerable or endangered, and even least concern species, such as the brown bear, are at risk of extirpation in certain countries. The poaching and international trade of these most threatened populations are prohibited, but still ongoing.
+
+ The English word "bear" comes from Old English bera and belongs to a family of names for the bear in Germanic languages, such as Swedish björn, also used as a first name. This form is conventionally said to be related to a Proto-Indo-European word for "brown", so that "bear" would mean "the brown one".[2][3] However, Ringe notes that while this etymology is semantically plausible, a word meaning "brown" of this form cannot be found in Proto-Indo-European. He suggests instead that "bear" is from the Proto-Indo-European word *ǵʰwḗr- ~ *ǵʰwér "wild animal".[4] This terminology for the animal originated as a taboo avoidance term: proto-Germanic tribes replaced their original word for bear—arkto—with this euphemistic expression out of fear that speaking the animal's true name might cause it to appear.[5][6] According to author Ralph Keyes, this is the oldest known euphemism.[7]
+
+ Bear taxon names such as Arctoidea and Helarctos come from the ancient Greek ἄρκτος (arktos), meaning bear,[8] as do the names "arctic" and "antarctic", via the name of the constellation Ursa Major, the "Great Bear", prominent in the northern sky.[9]
+
+ Bear taxon names such as Ursidae and Ursus come from Latin Ursus/Ursa, he-bear/she-bear.[9] The female first name "Ursula", originally derived from a Christian saint's name, means "little she-bear" (diminutive of Latin ursa). In Switzerland, the male first name "Urs" is especially popular, while the name of the canton and city of Bern is derived from Bär, German for bear. The Germanic name Bernard (including Bernhardt and similar forms) means "bear-brave", "bear-hardy", or "bold bear".[10][11] The Old English name Beowulf is a kenning, "bee-wolf", for bear, in turn meaning a brave warrior.[12]
+
+ The family Ursidae is one of nine families in the suborder Caniformia, or "doglike" carnivorans, within the order Carnivora. Bears' closest living relatives are the pinnipeds, canids, and musteloids.[13] Modern bears comprise eight species in three subfamilies: Ailuropodinae (monotypic with the giant panda), Tremarctinae (monotypic with the spectacled bear), and Ursinae (containing six species divided into one to three genera, depending on the authority). Nuclear chromosome analysis shows that the karyotype of the six ursine bears is nearly identical, with each having 74 chromosomes (see Ursid hybrid), whereas the giant panda has 42 chromosomes and the spectacled bear 52. These smaller numbers can be explained by the fusing of some chromosomes, and the banding patterns on these match those of the ursine species, but differ from those of procyonids, which supports the inclusion of these two species in Ursidae rather than in Procyonidae, where they had been placed by some earlier authorities.[14]
+
+ The earliest members of Ursidae belong to the extinct subfamily Amphicynodontinae, including Parictis (late Eocene to early middle Miocene, 38–18 Mya) and the slightly younger Allocyon (early Oligocene, 34–30 Mya), both from North America. These animals looked very different from today's bears, being small and raccoon-like in overall appearance, with diets perhaps more similar to that of a badger. Parictis does not appear in Eurasia and Africa until the Miocene.[15] It is unclear whether late-Eocene ursids were also present in Eurasia, although faunal exchange across the Bering land bridge may have been possible during a major sea level low stand as early as the late Eocene (about 37 Mya) and continuing into the early Oligocene.[16] European genera morphologically very similar to Allocyon, and to the much younger American Kolponomos (about 18 Mya),[17] are known from the Oligocene, including Amphicticeps and Amphicynodon.[16] Various lines of morphological evidence link amphicynodontines with pinnipeds, as both groups were semi-aquatic, otter-like mammals.[18][19][20] In addition to the support of the pinniped–amphicynodontine clade, other morphological and some molecular evidence supports bears being the closest living relatives of pinnipeds.[21][22][23][19][24][20]
+
+ The raccoon-sized, dog-like Cephalogale is the oldest-known member of the subfamily Hemicyoninae, which first appeared during the middle Oligocene in Eurasia about 30 Mya.[16] The subfamily includes the younger genera Phoberocyon (20–15 Mya), and Plithocyon (15–7 Mya). A Cephalogale-like species gave rise to the genus Ursavus during the early Oligocene (30–28 Mya); this genus proliferated into many species in Asia and is ancestral to all living bears. Species of Ursavus subsequently entered North America, together with Amphicynodon and Cephalogale, during the early Miocene (21–18 Mya). Members of the living lineages of bears diverged from Ursavus between 15 and 20 Mya,[25][26] likely via the species Ursavus elmensis. Based on genetic and morphological data, the Ailuropodinae (pandas) were the first to diverge from other living bears about 19 Mya, although no fossils of this group have been found before about 5 Mya.[27]
+
+ The New World short-faced bears (Tremarctinae) differentiated from Ursinae following a dispersal event into North America during the mid-Miocene (about 13 Mya).[27] They invaded South America (≈2.5 or 1.2 Ma) following formation of the Isthmus of Panama.[28] Their earliest fossil representative is Plionarctos in North America (c. 10–2 Ma). This genus is probably the direct ancestor to the North American short-faced bears (genus Arctodus), the South American short-faced bears (Arctotherium), and the spectacled bears, Tremarctos, represented by both an extinct North American species (T. floridanus), and the lone surviving representative of the Tremarctinae, the South American spectacled bear (T. ornatus).[16]
+
+ The subfamily Ursinae experienced a dramatic proliferation of taxa about 5.3–4.5 Mya, coincident with major environmental changes; the first members of the genus Ursus appeared around this time.[27] The sloth bear is a modern survivor of one of the earliest lineages to diverge during this radiation event (5.3 Mya); it took on its peculiar morphology, related to its diet of termites and ants, no later than the early Pleistocene. By 3–4 Mya, the species Ursus minimus appears in the fossil record of Europe; apart from its size, it was nearly identical to today's Asian black bear. It is likely ancestral to all bears within Ursinae, perhaps aside from the sloth bear. Two lineages evolved from U. minimus: the black bears (including the sun bear, the Asian black bear, and the American black bear); and the brown bears (which include the polar bear). Modern brown bears evolved from U. minimus via Ursus etruscus, which itself is ancestral to the extinct Pleistocene cave bear. Species of Ursinae have migrated repeatedly into North America from Eurasia as early as 4 Mya during the early Pliocene.[29][30] The polar bear is the most recently evolved species and descended from a population of brown bears that became isolated in northern latitudes by glaciation 400,000 years ago.[31]
+
+ The bears form a clade within the Carnivora. The cladogram is based on molecular phylogeny of six genes in Flynn, 2005.[32]
+
+ Feliformia
+
+ Canidae
+
+ †Hemicyonidae
+
+ Ursidae
+
+ Pinnipedia
+
+ Ailuridae
+
+ Procyonidae
+
+ Mustelidae
+
+ Note that although they are called "bears" in some languages, red pandas and raccoons and their close relatives are not bears, but rather musteloids.[32]
+
+ There are two phylogenetic hypotheses on the relationships among extant and fossil bear species. One is that all species of bears are classified in seven subfamilies, as adopted here and in related articles: Amphicynodontinae, Hemicyoninae, Ursavinae, Agriotheriinae, Ailuropodinae, Tremarctinae, and Ursinae.[33][34][35][36] Below is a cladogram of the subfamilies of bears after McLellan and Reiner (1992)[33] and Qiu et al. (2014):[36]
+
+ Amphicynodontinae
+
+ Hemicyoninae
+
+ Ursavinae
+
+ Agriotheriinae
+
+ Ailuropodinae
+
+ Tremarctinae
+
+ Ursinae
+
+ The second, alternative phylogenetic hypothesis, implemented by McKenna et al. (1997), is to classify all the bear species into the superfamily Ursoidea, with Hemicyoninae and Agriotheriinae being classified in the family "Hemicyonidae".[37] Amphicynodontinae under this classification were classified as stem-pinnipeds in the superfamily Phocoidea.[37] In the McKenna and Bell classification, both bears and pinnipeds are placed in a parvorder of carnivoran mammals known as Ursida, along with the extinct bear dogs of the family Amphicyonidae.[37] Below is the cladogram based on the McKenna and Bell (1997) classification:[37]
+
+ Amphicyonidae
+
+ Amphicynodontidae
+
+ Pinnipedia
+
+ Hemicyoninae
+
+ Agriotheriinae
+
+ Ursavinae
+
+ Ailuropodinae
+
+ Tremarctinae
+
+ Ursinae
+
+ The phylogeny of extant bear species is shown in a cladogram based on complete mitochondrial DNA sequences from Yu et al. (2007).[38] The giant panda, followed by the spectacled bear, is clearly the oldest species. The relationships of the other species are not very well resolved, though the polar bear and the brown bear form a close grouping.[14]
+
+ Brown bear
+
+ Polar bear
+
+ Asian black bear
+
+ American black bear
+
+ Sun bear
+
+ Sloth bear
+
+ Spectacled bear
+
+ Giant panda
+
+ The bear family includes the most massive extant terrestrial members of the order Carnivora.[a] The polar bear is considered to be the largest extant species,[40] with adult males weighing 350–700 kilograms (770–1,500 pounds) and measuring 2.4–3 metres (7 ft 10 in–9 ft 10 in) in total length.[41] The smallest species is the sun bear, which ranges 25–65 kg (55–145 lb) in weight and 100–140 cm (40–55 in) in length.[42] Prehistoric North and South American short-faced bears were the largest species known to have lived. The latter is estimated to have weighed 1,600 kg (3,500 lb) and stood 3.4 m (11 ft 2 in) tall.[43][44] Body weight varies throughout the year in bears of temperate and arctic climates, as they build up fat reserves in the summer and autumn and lose weight during the winter.[45]
+
+ Bears are generally bulky and robust animals with short tails. They are sexually dimorphic with regard to size, with males typically being larger.[46][47] Larger species tend to show increased levels of sexual dimorphism in comparison to smaller species.[47] Relying as they do on strength rather than speed, bears have relatively short limbs with thick bones to support their bulk. The shoulder blades and the pelvis are correspondingly massive. The limbs are much straighter than those of the big cats as there is no need for them to flex in the same way due to the differences in their gait. The strong forelimbs are used to catch prey, to excavate dens, to dig out burrowing animals, to turn over rocks and logs to locate prey, and to club large creatures.[45]
+
+ Unlike most other land carnivorans, bears are plantigrade. They distribute their weight toward the hind feet, which makes them look lumbering when they walk. They are capable of bursts of speed but soon tire, and as a result mostly rely on ambush rather than the chase. Bears can stand on their hind feet and sit up straight with remarkable balance. Their front paws are flexible enough to grasp fruit and leaves. Bears' non-retractable claws are used for digging, climbing, tearing, and catching prey. The claws on the front feet are larger than those on the back and may be a hindrance when climbing trees; black bears are the most arboreal of the bears, and have the shortest claws. Pandas are unique in having a bony extension on the wrist of the front feet which acts as a thumb, and is used for gripping bamboo shoots as the animals feed.[45]
+
+ Most mammals have agouti hair, with each individual hair shaft having bands of color corresponding to two different types of melanin pigment. Bears however have a single type of melanin and the hairs have a single color throughout their length, apart from the tip which is sometimes a different shade. The coat consists of long guard hairs, which form a protective shaggy covering, and short dense hairs which form an insulating layer trapping air close to the skin. The shaggy coat helps maintain body heat during winter hibernation and is shed in the spring leaving a shorter summer coat. Polar bears have hollow, translucent guard hairs which gain heat from the sun and conduct it to the dark-colored skin below. They have a thick layer of blubber for extra insulation, and the soles of their feet have a dense pad of fur.[45] While bears tend to be uniform in color, some species may have markings on the chest or face and the giant panda has a bold black-and-white pelage.[48]
+
+ Bears have small rounded ears so as to minimize heat loss, but neither their hearing nor their sight is particularly acute. Unlike many other carnivorans they have color vision, perhaps to help them distinguish ripe nuts and fruits. They are unique among carnivorans in not having touch-sensitive whiskers on the muzzle; however, they have an excellent sense of smell, better than that of the dog, or possibly any other mammal. They use smell for signalling to each other (either to warn off rivals or detect mates) and for finding food. Smell is the principal sense used by bears to locate most of their food, and they have excellent memories which help them to relocate places where they have found food before.[45]
+
+ The skulls of bears are massive, providing anchorage for the powerful masseter and temporal jaw muscles. The canine teeth are large but mostly used for display, and the molar teeth are flat and crushing. Unlike most other members of the Carnivora, bears have relatively undeveloped carnassial teeth, and their teeth are adapted for a diet that includes a significant amount of vegetable matter.[45] Considerable variation occurs in dental formula even within a given species. This may indicate bears are still in the process of evolving from a mainly meat-eating diet to a predominantly herbivorous one. Polar bears appear to have secondarily re-evolved carnassial-like cheek teeth, as their diets have switched back towards carnivory.[49] Sloth bears lack lower central incisors and use their protrusible lips for sucking up the termites on which they feed.[45] The general dental formula for living bears is 3.1.2–4.2 / 3.1.2–4.3 (incisors.canines.premolars.molars, upper/lower).[45] The structure of the larynx of bears appears to be the most basal of the caniforms.[50] They possess air pouches connected to the pharynx which may amplify their vocalizations.[51]
+
+ Bears have a fairly simple digestive system typical for carnivorans, with a single stomach, short undifferentiated intestines and no cecum.[52][53] Even the herbivorous giant panda still has the digestive system of a carnivore, as well as carnivore-specific genes. Its ability to digest cellulose is ascribed to the microbes in its gut.[54] Bears must spend much of their time feeding in order to gain enough nutrition from foliage. The panda, in particular, spends 12–15 hours a day feeding.[55]
+
+ Extant bears are found in sixty countries primarily in the Northern Hemisphere and are concentrated in Asia, North America, and Europe. An exception is the spectacled bear; native to South America, it inhabits the Andean region.[56] The sun bear's range extends below the equator in Southeast Asia.[57] The Atlas bear, a subspecies of the brown bear, was distributed in North Africa from Morocco to Libya, but it became extinct around the 1870s.[58]
+
+ The most widespread species is the brown bear, which occurs from Western Europe eastwards through Asia to the western areas of North America. The American black bear is restricted to North America, and the polar bear is restricted to the Arctic Sea. All the remaining species of bear are Asian.[56] They occur in a range of habitats which include tropical lowland rainforest, both coniferous and broadleaf forests, prairies, steppes, montane grassland, alpine scree slopes, Arctic tundra and in the case of the polar bear, ice floes.[56][59] Bears may dig their dens in hillsides or use caves, hollow logs and dense vegetation for shelter.[59]
+
+ Brown and American black bears are generally diurnal, meaning that they are active for the most part during the day, though they may forage substantially by night.[60] Other species may be nocturnal, active at night, though female sloth bears with cubs may feed more at daytime to avoid competition from conspecifics and nocturnal predators.[61] Bears are overwhelmingly solitary and are considered to be the most asocial of all the Carnivora. The only times bears are encountered in groups are mothers with young or occasional seasonal bounties of rich food (such as salmon runs).[62][63] Fights between males can occur and older individuals may have extensive scarring, which suggests that maintaining dominance can be intense.[64] With their acute sense of smell, bears can locate carcasses from several kilometres away. They use olfaction to locate other foods, encounter mates, avoid rivals and recognize their cubs.[45]
+
+ Most bears are opportunistic omnivores and consume more plant than animal matter. They eat anything from leaves, roots, and berries to insects, carrion, fresh meat, and fish, and have digestive systems and teeth adapted to such a diet.[56] At the extremes are the almost entirely herbivorous giant panda and the mostly carnivorous polar bear. However, all bears feed on any food source that becomes seasonally available.[55] For example, Asiatic black bears in Taiwan consume large numbers of acorns when these are most common, and switch to ungulates at other times of the year.[65]
+
+ When foraging for plants, bears choose to eat them at the stage when they are at their most nutritious and digestible, typically avoiding older grasses, sedges and leaves.[53][55] Hence, in more northern temperate areas, browsing and grazing is more common early in spring and later becomes more restricted.[66] Knowing when plants are ripe for eating is a learned behavior.[55] Berries may be foraged in bushes or at the tops of trees, and bears try to maximize the number of berries consumed versus foliage.[66] In autumn, some bear species forage large amounts of naturally fermented fruits, which affects their behavior.[67] Smaller bears climb trees to obtain mast (edible reproductive parts, such as acorns).[68] Such masts can be very important to the diets of these species, and mast failures may result in long-range movements by bears looking for alternative food sources.[69] Brown bears, with their powerful digging abilities, commonly eat roots.[66] The panda's diet is over 99% bamboo,[70] of 30 different species. Its strong jaws are adapted for crushing the tough stems of these plants, though they prefer to eat the more nutritious leaves.[71][72] Bromeliads can make up to 50% of the diet of the spectacled bear, which also has strong jaws to bite them open.[73]
+
+ The sloth bear, though not as specialized as polar bears and the panda, has lost several front teeth usually seen in bears, and developed a long, suctioning tongue to feed on the ants, termites, and other burrowing insects they favour. At certain times of the year, these insects can make up 90% of their diets.[74] Some species may raid the nests of wasps and bees for the honey and immature insects, in spite of stinging from the adults.[75] Sun bears use their long tongues to lick up both insects and honey.[76] Fish are an important source of food for some species, and brown bears in particular gather in large numbers at salmon runs. Typically, a bear plunges into the water and seizes a fish with its jaws or front paws. The preferred parts to eat are the brain and eggs. Small burrowing mammals like rodents may be dug out and eaten.[77][66]
+
+ The brown bear and both species of black bears sometimes take large ungulates, such as deer and bovids, mostly the young and weak.[65][78][77] These animals may be taken by a short rush and ambush, though hiding young may be sniffed out and pounced on.[66][79] The polar bear mainly preys on seals, stalking them from the ice or breaking into their dens. They primarily eat the highly digestible blubber.[80][77] Large mammalian prey is typically killed by a bite to the head or neck, or (in the case of young) simply pinned down and mauled.[66][81] Predatory behavior in bears is typically taught to the young by the mother.[77]
+
+ Bears are prolific scavengers and kleptoparasites, stealing food caches from rodents, and carcasses from other predators.[53][82] For hibernating species, weight gain is important as it provides nourishment during winter dormancy. A brown bear can eat 41 kg (90 lb) of food and gain 2–3 kg (4–7 lb) of fat a day prior to entering its den.[83]
+
+ Bears produce a number of vocal and non-vocal sounds. Tongue-clicking, grunting or chuffing may be made in cordial situations, such as between mothers and cubs or courting couples, while moaning, huffing, snorting or blowing air is made when an individual is stressed. Barking is produced during times of alarm, excitement or to give away the animal's position. Warning sounds include jaw-clicking and lip-popping, while teeth-chatters, bellows, growls, roars and pulsing sounds are made in aggressive encounters. Cubs may squeal, bawl, bleat or scream when in distress and make motor-like humming when comfortable or nursing.[50][84][85][86][87][88]
+
+ Bears sometimes communicate with visual displays such as standing upright, which exaggerates the individual's size. The chest markings of some species may add to this intimidating display. Staring is an aggressive act and the facial markings of spectacled bears and giant pandas may help draw attention to the eyes during agonistic encounters.[48] Individuals may approach each other by stiff-legged walking with the head lowered. Dominance between bears is asserted by making a frontal orientation, showing the canine teeth, muzzle twisting and neck stretching. A subordinate may respond with a lateral orientation, by turning away and dropping the head and by sitting or lying down.[63][89]
+
+ Bears may mark territory by rubbing against trees and other objects which may serve to spread their scent. This is usually accompanied by clawing and biting the object. Bark may be spread around to draw attention to the marking post.[90] Pandas are known to mark objects with urine and a waxy substance from their anal glands.[91] Polar bears leave behind their scent in their tracks which allow individuals to keep track of one another in the vast Arctic wilderness.[92]
+
+ The mating system of bears has variously been described as a form of polygyny, promiscuity and serial monogamy.[93][94][95] During the breeding season, males take notice of females in their vicinity and females become more tolerant of males. A male bear may visit a female continuously over a period of several days or weeks, depending on the species, to test her reproductive state. During this time period, males try to prevent rivals from interacting with their mate. Courtship may be brief, although in some Asian species, courting pairs may engage in wrestling, hugging, mock fighting and vocalizing. Ovulation is induced by mating, which can last up to 30 minutes depending on the species.[94]
+
+ Gestation typically lasts 6–9 months, including delayed implantation, and litter size numbers up to four cubs.[96] Giant pandas may give birth to twins but they can only suckle one young and the other is left to die.[97] In northern living species, birth takes place during winter dormancy. Cubs are born blind and helpless with at most a thin layer of hair, relying on their mother for warmth. The milk of the female bear is rich in fat and antibodies and cubs may suckle for up to a year after they are born. By 2–3 months, cubs can follow their mother outside the den. They usually follow her on foot, but sloth bear cubs may ride on their mother's back.[96][59] Male bears play no role in raising young. Infanticide, where an adult male kills the cubs of another, has been recorded in polar bears, brown bears and American black bears but not in other species.[98] Males kill young to bring the female into estrus.[99] Cubs may flee and the mother defends them even at the cost of her life.[100][101][102]
+
+ In some species, offspring may become independent around the next spring, though some may stay until the female successfully mates again. Bears reach sexual maturity shortly after they disperse, at around 3–6 years depending on the species. Male Alaskan brown bears and polar bears may continue to grow until they are 11 years old.[96] Lifespan may also vary between species. The brown bear can live an average of 25 years.[103]
+
+ Bears of northern regions, including the American black bear and the grizzly bear, hibernate in the winter.[104][105] During hibernation, the bear's metabolism slows down, its body temperature decreases slightly, and its heart rate slows from a normal value of 55 to just 9 beats per minute.[106] Bears normally do not wake during their hibernation, and can go the entire period without eating, drinking, urinating, or defecating.[45] A fecal plug is formed in the colon, and is expelled when the bear wakes in the spring.[107] If they have stored enough body fat, their muscles remain in good condition, and their protein maintenance requirements are met from recycling waste urea.[45] Female bears give birth during the hibernation period, and are roused when doing so.[105]
+
+ Bears do not have many predators. The most important is humans; as people started cultivating crops, they increasingly came into conflict with the bears that raided them. Since the invention of firearms, people have been able to kill bears with greater ease.[108] Felids like the tiger may also prey on bears,[109][110] particularly cubs, which may also be threatened by canids.[14][95]
+
+ Bears are parasitized by eighty species of parasites, including single-celled protozoans and gastro-intestinal worms, and nematodes and flukes in their heart, liver, lungs and bloodstream. Externally they have ticks, fleas and lice. A study of American black bears found seventeen species of endoparasite including the protozoan Sarcocystis, the parasitic worm Diphyllobothrium mansonoides, and the nematodes Dirofilaria immitis, Capillaria aerophila, Physaloptera sp., Strongyloides sp. and others. Of these, D. mansonoides and adult C. aerophila were causing pathological symptoms.[111] By contrast, polar bears have few parasites; many parasitic species need a secondary, usually terrestrial, host, and the polar bear's life style is such that few alternative hosts exist in their environment. The protozoan Toxoplasma gondii has been found in polar bears, and the nematode Trichinella nativa can cause a serious infection and decline in older polar bears.[112] Bears in North America are sometimes infected by a Morbillivirus similar to the canine distemper virus.[113] They are susceptible to infectious canine hepatitis (CAV-1), with free-living black bears dying rapidly of encephalitis and hepatitis.[114]
+
+ In modern times, bears have come under pressure through encroachment on their habitats[115] and illegal trade in bear parts, including the Asian bile bear market; hunting of wild bears for bile is now banned, having largely been replaced by farming.[116] The IUCN lists six bear species as vulnerable;[117] even the two least concern species, the brown bear and the American black bear,[117] are at risk of extirpation in certain areas. In general these two species inhabit remote areas with little interaction with humans, and the main non-natural causes of mortality are hunting, trapping, road-kill and depredation.[118]
+
+ Laws have been passed in many areas of the world to protect bears from habitat destruction. Public perception of bears is often positive, as people identify with bears due to their omnivorous diets, their ability to stand on two legs, and their symbolic importance.[119] Support for bear protection is widespread, at least in more affluent societies.[120] The giant panda has become a worldwide symbol of conservation. The Sichuan Giant Panda Sanctuaries, which are home to around 30% of the wild panda population, gained a UNESCO World Heritage Site designation in 2006.[121] Where bears raid crops or attack livestock, they may come into conflict with humans.[122][123] In poorer rural regions, attitudes may be shaped more by the dangers bears pose and by the economic costs they impose on farmers and ranchers.[122]
+
+ Several bear species are dangerous to humans, especially in areas where they have become used to people; elsewhere, they generally avoid humans. Injuries caused by bears are rare, but are widely reported.[124] Bears may attack humans in response to being startled, in defense of young or food, or even for predatory reasons.[125]
+
+ Bears in captivity have for centuries been used for entertainment. They have been trained to dance,[126] and were kept for baiting in Europe at least since the 16th century. There were five bear-baiting gardens in Southwark, London at that time; archaeological remains of three of these have survived.[127] Across Europe, nomadic Romani bear handlers called Ursari lived by busking with their bears from the 12th century.[128]
+
+ Bears have been hunted for sport, food, and folk medicine. Their meat is dark and stringy, like a tough cut of beef. In Cantonese cuisine, bear paws are considered a delicacy. Bear meat should be cooked thoroughly, as it can be infected with the parasite Trichinella spiralis.[129][130]
+
+ The peoples of eastern Asia use bears' body parts and secretions (notably their gallbladders and bile) as part of traditional Chinese medicine. More than 12,000 bears are thought to be kept on farms in China, Vietnam, and South Korea for the production of bile. Trade in bear products is prohibited under CITES, but bear bile has been detected in shampoos, wine and herbal medicines sold in Canada, the United States and Australia.[131]
+
+ There is evidence of prehistoric bear worship, though this is disputed by archaeologists.[132] It is possible that bear worship existed in early Chinese and Ainu cultures.[133] The prehistoric Finns,[134] Siberian peoples[135] and more recently Koreans considered the bear as the spirit of their forefathers.[136] In many Native American cultures, the bear is a symbol of rebirth because of its hibernation and re-emergence.[137] The image of the mother bear was prevalent throughout societies in North America and Eurasia, based on the female's devotion and protection of her cubs.[138] Japanese folklore features the Onikuma, a "demon bear" that walks upright.[139] The Ainu of northern Japan, a different people from the Japanese, saw the bear instead as sacred; Hirasawa Byozan painted a scene in documentary style of a bear sacrifice in an Ainu temple, complete with offerings to the dead animal's spirit.[140]
+
+ In Korean mythology, a tiger and a bear prayed to Hwanung, the son of the Lord of Heaven, that they might become human. Upon hearing their prayers, Hwanung gave them 20 cloves of garlic and a bundle of mugwort, ordering them to eat only this sacred food and remain out of the sunlight for 100 days. The tiger gave up after about twenty days and left the cave. However, the bear persevered and was transformed into a woman. The bear and the tiger are said to represent two tribes that sought the favor of the heavenly prince.[141] The bear-woman (Ungnyeo; 웅녀/熊女) was grateful and made offerings to Hwanung. However, she lacked a husband, and soon became sad and prayed beneath a "divine birch" tree (Korean: 신단수; Hanja: 神檀樹; RR: shindansu) to be blessed with a child. Hwanung, moved by her prayers, took her for his wife and soon she gave birth to a son named Dangun Wanggeom – who was the legendary founder of Gojoseon, the first ever Korean kingdom.[142]
+
+ Artio (Dea Artio in the Gallo-Roman religion) was a Celtic bear goddess. Evidence of her worship has notably been found at Bern, itself named for the bear. Her name is derived from the Celtic word for "bear", artos.[143] In ancient Greece, the archaic cult of Artemis in bear form survived into Classical times at Brauron, where young Athenian girls passed an initiation rite as arktai, "she-bears".[144] For Artemis and one of her nymphs as a she-bear, see the myth of Callisto.
+
+ The constellations of Ursa Major and Ursa Minor, the great and little bears, are named for their supposed resemblance to bears, from the time of Ptolemy.[b][9] The nearby star Arcturus means "guardian of the bear", as if it were watching the two constellations.[146] Ursa Major has been associated with a bear for as much as 13,000 years since Paleolithic times, in the widespread Cosmic Hunt myths. These are found on both sides of the Bering land bridge, which was lost to the sea some 11,000 years ago.[147]
+
+ Pliny the Elder's Natural History (1st century AD) claims that "when first born, [bears] are shapeless masses of white flesh, a little larger than mice; their claws alone being prominent. The mother then licks them gradually into proper shape."[148] This belief was echoed by authors of bestiaries throughout the medieval period.[149]
+
+ Bears are mentioned in the Bible; the Second Book of Kings relates the story of the prophet Elisha calling on them to eat the youths who taunted him.[150] Legends of saints taming bears are common in the Alpine zone. In the arms of the bishopric of Freising, the bear is the dangerous totem animal tamed by St. Corbinian and made to carry his civilized baggage over the mountains. Bears similarly feature in the legends of St. Romedius, Saint Gall and Saint Columbanus. This recurrent motif was used by the Church as a symbol of the victory of Christianity over paganism.[151] In the Norse settlements of northern England during the 10th century, a type of "hogback" grave cover of a long narrow block of stone, with a shaped apex like the roof beam of a long house, is carved with a muzzled, thus Christianized, bear clasping each gable end, as in the church at Brompton, North Yorkshire and across the British Isles.[152]
+
+ Lāčplēsis, meaning "Bear-slayer", is a Latvian legendary hero who is said to have killed a bear by ripping its jaws apart with his bare hands. However, as revealed at the end of the long epic describing his life, Lāčplēsis' own mother had been a she-bear, and his superhuman strength resided in his bear ears. The modern Latvian military award Order of Lāčplēsis, named for the hero, is also known as The Order of the Bear-Slayer.[citation needed]
+
+ In the Hindu epic poem The Ramayana, the sloth bear or Asian black bear Jambavan is depicted as the king of bears and helps the title hero Rama defeat the epic's antagonist Ravana and reunite with his queen Sita.[153][154]
+
+ Bears are popular in children's stories, including Winnie the Pooh,[155] Paddington Bear,[156] Gentle Ben[157] and "The Brown Bear of Norway".[158] An early version of "Goldilocks and the Three Bears"[159] was published as "The Three Bears" in 1837 by Robert Southey; it has been retold many times and was illustrated in 1918 by Arthur Rackham.[160] The Hanna-Barbera character Yogi Bear has appeared in numerous comic books, animated television shows and films.[161][162] The Care Bears began as greeting cards in 1982, and were featured as toys, on clothing and in film.[163] Around the world, many children—and some adults—have teddy bears, stuffed toys in the form of bears, named after the American statesman Theodore Roosevelt when in 1902 he refused to shoot an American black bear tied to a tree.[164]
+
+ Bears, like other animals, may symbolize nations. In 1911, the British satirical magazine Punch published a cartoon about the Anglo-Russian Entente by Leonard Raven-Hill in which the British lion watches as the Russian bear sits on the tail of the Persian cat.[165] The Russian Bear has been a common national personification for Russia from the 16th century onward.[166] Smokey Bear has become a part of American culture since his introduction in 1944, with his message "Only you can prevent forest fires".[167] In the United Kingdom, the bear and staff feature on the heraldic arms of the county of Warwickshire.[168] Bears appear in the canting arms of two cities, Bern and Berlin.[169]
+
+ The International Association for Bear Research & Management, also known as the International Bear Association, and the Bear Specialist Group of the Species Survival Commission, part of the International Union for Conservation of Nature, focus on the natural history, management, and conservation of bears. Bear Trust International works for wild bears and other wildlife through four core program initiatives, namely Conservation Education, Wild Bear Research, Wild Bear Management, and Habitat Conservation.[170]
+
+ Specialty organizations for each of the eight species of bears worldwide include:
+
en/5878.html.txt ADDED
@@ -0,0 +1,193 @@
+
+ Subfamilies: Amphicynodontinae; Hemicyoninae; Ursavinae; Agriotheriinae; Ailuropodinae; Tremarctinae; Ursinae
+
+ Bears are carnivoran mammals of the family Ursidae. They are classified as caniforms, or doglike carnivorans. There are eight species in existence: Asiatic black bears (also called moon bears), brown bears (which include grizzly bears), giant pandas, North American black bears, polar bears, sloth bears, spectacled bears (also called Andean bears), and sun bears.[1] Although only eight species of bears are extant, they are widespread, appearing in a wide variety of habitats throughout the Northern Hemisphere and partially in the Southern Hemisphere. Bears are found on the continents of North America, South America, Europe, and Asia. Common characteristics of modern bears include large bodies with stocky legs, long snouts, small rounded ears, shaggy hair, plantigrade paws with five nonretractile claws, and short tails.
+
+ While the polar bear is mostly carnivorous, and the giant panda feeds almost entirely on bamboo, the remaining six species are omnivorous with varied diets. With the exception of courting individuals and mothers with their young, bears are typically solitary animals. They may be diurnal or nocturnal and have an excellent sense of smell. Despite their heavy build and awkward gait, they are adept runners, climbers, and swimmers. Bears use shelters, such as caves and logs, as their dens; most species occupy their dens during the winter for a long period of hibernation, up to 100 days.
+
+ Bears have been hunted since prehistoric times for their meat and fur; they have been used for bear-baiting and other forms of entertainment, such as being made to dance. With their powerful physical presence, they play a prominent role in the arts, mythology, and other cultural aspects of various human societies. In modern times, bears have come under pressure through encroachment on their habitats and illegal trade in bear parts, including the Asian bile bear market. The IUCN lists six bear species as vulnerable or endangered, and even least concern species, such as the brown bear, are at risk of extirpation in certain countries. The poaching and international trade of these most threatened populations are prohibited, but still ongoing.
+
+ The English word "bear" comes from Old English bera and belongs to a family of names for the bear in Germanic languages, such as Swedish björn, also used as a first name. This form is conventionally said to be related to a Proto-Indo-European word for "brown", so that "bear" would mean "the brown one".[2][3] However, Ringe notes that while this etymology is semantically plausible, a word meaning "brown" of this form cannot be found in Proto-Indo-European. He suggests instead that "bear" is from the Proto-Indo-European word *ǵʰwḗr- ~ *ǵʰwér "wild animal".[4] This terminology for the animal originated as a taboo avoidance term: proto-Germanic tribes replaced their original word for bear—arkto—with this euphemistic expression out of fear that speaking the animal's true name might cause it to appear.[5][6] According to author Ralph Keyes, this is the oldest known euphemism.[7]
+
+ Bear taxon names such as Arctoidea and Helarctos come from the ancient Greek ἄρκτος (arktos), meaning bear,[8] as do the names "arctic" and "antarctic", via the name of the constellation Ursa Major, the "Great Bear", prominent in the northern sky.[9]
+
+ Bear taxon names such as Ursidae and Ursus come from Latin Ursus/Ursa, he-bear/she-bear.[9] The female first name "Ursula", originally derived from a Christian saint's name, means "little she-bear" (diminutive of Latin ursa). In Switzerland, the male first name "Urs" is especially popular, while the name of the canton and city of Bern is derived from Bär, German for bear. The Germanic name Bernard (including Bernhardt and similar forms) means "bear-brave", "bear-hardy", or "bold bear".[10][11] The Old English name Beowulf is a kenning, "bee-wolf", for bear, in turn meaning a brave warrior.[12]
+
+ The family Ursidae is one of nine families in the suborder Caniformia, or "doglike" carnivorans, within the order Carnivora. Bears' closest living relatives are the pinnipeds, canids, and musteloids.[13] Modern bears comprise eight species in three subfamilies: Ailuropodinae (monotypic with the giant panda), Tremarctinae (monotypic with the spectacled bear), and Ursinae (containing six species divided into one to three genera, depending on the authority). Nuclear chromosome analysis shows that the karyotype of the six ursine bears is nearly identical, with each having 74 chromosomes (see Ursid hybrid), whereas the giant panda has 42 chromosomes and the spectacled bear 52. These smaller numbers can be explained by the fusing of some chromosomes, and the banding patterns on these match those of the ursine species, but differ from those of procyonids, which supports the inclusion of these two species in Ursidae rather than in Procyonidae, where they had been placed by some earlier authorities.[14]
+
+ The earliest members of Ursidae belong to the extinct subfamily Amphicynodontinae, including Parictis (late Eocene to early middle Miocene, 38–18 Mya) and the slightly younger Allocyon (early Oligocene, 34–30 Mya), both from North America. These animals looked very different from today's bears, being small and raccoon-like in overall appearance, with diets perhaps more similar to that of a badger. Parictis does not appear in Eurasia and Africa until the Miocene.[15] It is unclear whether late-Eocene ursids were also present in Eurasia, although faunal exchange across the Bering land bridge may have been possible during a major sea level low stand as early as the late Eocene (about 37 Mya) and continuing into the early Oligocene.[16] European genera morphologically very similar to Allocyon, and to the much younger American Kolponomos (about 18 Mya),[17] are known from the Oligocene, including Amphicticeps and Amphicynodon.[16] Various morphological evidence links amphicynodontines with pinnipeds, as both groups were semi-aquatic, otter-like mammals.[18][19][20] In addition to supporting the pinniped–amphicynodontine clade, other morphological and some molecular evidence supports bears being the closest living relatives to pinnipeds.[21][22][23][24][19][20]
+
+ The raccoon-sized, dog-like Cephalogale is the oldest-known member of the subfamily Hemicyoninae, which first appeared during the middle Oligocene in Eurasia about 30 Mya.[16] The subfamily includes the younger genera Phoberocyon (20–15 Mya), and Plithocyon (15–7 Mya). A Cephalogale-like species gave rise to the genus Ursavus during the early Oligocene (30–28 Mya); this genus proliferated into many species in Asia and is ancestral to all living bears. Species of Ursavus subsequently entered North America, together with Amphicynodon and Cephalogale, during the early Miocene (21–18 Mya). Members of the living lineages of bears diverged from Ursavus between 15 and 20 Mya,[25][26] likely via the species Ursavus elmensis. Based on genetic and morphological data, the Ailuropodinae (pandas) were the first to diverge from other living bears about 19 Mya, although no fossils of this group have been found before about 5 Mya.[27]
+
+ The New World short-faced bears (Tremarctinae) differentiated from Ursinae following a dispersal event into North America during the mid-Miocene (about 13 Mya).[27] They invaded South America (≈2.5 or 1.2 Ma) following formation of the Isthmus of Panama.[28] Their earliest fossil representative is Plionarctos in North America (c. 10–2 Ma). This genus is probably the direct ancestor to the North American short-faced bears (genus Arctodus), the South American short-faced bears (Arctotherium), and the spectacled bears, Tremarctos, represented by both an extinct North American species (T. floridanus), and the lone surviving representative of the Tremarctinae, the South American spectacled bear (T. ornatus).[16]
+
+ The subfamily Ursinae experienced a dramatic proliferation of taxa about 5.3–4.5 Mya, coincident with major environmental changes; the first members of the genus Ursus appeared around this time.[27] The sloth bear is a modern survivor of one of the earliest lineages to diverge during this radiation event (5.3 Mya); it took on its peculiar morphology, related to its diet of termites and ants, no later than by the early Pleistocene. By 3–4 Mya, the species Ursus minimus appears in the fossil record of Europe; apart from its size, it was nearly identical to today's Asian black bear. It is likely ancestral to all bears within Ursinae, perhaps aside from the sloth bear. Two lineages evolved from U. minimus: the black bears (including the sun bear, the Asian black bear, and the American black bear); and the brown bears (which includes the polar bear). Modern brown bears evolved from U. minimus via Ursus etruscus, which itself is ancestral to the extinct Pleistocene cave bear. Species of Ursinae have migrated repeatedly into North America from Eurasia as early as 4 Mya during the early Pliocene.[29][30] The polar bear is the most recently evolved species and descended from a population of brown bears that became isolated in northern latitudes by glaciation 400,000 years ago.[31]
+
+ The bears form a clade within the Carnivora. The cladogram below is based on the molecular phylogeny of six genes in Flynn et al. (2005).[32]
+
+ [Cladogram of Carnivora: Feliformia; Caniformia, comprising Canidae, †Hemicyonidae, Ursidae, Pinnipedia, Ailuridae, Procyonidae and Mustelidae]
+
+ Note that although they are called "bears" in some languages, red pandas and raccoons and their close relatives are not bears, but rather musteloids.[32]
+
+ There are two phylogenetic hypotheses on the relationships among extant and fossil bear species. The first, adopted here and in related articles, is that all species of bears are classified in seven subfamilies: Amphicynodontinae, Hemicyoninae, Ursavinae, Agriotheriinae, Ailuropodinae, Tremarctinae, and Ursinae.[33][34][35][36] Below is a cladogram of the subfamilies of bears after McLellan and Reiner (1992)[33] and Qiu et al. (2014):[36]
+
+ [Cladogram of ursid subfamilies, branching in order from the base: Amphicynodontinae; Hemicyoninae; Ursavinae; Agriotheriinae; Ailuropodinae; Tremarctinae; Ursinae]
+
+ The second hypothesis, implemented by McKenna et al. (1997), classifies all the bear species into the superfamily Ursoidea, with Hemicyoninae and Agriotheriinae placed in the family "Hemicyonidae".[37] Under this classification, Amphicynodontinae are treated as stem-pinnipeds in the superfamily Phocoidea.[37] In the McKenna and Bell classification, both bears and pinnipeds are placed in a parvorder of carnivoran mammals known as Ursida, along with the extinct bear dogs of the family Amphicyonidae.[37] Below is the cladogram based on the McKenna and Bell (1997) classification:[37]
+
+ [Cladogram after McKenna and Bell (1997), branching in order from the base: Amphicyonidae; Amphicynodontidae; Pinnipedia; Hemicyoninae; Agriotheriinae; Ursavinae; Ailuropodinae; Tremarctinae; Ursinae]
+
+ The phylogeny of extant bear species is shown in a cladogram based on complete mitochondrial DNA sequences from Yu et al. (2007).[38] The giant panda is clearly the oldest species, followed by the spectacled bear. The relationships of the other species are not very well resolved, though the polar bear and the brown bear form a close grouping.[14]
+
+ [Cladogram of extant bears: brown bear; polar bear; Asian black bear; American black bear; sun bear; sloth bear; spectacled bear; giant panda]
+
+ The bear family includes the most massive extant terrestrial members of the order Carnivora.[a] The polar bear is considered to be the largest extant species,[40] with adult males weighing 350–700 kilograms (770–1,500 pounds) and measuring 2.4–3 metres (7 ft 10 in–9 ft 10 in) in total length.[41] The smallest species is the sun bear, which ranges from 25 to 65 kg (55 to 145 lb) in weight and from 100 to 140 cm (40 to 55 in) in length.[42] The prehistoric North and South American short-faced bears were the largest species known to have lived; the latter is estimated to have weighed 1,600 kg (3,500 lb) and to have stood 3.4 m (11 ft 2 in) tall.[43][44] Body weight varies throughout the year in bears of temperate and arctic climates, as they build up fat reserves in the summer and autumn and lose weight during the winter.[45]
+
+ Bears are generally bulky and robust animals with short tails. They are sexually dimorphic with regard to size, with males typically being larger.[46][47] Larger species tend to show increased levels of sexual dimorphism in comparison to smaller species.[47] Relying as they do on strength rather than speed, bears have relatively short limbs with thick bones to support their bulk. The shoulder blades and the pelvis are correspondingly massive. The limbs are much straighter than those of the big cats as there is no need for them to flex in the same way due to the differences in their gait. The strong forelimbs are used to catch prey, to excavate dens, to dig out burrowing animals, to turn over rocks and logs to locate prey, and to club large creatures.[45]
+
+ Unlike most other land carnivorans, bears are plantigrade. They distribute their weight toward the hind feet, which makes them look lumbering when they walk. They are capable of bursts of speed but soon tire, and as a result mostly rely on ambush rather than the chase. Bears can stand on their hind feet and sit up straight with remarkable balance. Their front paws are flexible enough to grasp fruit and leaves. Bears' non-retractable claws are used for digging, climbing, tearing, and catching prey. The claws on the front feet are larger than those on the back and may be a hindrance when climbing trees; black bears are the most arboreal of the bears, and have the shortest claws. Pandas are unique in having a bony extension on the wrist of the front feet which acts as a thumb, and is used for gripping bamboo shoots as the animals feed.[45]
+
+ Most mammals have agouti hair, with each individual hair shaft having bands of color corresponding to two different types of melanin pigment. Bears, however, have a single type of melanin, and their hairs have a single color throughout their length, apart from the tip, which is sometimes a different shade. The coat consists of long guard hairs, which form a protective shaggy covering, and short dense hairs, which form an insulating layer trapping air close to the skin. The shaggy coat helps maintain body heat during winter hibernation and is shed in the spring, leaving a shorter summer coat. Polar bears have hollow, translucent guard hairs which gain heat from the sun and conduct it to the dark-colored skin below. They have a thick layer of blubber for extra insulation, and the soles of their feet have a dense pad of fur.[45] While bears tend to be uniform in color, some species may have markings on the chest or face, and the giant panda has a bold black-and-white pelage.[48]
+
+ Bears have small rounded ears so as to minimize heat loss, but neither their hearing nor their sight is particularly acute. Unlike many other carnivorans they have color vision, perhaps to help them distinguish ripe nuts and fruits. They are unique among carnivorans in not having touch-sensitive whiskers on the muzzle; however, they have an excellent sense of smell, better than that of the dog, or possibly any other mammal. They use smell for signalling to each other (either to warn off rivals or detect mates) and for finding food. Smell is the principal sense used by bears to locate most of their food, and they have excellent memories which help them to relocate places where they have found food before.[45]
+
+ The skulls of bears are massive, providing anchorage for the powerful masseter and temporal jaw muscles. The canine teeth are large but mostly used for display, and the molar teeth are flat and crushing. Unlike most other members of the Carnivora, bears have relatively undeveloped carnassial teeth, and their teeth are adapted for a diet that includes a significant amount of vegetable matter.[45] Considerable variation occurs in dental formula even within a given species. This may indicate bears are still in the process of evolving from a mainly meat-eating diet to a predominantly herbivorous one. Polar bears appear to have secondarily re-evolved carnassial-like cheek teeth, as their diets have switched back towards carnivory.[49] Sloth bears lack lower central incisors and use their protrusible lips for sucking up the termites on which they feed.[45] The general dental formula for living bears is: incisors 3/3, canines 1/1, premolars 2–4/2–4, molars 2/3 (upper row 3.1.2–4.2, lower row 3.1.2–4.3).[45] The structure of the larynx of bears appears to be the most basal of the caniforms.[50] They possess air pouches connected to the pharynx which may amplify their vocalizations.[51]
+
+ Bears have a fairly simple digestive system typical for carnivorans, with a single stomach, short undifferentiated intestines and no cecum.[52][53] Even the herbivorous giant panda still has the digestive system of a carnivore, as well as carnivore-specific genes. Its ability to digest cellulose is ascribed to the microbes in its gut.[54] Bears must spend much of their time feeding in order to gain enough nutrition from foliage. The panda, in particular, spends 12–15 hours a day feeding.[55]
+
+ Extant bears are found in sixty countries, primarily in the Northern Hemisphere, and are concentrated in Asia, North America, and Europe. An exception is the spectacled bear; native to South America, it inhabits the Andean region.[56] The sun bear's range extends below the equator in Southeast Asia.[57] The Atlas bear, a subspecies of the brown bear, was distributed in North Africa from Morocco to Libya, but it became extinct around the 1870s.[58]
+
+ The most widespread species is the brown bear, which occurs from Western Europe eastwards through Asia to the western areas of North America. The American black bear is restricted to North America, and the polar bear is restricted to the Arctic Sea. All the remaining species of bear are Asian.[56] They occur in a range of habitats which include tropical lowland rainforest, both coniferous and broadleaf forests, prairies, steppes, montane grassland, alpine scree slopes, Arctic tundra and, in the case of the polar bear, ice floes.[56][59] Bears may dig their dens in hillsides or use caves, hollow logs and dense vegetation for shelter.[59]
+
+ Brown and American black bears are generally diurnal, meaning that they are active for the most part during the day, though they may forage substantially by night.[60] Other species may be nocturnal, active at night, though female sloth bears with cubs may feed more in the daytime to avoid competition from conspecifics and nocturnal predators.[61] Bears are overwhelmingly solitary and are considered to be the most asocial of all the Carnivora. The only times bears are encountered in groups are mothers with young or occasional seasonal bounties of rich food (such as salmon runs).[62][63] Fights between males can occur and older individuals may have extensive scarring, which suggests that maintaining dominance can be intense.[64] With their acute sense of smell, bears can locate carcasses from several kilometres away. They use olfaction to locate other foods, encounter mates, avoid rivals and recognize their cubs.[45]
+
+ Most bears are opportunistic omnivores and consume more plant than animal matter. They eat anything from leaves, roots, and berries to insects, carrion, fresh meat, and fish, and have digestive systems and teeth adapted to such a diet.[56] At the extremes are the almost entirely herbivorous giant panda and the mostly carnivorous polar bear. However, all bears feed on any food source that becomes seasonally available.[55] For example, Asiatic black bears in Taiwan consume large numbers of acorns when these are most common, and switch to ungulates at other times of the year.[65]
+
+ When foraging for plants, bears choose to eat them at the stage when they are at their most nutritious and digestible, typically avoiding older grasses, sedges and leaves.[53][55] Hence, in more northern temperate areas, browsing and grazing are more common early in spring and later become more restricted.[66] Knowing when plants are ripe for eating is a learned behavior.[55] Berries may be foraged in bushes or at the tops of trees, and bears try to maximize the number of berries consumed versus foliage.[66] In autumn, some bear species forage on large amounts of naturally fermented fruits, which affects their behavior.[67] Smaller bears climb trees to obtain mast (edible reproductive parts, such as acorns).[68] Such masts can be very important to the diets of these species, and mast failures may result in long-range movements by bears looking for alternative food sources.[69] Brown bears, with their powerful digging abilities, commonly eat roots.[66] The panda's diet is over 99% bamboo,[70] comprising 30 different species. Its strong jaws are adapted for crushing the tough stems of these plants, though it prefers to eat the more nutritious leaves.[71][72] Bromeliads can make up to 50% of the diet of the spectacled bear, which also has strong jaws to bite them open.[73]
+
+ The sloth bear, though not as specialized as polar bears and the panda, has lost several front teeth usually seen in bears, and developed a long, suctioning tongue to feed on the ants, termites, and other burrowing insects they favour. At certain times of the year, these insects can make up 90% of their diets.[74] Some species may raid the nests of wasps and bees for the honey and immature insects, in spite of stinging from the adults.[75] Sun bears use their long tongues to lick up both insects and honey.[76] Fish are an important source of food for some species, and brown bears in particular gather in large numbers at salmon runs. Typically, a bear plunges into the water and seizes a fish with its jaws or front paws. The preferred parts to eat are the brain and eggs. Small burrowing mammals like rodents may be dug out and eaten.[77][66]
+
+ The brown bear and both species of black bears sometimes take large ungulates, such as deer and bovids, mostly the young and weak.[65][78][77] These animals may be taken by a short rush and ambush, though hiding young may be sniffed out and pounced on.[66][79] The polar bear mainly preys on seals, stalking them from the ice or breaking into their dens. It primarily eats the highly digestible blubber.[80][77] Large mammalian prey is typically killed by a bite to the head or neck, or (in the case of young) simply pinned down and mauled.[66][81] Predatory behavior in bears is typically taught to the young by the mother.[77]
+
+ Bears are prolific scavengers and kleptoparasites, stealing food caches from rodents, and carcasses from other predators.[53][82] For hibernating species, weight gain is important as it provides nourishment during winter dormancy. A brown bear can eat 41 kg (90 lb) of food and gain 2–3 kg (4–7 lb) of fat a day prior to entering its den.[83]
+
+ Bears produce a number of vocal and non-vocal sounds. Tongue-clicking, grunting or chuffing may be made in cordial situations, such as between mothers and cubs or courting couples, while moaning, huffing, snorting or blowing air is made when an individual is stressed. Barking is produced during times of alarm, excitement or to give away the animal's position. Warning sounds include jaw-clicking and lip-popping, while teeth-chatters, bellows, growls, roars and pulsing sounds are made in aggressive encounters. Cubs may squeal, bawl, bleat or scream when in distress and make motor-like humming when comfortable or nursing.[50][84][85][86][87][88]
+
+ Bears sometimes communicate with visual displays such as standing upright, which exaggerates the individual's size. The chest markings of some species may add to this intimidating display. Staring is an aggressive act and the facial markings of spectacled bears and giant pandas may help draw attention to the eyes during agonistic encounters.[48] Individuals may approach each other by stiff-legged walking with the head lowered. Dominance between bears is asserted by making a frontal orientation, showing the canine teeth, muzzle twisting and neck stretching. A subordinate may respond with a lateral orientation, by turning away and dropping the head and by sitting or lying down.[63][89]
+
+ Bears may mark territory by rubbing against trees and other objects which may serve to spread their scent. This is usually accompanied by clawing and biting the object. Bark may be spread around to draw attention to the marking post.[90] Pandas are known to mark objects with urine and a waxy substance from their anal glands.[91] Polar bears leave behind their scent in their tracks which allow individuals to keep track of one another in the vast Arctic wilderness.[92]
+
+ The mating system of bears has variously been described as a form of polygyny, promiscuity and serial monogamy.[93][94][95] During the breeding season, males take notice of females in their vicinity and females become more tolerant of males. A male bear may visit a female continuously over a period of several days or weeks, depending on the species, to test her reproductive state. During this time period, males try to prevent rivals from interacting with their mate. Courtship may be brief, although in some Asian species, courting pairs may engage in wrestling, hugging, mock fighting and vocalizing. Ovulation is induced by mating, which can last up to 30 minutes depending on the species.[94]
+
en/5879.html.txt ADDED
@@ -0,0 +1,193 @@
1
+
2
+
3
+
4
+
5
+ Amphicynodontinae
6
+ Hemicyoninae
7
+ Ursavinae
8
+ Agriotheriinae
9
+ Ailuropodinae
10
+ Tremarctinae
11
+ Ursinae
12
+
13
+ Bears are carnivoran mammals of the family Ursidae. They are classified as caniforms, or doglike carnivorans. There are eight species in existence: Asiatic black bears (also called moon bears), brown bears (which include grizzly bears), giant pandas, North American black bears, polar bears, sloth bears, spectacled bears (also called Andean bears), and sun bears[1]. Although only eight species of bears are extant, they are widespread, appearing in a wide variety of habitats throughout the Northern Hemisphere and partially in the Southern Hemisphere. Bears are found on the continents of North America, South America, Europe, and Asia. Common characteristics of modern bears include large bodies with stocky legs, long snouts, small rounded ears, shaggy hair, plantigrade paws with five nonretractile claws, and short tails.
14
+
15
+ While the polar bear is mostly carnivorous, and the giant panda feeds almost entirely on bamboo, the remaining six species are omnivorous with varied diets. With the exception of courting individuals and mothers with their young, bears are typically solitary animals. They may be diurnal or nocturnal and have an excellent sense of smell. Despite their heavy build and awkward gait, they are adept runners, climbers, and swimmers. Bears use shelters, such as caves and logs, as their dens; most species occupy their dens during the winter for a long period of hibernation, up to 100 days.
16
+
17
+ Bears have been hunted since prehistoric times for their meat and fur; they have been used for bear-baiting and other forms of entertainment, such as being made to dance. With their powerful physical presence, they play a prominent role in the arts, mythology, and other cultural aspects of various human societies. In modern times, bears have come under pressure through encroachment on their habitats and illegal trade in bear parts, including the Asian bile bear market. The IUCN lists six bear species as vulnerable or endangered, and even least concern species, such as the brown bear, are at risk of extirpation in certain countries. The poaching and international trade of these most threatened populations are prohibited, but still ongoing.
18
+
19
+ The English word "bear" comes from Old English bera and belongs to a family of names for the bear in Germanic languages, such as Swedish björn, also used as a first name. This form is conventionally said to be related to a Proto-Indo-European word for "brown", so that "bear" would mean "the brown one".[2][3] However, Ringe notes that while this etymology is semantically plausible, a word meaning "brown" of this form cannot be found in Proto-Indo-European. He suggests instead that "bear" is from the Proto-Indo-European word *ǵʰwḗr- ~ *ǵʰwér "wild animal".[4] This terminology for the animal originated as a taboo avoidance term: proto-Germanic tribes replaced their original word for bear—arkto—with this euphemistic expression out of fear that speaking the animal's true name might cause it to appear.[5][6] According to author Ralph Keyes, this is the oldest known euphemism.[7]
20
+
21
+ Bear taxon names such as Arctoidea and Helarctos come from the ancient Greek ἄρκτος (arktos), meaning bear,[8] as do the names "arctic" and "antarctic", via the name of the constellation Ursa Major, the "Great Bear", prominent in the northern sky.[9]
22
+
23
+ Bear taxon names such as Ursidae and Ursus come from Latin Ursus/Ursa, he-bear/she-bear.[9] The female first name "Ursula", originally derived from a Christian saint's name, means "little she-bear" (diminutive of Latin ursa). In Switzerland, the male first name "Urs" is especially popular, while the name of the canton and city of Bern is derived from Bär, German for bear. The Germanic name Bernard (including Bernhardt and similar forms) means "bear-brave", "bear-hardy", or "bold bear".[10][11] The Old English name Beowulf is a kenning, "bee-wolf", for bear, in turn meaning a brave warrior.[12]
24
+
25
+ The family Ursidae is one of nine families in the suborder Caniformia, or "doglike" carnivorans, within the order Carnivora. Bears' closest living relatives are the pinnipeds, canids, and musteloids.[13] Modern bears comprise eight species in three subfamilies: Ailuropodinae (monotypic with the giant panda), Tremarctinae (monotypic with the spectacled bear), and Ursinae (containing six species divided into one to three genera, depending on the authority). Nuclear chromosome analysis show that the karyotype of the six ursine bears is nearly identical, with each having 74 chromosomes (see Ursid hybrid), whereas the giant panda has 42 chromosomes and the spectacled bear 52. These smaller numbers can be explained by the fusing of some chromosomes, and the banding patterns on these match those of the ursine species, but differ from those of procyonids, which supports the inclusion of these two species in Ursidae rather than in Procyonidae, where they had been placed by some earlier authorities.[14]
26
+
27
+ The earliest members of Ursidae belong to the extinct subfamily Amphicynodontinae, including Parictis (late Eocene to early middle Miocene, 38–18 Mya) and the slightly younger Allocyon (early Oligocene, 34–30 Mya), both from North America. These animals looked very different from today's bears, being small and raccoon-like in overall appearance, with diets perhaps more similar to that of a badger. Parictis does not appear in Eurasia and Africa until the Miocene.[15] It is unclear whether late-Eocene ursids were also present in Eurasia, although faunal exchange across the Bering land bridge may have been possible during a major sea level low stand as early as the late Eocene (about 37 Mya) and continuing into the early Oligocene.[16] European genera morphologically very similar to Allocyon, and to the much younger American Kolponomos (about 18 Mya),[17] are known from the Oligocene, including Amphicticeps and Amphicynodon.[16] There has been various morphological evidence linking amphicynodontines with pinnipeds, as both groups were semi-aquatic, otter-like mammals.[18][19][20] In addition to the support of the pinniped–amphicynodontine clade, other morphological and some molecular evidence supports bears being the closet living relatives to pinnipeds.[21][22][23][19][24][19][20]
28
+
29
+ The raccoon-sized, dog-like Cephalogale is the oldest-known member of the subfamily Hemicyoninae, which first appeared during the middle Oligocene in Eurasia about 30 Mya.[16] The subfamily includes the younger genera Phoberocyon (20–15 Mya), and Plithocyon (15–7 Mya). A Cephalogale-like species gave rise to the genus Ursavus during the early Oligocene (30–28 Mya); this genus proliferated into many species in Asia and is ancestral to all living bears. Species of Ursavus subsequently entered North America, together with Amphicynodon and Cephalogale, during the early Miocene (21–18 Mya). Members of the living lineages of bears diverged from Ursavus between 15 and 20 Mya,[25][26] likely via the species Ursavus elmensis. Based on genetic and morphological data, the Ailuropodinae (pandas) were the first to diverge from other living bears about 19 Mya, although no fossils of this group have been found before about 5 Mya.[27]
30
+
31
+ The New World short-faced bears (Tremarctinae) differentiated from Ursinae following a dispersal event into North America during the mid-Miocene (about 13 Mya).[27] They invaded South America (≈2.5 or 1.2 Ma) following formation of the Isthmus of Panama.[28] Their earliest fossil representative is Plionarctos in North America (c. 10–2 Ma). This genus is probably the direct ancestor to the North American short-faced bears (genus Arctodus), the South American short-faced bears (Arctotherium), and the spectacled bears, Tremarctos, represented by both an extinct North American species (T. floridanus), and the lone surviving representative of the Tremarctinae, the South American spectacled bear (T. ornatus).[16]
32
+
33
+ The subfamily Ursinae experienced a dramatic proliferation of taxa about 5.3–4.5 Mya, coincident with major environmental changes; the first members of the genus Ursus appeared around this time.[27] The sloth bear is a modern survivor of one of the earliest lineages to diverge during this radiation event (5.3 Mya); it took on its peculiar morphology, related to its diet of termites and ants, no later than the early Pleistocene. By 3–4 Mya, the species Ursus minimus appears in the fossil record of Europe; apart from its size, it was nearly identical to today's Asian black bear. It is likely ancestral to all bears within Ursinae, perhaps aside from the sloth bear. Two lineages evolved from U. minimus: the black bears (including the sun bear, the Asian black bear, and the American black bear); and the brown bears (which include the polar bear). Modern brown bears evolved from U. minimus via Ursus etruscus, which itself is ancestral to the extinct Pleistocene cave bear. Species of Ursinae have migrated repeatedly into North America from Eurasia as early as 4 Mya during the early Pliocene.[29][30] The polar bear is the most recently evolved species and descended from a population of brown bears that became isolated in northern latitudes by glaciation 400,000 years ago.[31]
34
+
35
+ The bears form a clade within the Carnivora. The cladogram is based on molecular phylogeny of six genes in Flynn, 2005.[32]
36
+
37
+ [Cladogram of Carnivora after Flynn (2005), showing: Feliformia; Canidae; †Hemicyonidae; Ursidae; Pinnipedia; Ailuridae; Procyonidae; Mustelidae]
52
+
53
+ Note that although they are called "bears" in some languages, red pandas and raccoons and their close relatives are not bears, but rather musteloids.[32]
54
+
55
+ There are two phylogenetic hypotheses on the relationships among extant and fossil bear species. One is that all species of bears are classified in seven subfamilies, as adopted here and in related articles: Amphicynodontinae, Hemicyoninae, Ursavinae, Agriotheriinae, Ailuropodinae, Tremarctinae, and Ursinae.[33][34][35][36] Below is a cladogram of the subfamilies of bears after McLellan and Reiner (1992)[33] and Qiu et al. (2014):[36]
56
+
57
+ [Cladogram of bear subfamilies after McLellan and Reiner (1992) and Qiu et al. (2014), showing: Amphicynodontinae; Hemicyoninae; Ursavinae; Agriotheriinae; Ailuropodinae; Tremarctinae; Ursinae]
70
+
71
+ The second, alternative phylogenetic hypothesis, implemented by McKenna et al. (1997), classifies all the bear species into the superfamily Ursoidea, with Hemicyoninae and Agriotheriinae being placed in the family "Hemicyonidae".[37] Under this classification, Amphicynodontinae were classified as stem-pinnipeds in the superfamily Phocoidea.[37] In the McKenna and Bell classification, both bears and pinnipeds are placed in a parvorder of carnivoran mammals known as Ursida, along with the extinct bear dogs of the family Amphicyonidae.[37] Below is the cladogram based on the McKenna and Bell (1997) classification:[37]
72
+
73
+ [Cladogram after McKenna and Bell (1997), showing: Amphicyonidae; Amphicynodontidae; Pinnipedia; Hemicyoninae; Agriotheriinae; Ursavinae; Ailuropodinae; Tremarctinae; Ursinae]
90
+
91
+ The phylogeny of extant bear species is shown in a cladogram based on complete mitochondrial DNA sequences from Yu et al. (2007).[38] The giant panda, followed by the spectacled bear, are clearly the oldest species. The relationships of the other species are not very well resolved, though the polar bear and the brown bear form a close grouping.[14]
92
+
93
+ [Cladogram of extant bear species after Yu et al. (2007), showing: Brown bear; Polar bear; Asian black bear; American black bear; Sun bear; Sloth bear; Spectacled bear; Giant panda]
108
+
109
+ The bear family includes the most massive extant terrestrial members of the order Carnivora.[a] The polar bear is considered to be the largest extant species,[40] with adult males weighing 350–700 kilograms (770–1,500 pounds) and measuring 2.4–3 metres (7 ft 10 in–9 ft 10 in) in total length.[41] The smallest species is the sun bear, which ranges 25–65 kg (55–145 lb) in weight and 100–140 cm (40–55 in) in length.[42] Prehistoric North and South American short-faced bears were the largest species known to have lived. The latter is estimated to have weighed 1,600 kg (3,500 lb) and stood 3.4 m (11 ft 2 in) tall.[43][44] Body weight varies throughout the year in bears of temperate and arctic climates, as they build up fat reserves in the summer and autumn and lose weight during the winter.[45]
110
+
111
+ Bears are generally bulky and robust animals with short tails. They are sexually dimorphic with regard to size, with males typically being larger.[46][47] Larger species tend to show increased levels of sexual dimorphism in comparison to smaller species.[47] Relying as they do on strength rather than speed, bears have relatively short limbs with thick bones to support their bulk. The shoulder blades and the pelvis are correspondingly massive. The limbs are much straighter than those of the big cats as there is no need for them to flex in the same way due to the differences in their gait. The strong forelimbs are used to catch prey, to excavate dens, to dig out burrowing animals, to turn over rocks and logs to locate prey, and to club large creatures.[45]
112
+
113
+ Unlike most other land carnivorans, bears are plantigrade. They distribute their weight toward the hind feet, which makes them look lumbering when they walk. They are capable of bursts of speed but soon tire, and as a result mostly rely on ambush rather than the chase. Bears can stand on their hind feet and sit up straight with remarkable balance. Their front paws are flexible enough to grasp fruit and leaves. Bears' non-retractable claws are used for digging, climbing, tearing, and catching prey. The claws on the front feet are larger than those on the back and may be a hindrance when climbing trees; black bears are the most arboreal of the bears, and have the shortest claws. Pandas are unique in having a bony extension on the wrist of the front feet which acts as a thumb, and is used for gripping bamboo shoots as the animals feed.[45]
114
+
115
+ Most mammals have agouti hair, with each individual hair shaft having bands of color corresponding to two different types of melanin pigment. Bears however have a single type of melanin and the hairs have a single color throughout their length, apart from the tip which is sometimes a different shade. The coat consists of long guard hairs, which form a protective shaggy covering, and short dense hairs which form an insulating layer trapping air close to the skin. The shaggy coat helps maintain body heat during winter hibernation and is shed in the spring leaving a shorter summer coat. Polar bears have hollow, translucent guard hairs which gain heat from the sun and conduct it to the dark-colored skin below. They have a thick layer of blubber for extra insulation, and the soles of their feet have a dense pad of fur.[45] While bears tend to be uniform in color, some species may have markings on the chest or face and the giant panda has a bold black-and-white pelage.[48]
116
+
117
+ Bears have small rounded ears so as to minimize heat loss, but neither their hearing nor their sight is particularly acute. Unlike many other carnivorans they have color vision, perhaps to help them distinguish ripe nuts and fruits. They are unique among carnivorans in not having touch-sensitive whiskers on the muzzle; however, they have an excellent sense of smell, better than that of the dog, or possibly any other mammal. They use smell for signalling to each other (either to warn off rivals or detect mates) and for finding food. Smell is the principal sense used by bears to locate most of their food, and they have excellent memories, which help them to relocate places where they have found food before.[45]
118
+
119
+ The skulls of bears are massive, providing anchorage for the powerful masseter and temporal jaw muscles. The canine teeth are large but mostly used for display, and the molar teeth are flat and crushing. Unlike most other members of the Carnivora, bears have relatively undeveloped carnassial teeth, and their teeth are adapted for a diet that includes a significant amount of vegetable matter.[45] Considerable variation occurs in dental formula even within a given species. This may indicate bears are still in the process of evolving from a mainly meat-eating diet to a predominantly herbivorous one. Polar bears appear to have secondarily re-evolved carnassial-like cheek teeth, as their diets have switched back towards carnivory.[49] Sloth bears lack lower central incisors and use their protrusible lips for sucking up the termites on which they feed.[45] The general dental formula for living bears is:
120
+ 3.1.2–4.2 / 3.1.2–4.3 (upper / lower).[45] The structure of the larynx of bears appears to be the most basal of the caniforms.[50] They possess air pouches connected to the pharynx which may amplify their vocalizations.[51]
121
+
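+ As a quick arithmetic check (an illustration, not from the source; the notation lists incisors.canines.premolars.molars for one side of the upper and the lower jaw), the maximum tooth count implied by this formula is
+ $2 \times \left[ (3+1+4+2) + (3+1+4+3) \right] = 2 \times (10 + 11) = 42.$
+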
122
+ Bears have a fairly simple digestive system typical for carnivorans, with a single stomach, short undifferentiated intestines and no cecum.[52][53] Even the herbivorous giant panda still has the digestive system of a carnivore, as well as carnivore-specific genes. Its ability to digest cellulose is ascribed to the microbes in its gut.[54] Bears must spend much of their time feeding in order to gain enough nutrition from foliage. The panda, in particular, spends 12–15 hours a day feeding.[55]
123
+
124
+ Extant bears are found in sixty countries primarily in the Northern Hemisphere and are concentrated in Asia, North America, and Europe. An exception is the spectacled bear; native to South America, it inhabits the Andean region.[56] The sun bear's range extends below the equator in Southeast Asia.[57] The Atlas bear, a subspecies of the brown bear, was distributed in North Africa from Morocco to Libya, but it became extinct around the 1870s.[58]
125
+
126
+ The most widespread species is the brown bear, which occurs from Western Europe eastwards through Asia to the western areas of North America. The American black bear is restricted to North America, and the polar bear is restricted to the Arctic Sea. All the remaining species of bear are Asian.[56] They occur in a range of habitats which include tropical lowland rainforest, both coniferous and broadleaf forests, prairies, steppes, montane grassland, alpine scree slopes, Arctic tundra and in the case of the polar bear, ice floes.[56][59] Bears may dig their dens in hillsides or use caves, hollow logs and dense vegetation for shelter.[59]
127
+
128
+ Brown and American black bears are generally diurnal, meaning that they are active for the most part during the day, though they may forage substantially by night.[60] Other species may be nocturnal, active at night, though female sloth bears with cubs may feed more during the day to avoid competition from conspecifics and nocturnal predators.[61] Bears are overwhelmingly solitary and are considered to be the most asocial of all the Carnivora. The only times bears are encountered in groups are when mothers are with their young, or at occasional seasonal bounties of rich food (such as salmon runs).[62][63] Fights between males can occur and older individuals may have extensive scarring, which suggests that maintaining dominance can be intense.[64] With their acute sense of smell, bears can locate carcasses from several kilometres away. They use olfaction to locate other foods, encounter mates, avoid rivals and recognize their cubs.[45]
129
+
130
+ Most bears are opportunistic omnivores and consume more plant than animal matter. They eat anything from leaves, roots, and berries to insects, carrion, fresh meat, and fish, and have digestive systems and teeth adapted to such a diet.[56] At the extremes are the almost entirely herbivorous giant panda and the mostly carnivorous polar bear. However, all bears feed on any food source that becomes seasonally available.[55] For example, Asiatic black bears in Taiwan consume large numbers of acorns when these are most common, and switch to ungulates at other times of the year.[65]
131
+
132
+ When foraging for plants, bears choose to eat them at the stage when they are at their most nutritious and digestible, typically avoiding older grasses, sedges and leaves.[53][55] Hence, in more northern temperate areas, browsing and grazing is more common early in spring and later becomes more restricted.[66] Knowing when plants are ripe for eating is a learned behavior.[55] Berries may be foraged in bushes or at the tops of trees, and bears try to maximize the number of berries consumed versus foliage.[66] In autumn, some bear species consume large amounts of naturally fermented fruits, which affects their behavior.[67] Smaller bears climb trees to obtain mast (edible reproductive parts, such as acorns).[68] Such masts can be very important to the diets of these species, and mast failures may result in long-range movements by bears looking for alternative food sources.[69] Brown bears, with their powerful digging abilities, commonly eat roots.[66] The panda's diet is over 99% bamboo,[70] comprising 30 different species. Its strong jaws are adapted for crushing the tough stems of these plants, though they prefer to eat the more nutritious leaves.[71][72] Bromeliads can make up to 50% of the diet of the spectacled bear, which also has strong jaws to bite them open.[73]
133
+
134
+ The sloth bear, though not as specialized as polar bears and the panda, has lost several front teeth usually seen in bears, and developed a long, suctioning tongue to feed on the ants, termites, and other burrowing insects they favour. At certain times of the year, these insects can make up 90% of their diets.[74] Some species may raid the nests of wasps and bees for the honey and immature insects, in spite of stinging from the adults.[75] Sun bears use their long tongues to lick up both insects and honey.[76] Fish are an important source of food for some species, and brown bears in particular gather in large numbers at salmon runs. Typically, a bear plunges into the water and seizes a fish with its jaws or front paws. The preferred parts to eat are the brain and eggs. Small burrowing mammals like rodents may be dug out and eaten.[77][66]
135
+
136
+ The brown bear and both species of black bears sometimes take large ungulates, such as deer and bovids, mostly the young and weak.[65][78][77] These animals may be taken by a short rush and ambush, though hiding young may be sniffed out and pounced on.[66][79] The polar bear mainly preys on seals, stalking them from the ice or breaking into their dens. They primarily eat the highly digestible blubber.[80][77] Large mammalian prey is typically killed by a bite to the head or neck, or (in the case of young) simply pinned down and mauled.[66][81] Predatory behavior in bears is typically taught to the young by the mother.[77]
137
+
138
+ Bears are prolific scavengers and kleptoparasites, stealing food caches from rodents, and carcasses from other predators.[53][82] For hibernating species, weight gain is important as it provides nourishment during winter dormancy. A brown bear can eat 41 kg (90 lb) of food and gain 2–3 kg (4–7 lb) of fat a day prior to entering its den.[83]
139
+
140
+ Bears produce a number of vocal and non-vocal sounds. Tongue-clicking, grunting or chuffing may be made in cordial situations, such as between mothers and cubs or courting couples, while moaning, huffing, snorting or blowing air is made when an individual is stressed. Barking is produced during times of alarm, excitement or to give away the animal's position. Warning sounds include jaw-clicking and lip-popping, while teeth-chatters, bellows, growls, roars and pulsing sounds are made in aggressive encounters. Cubs may squeal, bawl, bleat or scream when in distress and make motor-like humming when comfortable or nursing.[50][84][85][86][87][88]
141
+
142
+ Bears sometimes communicate with visual displays such as standing upright, which exaggerates the individual's size. The chest markings of some species may add to this intimidating display. Staring is an aggressive act and the facial markings of spectacled bears and giant pandas may help draw attention to the eyes during agonistic encounters.[48] Individuals may approach each other by stiff-legged walking with the head lowered. Dominance between bears is asserted by making a frontal orientation, showing the canine teeth, muzzle twisting and neck stretching. A subordinate may respond with a lateral orientation, by turning away and dropping the head and by sitting or lying down.[63][89]
143
+
144
+ Bears may mark territory by rubbing against trees and other objects which may serve to spread their scent. This is usually accompanied by clawing and biting the object. Bark may be spread around to draw attention to the marking post.[90] Pandas are known to mark objects with urine and a waxy substance from their anal glands.[91] Polar bears leave behind their scent in their tracks which allow individuals to keep track of one another in the vast Arctic wilderness.[92]
145
+
146
+ The mating system of bears has variously been described as a form of polygyny, promiscuity and serial monogamy.[93][94][95] During the breeding season, males take notice of females in their vicinity and females become more tolerant of males. A male bear may visit a female continuously over a period of several days or weeks, depending on the species, to test her reproductive state. During this time period, males try to prevent rivals from interacting with their mate. Courtship may be brief, although in some Asian species, courting pairs may engage in wrestling, hugging, mock fighting and vocalizing. Ovulation is induced by mating, which can last up to 30 minutes depending on the species.[94]
147
+
148
+ Gestation typically lasts 6–9 months, including delayed implantation, and litter size numbers up to four cubs.[96] Giant pandas may give birth to twins but they can only suckle one young and the other is left to die.[97] In northern living species, birth takes place during winter dormancy. Cubs are born blind and helpless with at most a thin layer of hair, relying on their mother for warmth. The milk of the female bear is rich in fat and antibodies and cubs may suckle for up to a year after they are born. By 2–3 months, cubs can follow their mother outside the den. They usually follow her on foot, but sloth bear cubs may ride on their mother's back.[96][59] Male bears play no role in raising young. Infanticide, where an adult male kills the cubs of another, has been recorded in polar bears, brown bears and American black bears but not in other species.[98] Males kill young to bring the female into estrus.[99] Cubs may flee and the mother defends them even at the cost of her life.[100][101][102]
149
+
150
+ In some species, offspring may become independent around the next spring, though some may stay until the female successfully mates again. Bears reach sexual maturity shortly after they disperse, at around 3–6 years depending on the species. Male Alaskan brown bears and polar bears may continue to grow until they are 11 years old.[96] Lifespan may also vary between species. The brown bear can live an average of 25 years.[103]
151
+
152
+ Bears of northern regions, including the American black bear and the grizzly bear, hibernate in the winter.[104][105] During hibernation, the bear's metabolism slows down, its body temperature decreases slightly, and its heart rate slows from a normal value of 55 to just 9 beats per minute.[106] Bears normally do not wake during their hibernation, and can go the entire period without eating, drinking, urinating, or defecating.[45] A fecal plug is formed in the colon, and is expelled when the bear wakes in the spring.[107] If they have stored enough body fat, their muscles remain in good condition, and their protein maintenance requirements are met from recycling waste urea.[45] Female bears give birth during the hibernation period, and are roused when doing so.[105]
153
+
154
+ Bears do not have many predators. The most important are humans; as humans started cultivating crops, they increasingly came into conflict with the bears that raided them. Since the invention of firearms, people have been able to kill bears with greater ease.[108] Felids like the tiger may also prey on bears,[109][110] particularly cubs, which may also be threatened by canids.[14][95]
155
+
156
+ Bears are parasitized by eighty species of parasites, including single-celled protozoans and gastro-intestinal worms, and nematodes and flukes in their heart, liver, lungs and bloodstream. Externally they have ticks, fleas and lice. A study of American black bears found seventeen species of endoparasite including the protozoan Sarcocystis, the parasitic worm Diphyllobothrium mansonoides, and the nematodes Dirofilaria immitis, Capillaria aerophila, Physaloptera sp., Strongyloides sp. and others. Of these, D. mansonoides and adult C. aerophila caused pathological symptoms.[111] By contrast, polar bears have few parasites; many parasitic species need a secondary, usually terrestrial, host, and the polar bear's lifestyle is such that few alternative hosts exist in their environment. The protozoan Toxoplasma gondii has been found in polar bears, and the nematode Trichinella nativa can cause a serious infection and decline in older polar bears.[112] Bears in North America are sometimes infected by a Morbillivirus similar to the canine distemper virus.[113] They are susceptible to infectious canine hepatitis (CAV-1), with free-living black bears dying rapidly of encephalitis and hepatitis.[114]
157
+
158
+ In modern times, bears have come under pressure through encroachment on their habitats[115] and illegal trade in bear parts, including the Asian bile bear market, though hunting is now banned, largely replaced by farming.[116] The IUCN lists six bear species as vulnerable;[117] even the two least concern species, the brown bear and the American black bear,[117] are at risk of extirpation in certain areas. In general these two species inhabit remote areas with little interaction with humans, and the main non-natural causes of mortality are hunting, trapping, road-kill and depredation.[118]
159
+
160
+ Laws have been passed in many areas of the world to protect bears from habitat destruction. Public perception of bears is often positive, as people identify with bears due to their omnivorous diets, their ability to stand on two legs, and their symbolic importance.[119] Support for bear protection is widespread, at least in more affluent societies.[120] The giant panda has become a worldwide symbol of conservation. The Sichuan Giant Panda Sanctuaries, which are home to around 30% of the wild panda population, gained a UNESCO World Heritage Site designation in 2006.[121] Where bears raid crops or attack livestock, they may come into conflict with humans.[122][123] In poorer rural regions, attitudes may be more shaped by the dangers posed by bears, and the economic costs they cause to farmers and ranchers.[122]
161
+
162
+ Several bear species are dangerous to humans, especially in areas where they have become used to people; elsewhere, they generally avoid humans. Injuries caused by bears are rare, but are widely reported.[124] Bears may attack humans in response to being startled, in defense of young or food, or even for predatory reasons.[125]
163
+
164
+ Bears in captivity have for centuries been used for entertainment. They have been trained to dance,[126] and were kept for baiting in Europe at least since the 16th century. There were five bear-baiting gardens in Southwark, London at that time; archaeological remains of three of these have survived.[127] Across Europe, nomadic Romani bear handlers called Ursari lived by busking with their bears from the 12th century.[128]
165
+
166
+ Bears have been hunted for sport, food, and folk medicine. Their meat is dark and stringy, like a tough cut of beef. In Cantonese cuisine, bear paws are considered a delicacy. Bear meat should be cooked thoroughly, as it can be infected with the parasite Trichinella spiralis.[129][130]
167
+
168
+ The peoples of eastern Asia use bears' body parts and secretions (notably their gallbladders and bile) as part of traditional Chinese medicine. More than 12,000 bears are thought to be kept on farms in China, Vietnam, and South Korea for the production of bile. Trade in bear products is prohibited under CITES, but bear bile has been detected in shampoos, wine and herbal medicines sold in Canada, the United States and Australia.[131]
169
+
170
+ There is evidence of prehistoric bear worship, though this is disputed by archaeologists.[132] It is possible that bear worship existed in early Chinese and Ainu cultures.[133] The prehistoric Finns,[134] Siberian peoples[135] and more recently Koreans considered the bear as the spirit of their forefathers.[136] In many Native American cultures, the bear is a symbol of rebirth because of its hibernation and re-emergence.[137] The image of the mother bear was prevalent throughout societies in North America and Eurasia, based on the female's devotion and protection of her cubs.[138] Japanese folklore features the Onikuma, a "demon bear" that walks upright.[139] The Ainu of northern Japan, a different people from the Japanese, saw the bear instead as sacred; Hirasawa Byozan painted a scene in documentary style of a bear sacrifice in an Ainu temple, complete with offerings to the dead animal's spirit.[140]
171
+
172
+ In Korean mythology, a tiger and a bear prayed to Hwanung, the son of the Lord of Heaven, that they might become human. Upon hearing their prayers, Hwanung gave them 20 cloves of garlic and a bundle of mugwort, ordering them to eat only this sacred food and remain out of the sunlight for 100 days. The tiger gave up after about twenty days and left the cave. However, the bear persevered and was transformed into a woman. The bear and the tiger are said to represent two tribes that sought the favor of the heavenly prince.[141] The bear-woman (Ungnyeo; 웅녀/熊女) was grateful and made offerings to Hwanung. However, she lacked a husband, and soon became sad and prayed beneath a "divine birch" tree (Korean: 신단수; Hanja: 神檀樹; RR: shindansu) to be blessed with a child. Hwanung, moved by her prayers, took her for his wife and soon she gave birth to a son named Dangun Wanggeom – who was the legendary founder of Gojoseon, the first ever Korean kingdom.[142]
173
+
174
+ Artio (Dea Artio in the Gallo-Roman religion) was a Celtic bear goddess. Evidence of her worship has notably been found at Bern, itself named for the bear. Her name is derived from the Celtic word for "bear", artos.[143] In ancient Greece, the archaic cult of Artemis in bear form survived into Classical times at Brauron, where young Athenian girls passed an initiation rite as arktai, "she-bears".[144] For Artemis and one of her nymphs as a she-bear, see the myth of Callisto.
175
+
176
+ The constellations of Ursa Major and Ursa Minor, the great and little bears, are named for their supposed resemblance to bears, from the time of Ptolemy.[b][9] The nearby star Arcturus means "guardian of the bear", as if it were watching the two constellations.[146] Ursa Major has been associated with a bear for as much as 13,000 years since Paleolithic times, in the widespread Cosmic Hunt myths. These are found on both sides of the Bering land bridge, which was lost to the sea some 11,000 years ago.[147]
177
+
178
+ Pliny the Elder's Natural History (1st century AD) claims that "when first born, [bears] are shapeless masses of white flesh, a little larger than mice; their claws alone being prominent. The mother then licks them gradually into proper shape."[148] This belief was echoed by authors of bestiaries throughout the medieval period.[149]
179
+
180
+ Bears are mentioned in the Bible; the Second Book of Kings relates the story of the prophet Elisha calling on them to eat the youths who taunted him.[150] Legends of saints taming bears are common in the Alpine zone. In the arms of the bishopric of Freising, the bear is the dangerous totem animal tamed by St. Corbinian and made to carry his civilized baggage over the mountains. Bears similarly feature in the legends of St. Romedius, Saint Gall and Saint Columbanus. This recurrent motif was used by the Church as a symbol of the victory of Christianity over paganism.[151] In the Norse settlements of northern England during the 10th century, a type of "hogback" grave cover of a long narrow block of stone, with a shaped apex like the roof beam of a long house, is carved with a muzzled, thus Christianized, bear clasping each gable end, as in the church at Brompton, North Yorkshire and across the British Isles.[152]
181
+
182
+ Lāčplēsis, meaning "Bear-slayer", is a Latvian legendary hero who is said to have killed a bear by ripping its jaws apart with his bare hands. However, as revealed at the end of the long epic describing his life, Lāčplēsis' own mother had been a she-bear, and his superhuman strength resided in his bear ears. The modern Latvian military award Order of Lāčplēsis, named after the hero, is also known as The Order of the Bear-Slayer.[citation needed]
183
+
184
+ In the Hindu epic poem The Ramayana, the sloth bear or Asian black bear Jambavan is depicted as the king of bears and helps the title hero Rama defeat the epic's antagonist Ravana and reunite with his queen Sita.[153][154]
185
+
186
+ Bears are popular in children's stories, including Winnie the Pooh,[155] Paddington Bear,[156] Gentle Ben[157] and "The Brown Bear of Norway".[158] An early version of "Goldilocks and the Three Bears"[159] was published as "The Three Bears" in 1837 by Robert Southey; it has been retold many times, and was illustrated in 1918 by Arthur Rackham.[160] The Hanna-Barbera character Yogi Bear has appeared in numerous comic books, animated television shows and films.[161][162] The Care Bears began as greeting cards in 1982, and were featured as toys, on clothing and in film.[163] Around the world, many children—and some adults—have teddy bears, stuffed toys in the form of bears, named after the American statesman Theodore Roosevelt, who in 1902 refused to shoot an American black bear tied to a tree.[164]
187
+
188
+ Bears, like other animals, may symbolize nations. In 1911, the British satirical magazine Punch published a cartoon about the Anglo-Russian Entente by Leonard Raven-Hill in which the British lion watches as the Russian bear sits on the tail of the Persian cat.[165] The Russian Bear has been a common national personification for Russia from the 16th century onward.[166] Smokey Bear has become a part of American culture since his introduction in 1944, with his message "Only you can prevent forest fires".[167] In the United Kingdom, the bear and staff feature on the heraldic arms of the county of Warwickshire.[168] Bears appear in the canting arms of two cities, Bern and Berlin.[169]
189
+
190
+ The International Association for Bear Research & Management, also known as the International Bear Association, and the Bear Specialist Group of the Species Survival Commission, a part of the International Union for Conservation of Nature focus on the natural history, management, and conservation of bears. Bear Trust International works for wild bears and other wildlife through four core program initiatives, namely Conservation Education, Wild Bear Research, Wild Bear Management, and Habitat Conservation.[170]
191
+
192
+ Specialty organizations for each of the eight species of bears worldwide include:
193
+
en/588.html.txt ADDED
@@ -0,0 +1,128 @@
1
+ A sailing ship is a sea-going vessel that uses sails mounted on masts to harness the power of wind and propel the vessel. There is a variety of sail plans that propel sailing ships, employing square-rigged or fore-and-aft sails. Some ships carry square sails on each mast—the brig and full-rigged ship, said to be "ship-rigged" when there are three or more masts.[1] Others carry only fore-and-aft sails on each mast—schooners. Still others employ a combination of square and fore-and-aft sails, including the barque, barquentine, and brigantine.[2] Sailing ships developed differently in Asia, which produced the junk and dhow—vessels that incorporated innovations absent in European ships of the time. Technically, in the Age of Sail a ship was a specific type of vessel, with a bowsprit and three masts, each of which consists of a lower, top, and topgallant mast.[3]
2
+
3
+ Sailing ships with predominantly square rigs became prevalent during the Age of Discovery, when they crossed oceans between continents and around the world. Most sailing ships were merchantmen, but the Age of Sail also saw the development of large fleets of well-armed warships. The Age of Sail waned with the advent of steam-powered ships, which did not depend upon a favourable wind.
4
+
5
+ The first sailing vessels were developed for use in the South China Sea and also independently in lands abutting the western Mediterranean Sea by the 2nd millennium BCE. In Asia, early vessels were equipped with crab claw sails—with a spar on the top and bottom of the sail, arranged fore-and-aft when needed. In the Mediterranean, vessels were powered downwind by square sails that supplemented propulsion by oars. Sailing ships evolved differently in the South China Sea and in the Indian Ocean, where fore-and-aft sail plans were developed several centuries into the Common Era. By the time of the Age of Discovery—starting in the 15th century—square-rigged, multi-masted vessels were the norm and were guided by navigation techniques that included the magnetic compass and making sightings of the sun and stars that allowed transoceanic voyages. The Age of Sail reached its peak in the 18th and 19th centuries with large, heavily armed battleships and merchant sailing ships that were able to travel at speeds that exceeded those of the newly introduced steamships. Ultimately, the steamships' independence from the wind and their ability to take shorter routes, passing through the Suez and Panama Canals,[4] made sailing ships uneconomical.
6
+
7
+ Initially sails provided supplementary power to ships with oars, because the sails were not designed to sail to windward. In Asia sailing ships were equipped with fore-and-aft rigs that made sailing to windward possible. Later square-rigged vessels too were able to sail to windward, and became the standard for European ships through the Age of Discovery when vessels ventured around Africa to India, to the Americas and around the world. Later during this period—in the late 15th century—"ship-rigged" vessels with multiple square sails on each mast appeared and became common for sailing ships.[5]
8
+
9
+ Sailing ships in the Mediterranean region date to 3000 BCE, when Egyptians used a bipod mast to support a single square sail on a vessel that mainly relied on multiple paddlers. Later the mast became a single pole, and paddles were supplanted with oars. Such vessels plied both the Nile and the Mediterranean coast. The inhabitants of Crete had sailing vessels by 1200 BCE. Between 1000 BCE and 400 CE, the Phoenicians, Greeks and Romans developed ships that were powered by square sails, sometimes with oars to supplement their capabilities. Such vessels used a steering oar as a rudder to control direction. Fore-and-aft sails started appearing on sailing vessels in the Mediterranean ca. 1200 CE,[5] an influence of rigs introduced in Asia and the Indian Ocean.[6]
10
+
11
+ Starting in the 8th century in Denmark, Vikings were building clinker-constructed longships propelled by a single, square sail, when practical, and oars, when necessary.[7] A related craft was the knarr, which plied the Baltic and North Seas, using primarily sail power.[8] The windward edge of the sail was stiffened with a beitass, a pole that fitted into the lower corner of the sail, when sailing close to the wind.[9]
12
+
13
+ The first sea-going sailing ships in Asia were developed by the Austronesian peoples from what is now Southern China and Taiwan. Their invention of catamarans, outriggers, and crab claw sails enabled the Austronesian Expansion at around 3000 to 1500 BCE. From Taiwan, they rapidly colonized the islands of Maritime Southeast Asia, then sailed further onwards to Micronesia, Island Melanesia, Polynesia, and Madagascar. Austronesian rigs were distinctive in that they had spars supporting both the upper and lower edges of the sails (and sometimes in between), in contrast to western rigs which only had a spar on the upper edge.[10][11][12]
14
+
15
+ Early Austronesian sailors also influenced the development of sailing technologies in Sri Lanka and Southern India through the Austronesian maritime trade network of the Indian Ocean, the precursor to the spice trade route and the maritime silk road.[13]
16
+ Austronesians established the first maritime trade network with ocean-going merchant ships which plied the early trade routes from Southeast Asia from at least 1500 BCE. They reached as far northeast as Japan and as far west as eastern Africa. They colonized Madagascar and their trade routes were the precursors to the spice trade route and the maritime silk road. They mainly facilitated trade of goods from China and Japan to South India, Sri Lanka, the Persian Gulf, and the Red Sea.[13][14][15] An important invention in this region was the fore-and-aft rig, which made sailing against the wind possible. Such sails may have originated at least several hundred years BCE.[16] Balance lugsails and tanja sails also originated from this region. Vessels with such sails explored and traded along the western coast of Africa. This type of sail propagated to the west and influenced Arab lateen designs.[16]
17
+
18
+ Large Austronesian trading ships with as many as four sails were recorded by Han Dynasty (206 BCE – 220 CE) scholars as the kunlun bo (崑崙舶, lit. "ship of the Kunlun people"). They were booked by Chinese Buddhist pilgrims for passage to Southern India and Sri Lanka.[17] Bas reliefs of Sailendran and Srivijayan large merchant ships with various configurations of tanja sails and outriggers are also found in the Borobudur temple, dating back to the 8th century CE.[18][19]
19
+
20
+ By the 10th century CE, the Song Dynasty started building the first Chinese junks, which were adopted from the design of the Javanese djongs. The junk rig, in particular, became associated with Chinese coast-hugging trading ships.[20][21] Junks in China were constructed from teak with pegs and nails; they featured watertight compartments and acquired center-mounted tillers and rudders.[22] These ships became the basis for the development of Chinese warships during the Mongol Yuan Dynasty, and were used in the unsuccessful Mongol invasions of Japan and Java.[23][24]
21
+
22
+ The Ming dynasty (1368–1644) saw the use of junks as long-distance trading vessels. Chinese Admiral Zheng He reportedly sailed to India, Arabia, and southern Africa on a trade and diplomatic mission.[25][26] Literary lore suggests that his largest vessel, the "Treasure Ship", measured 400 feet (120 m) in length and 150 feet (46 m) in width, whereas modern research suggests that it was unlikely to have exceeded 200 feet (61 m) in length.[27]
23
+
24
+ The Indian Ocean was the venue for increasing trade between India and Africa between 1200 and 1500. The vessels employed would be classified as dhows with lateen rigs. During this interval such vessels grew in capacity from 100 to 400 tonnes. Dhows were often built with teak planks from India and Southeast Asia, sewn together with coconut husk fiber—no nails were employed. This period also saw the implementation of center-mounted rudders, controlled with a tiller.[28]
25
+
26
+ Technological advancements that were important to the Age of Discovery in the 15th century were the adoption of the magnetic compass and advances in ship design.
27
+
28
+ The compass was an addition to the ancient method of navigation based on sightings of the sun and stars. The compass was invented by the Chinese. It had been used for navigation in China by the 11th century and was adopted by the Arab traders in the Indian Ocean. The compass spread to Europe by the late 12th or early 13th century.[6] Use of the compass for navigation in the Indian Ocean was first mentioned in 1232.[20] The Europeans used a "dry" compass, with a needle on a pivot. The compass card was also a European invention.[20]
29
+
30
+ At the beginning of the 15th century, the carrack was the most capable European ocean-going ship. It was carvel-built and large enough to be stable in heavy seas. It was capable of carrying a large cargo and the provisions needed for very long voyages. Later carracks were square-rigged on the foremast and mainmast and lateen-rigged on the mizzenmast. They had a high rounded stern with large aftcastle, forecastle and bowsprit at the stem. As the predecessor of the galleon, the carrack was one of the most influential ship designs in history; while ships became more specialized in the following centuries, the basic design remained unchanged throughout this period.[29]
31
+
32
+ Ships of this era were only able to sail approximately 70° into the wind and tacked from one side to the other across the wind with difficulty, which made it challenging to avoid shipwrecks when near shores or shoals during storms.[30] Nonetheless, such vessels reached India around Africa with Vasco da Gama,[31] the Americas with Christopher Columbus,[32] and around the world under Ferdinand Magellan.[33]
33
+
34
+ Sailing ships became longer and faster over time, with ship-rigged vessels carrying taller masts with more square sails. Other sail plans emerged, as well, that had just fore-and-aft sails (schooners), or a mixture of the two (brigantines, barques and barquentines).[5]
35
+
36
+ Cannon were present in the 14th century, but did not become common at sea until they could be reloaded quickly enough to be reused in the same battle. The size of a ship required to carry a large number of cannon made oar-based propulsion impossible, and warships came to rely primarily on sails. The sailing man-of-war emerged during the 16th century.[34]
37
+
38
+ By the middle of the 17th century, warships were carrying increasing numbers of cannon on three decks. Naval tactics evolved to bring each ship's firepower to bear in a line of battle—coordinated movements of a fleet of warships to engage a line of ships in the enemy fleet.[35] Carracks with a single cannon deck evolved into galleons with as many as two full cannon decks,[36] which evolved into the man-of-war, and further into the ship of the line—designed for engaging the enemy in a line of battle. One side of a ship was expected to shoot broadsides against an enemy ship at close range.[35] In the 18th century, the small and fast frigate and sloop-of-war—too small to stand in the line of battle—evolved to convoy trade, scout for enemy ships and blockade enemy coasts.[37]
39
+
40
+ Fast schooners and brigantines, called Baltimore clippers, were used for blockade running and as privateers in the early 1800s. These evolved into three-masted, usually ship-rigged sailing vessels, optimized for speed with fine lines that lessened their cargo capacity.[38] Sea trade with China became important in that period; it favored a combination of speed and cargo volume, which was met by building vessels with long waterlines, fine bows and tall masts, generously equipped with sails for maximum speed. Masts were as high as 100 feet (30 m), and such ships were able to achieve speeds of 19 knots (35 km/h), allowing for passages of up to 465 nautical miles (861 km) per 24 hours. Clippers yielded to bulkier, slower vessels, which became economically competitive in the mid 19th century.[39]
41
+
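+ As a quick arithmetic check on those figures (an illustration, not from the source): at a sustained average of 19 knots, the distance covered in 24 hours is
+ $19\ \text{kn} \times 24\ \text{h} = 456\ \text{nmi} \approx 845\ \text{km},$
+ so a day's run of 465 nautical miles corresponds to an average speed of $465/24 \approx 19.4$ knots.
+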
42
+ During the Age of Sail, ships' hulls were under frequent attack by shipworm (which affected the structural strength of timbers), and barnacles and various marine weeds (which affected ship speed).[40] Since before the common era, a variety of coatings had been applied to hulls to counter this effect, including pitch, wax, tar, oil, sulfur and arsenic.[41] In the mid 18th century copper sheathing was developed as a defense against such bottom fouling.[42] After coping with problems of galvanic deterioration of metal hull fasteners, sacrificial anodes were developed, which were designed to corrode, instead of the hull fasteners.[43] The practice became widespread on naval vessels, starting in the late 18th century,[44] and on merchant vessels, starting in the early 19th century, until the advent of iron and steel hulls.[43]
43
+
44
+ Iron-hulled sailing ships, often referred to as "windjammers" or "tall ships",[45] represented the final evolution of sailing ships at the end of the Age of Sail. They were built to carry bulk cargo for long distances in the nineteenth and early twentieth centuries. They were the largest of merchant sailing ships, with three to five masts and square sails, as well as other sail plans. They carried lumber, guano, grain or ore between continents. Later examples had steel hulls. Iron-hulled sailing ships were mainly built from the 1870s to 1900, when steamships began to outpace them economically, due to their ability to keep a schedule regardless of the wind. Steel hulls also replaced iron hulls at around the same time. Even into the twentieth century, sailing ships could hold their own on transoceanic voyages such as Australia to Europe, since they required neither bunkerage for coal nor fresh water for steam, and they were faster than the early steamers, which usually could barely make 8 knots (15 km/h).[46]
45
+
46
+ The four-masted, iron-hulled ship, introduced in 1875 with the full-rigged County of Peebles, represented an especially efficient configuration that prolonged the competitiveness of sail against steam in the later part of the 19th century.[47] The largest example of such ships was the five-masted, full-rigged ship Preussen, which had a load capacity of 7,800 tonnes.[48] Ships transitioned from all sail to all steam-power from the mid 19th century into the 20th.[49] The five-masted Preussen used steam power for driving the winches, hoists and pumps, and could be manned by a crew of 48, compared with the four-masted Kruzenshtern, which has a crew of 257.[50]
47
+
48
+ Coastal top-sail schooners, whose sail handling could be managed by a crew as small as two, became an efficient way to carry bulk cargo, since only the fore-sails required tending while tacking, and steam-driven machinery was often available for raising the sails and the anchor.[51]
49
+
50
+ In the 20th century, the DynaRig allowed central, automated control of all sails in a manner that obviates the need for sending crew aloft. This was developed in the 1960s in Germany as a low-carbon footprint propulsion alternative for commercial ships. The rig automatically sets and reefs sails; its mast rotates to align the sails with the wind. The sailing yachts Maltese Falcon and Black Pearl employ the rig.[50][52]
51
+
52
+ Every sailing ship has a sail plan that is adapted to the purpose of the vessel and the ability of the crew; each has a hull, rigging and masts to hold up the sails that use the wind to power the ship; the masts are supported by standing rigging and the sails are adjusted by running rigging.
53
+
54
+ Hull shapes for sailing ships evolved from being relatively short and blunt to being longer and finer at the bow.[5] By the nineteenth century, ships were built with reference to a half model, made from wooden layers that were pinned together. Each layer could be scaled to the actual size of the vessel in order to lay out its hull structure, starting with the keel and leading to the ship's ribs. The ribs were pieced together from curved elements, called futtocks and tied in place until the installation of the planking. Typically, planking was caulked with a tar-impregnated yarn made from manila or hemp to make the planking watertight.[53] Starting in the mid-19th century, iron was used first for the hull structure and later for its watertight sheathing.[54]
55
+
56
+ Until the mid-19th century all vessels' masts were made of wood formed from a single or several pieces of timber which typically consisted of the trunk of a conifer tree. From the 16th century, vessels were often built of a size requiring masts taller and thicker than could be made from single tree trunks. On these larger vessels, to achieve the required height, the masts were built from up to four sections (also called masts), known in order of rising height above the decks as the lower, top, topgallant and royal masts.[56] Giving the lower sections sufficient thickness necessitated building them up from separate pieces of wood. Such a section was known as a made mast, as opposed to sections formed from single pieces of timber, which were known as pole masts.[57] Starting in the second half of the 19th century, masts were made of iron or steel.[5]
57
+
58
+ For ships with square sails the principal masts, given their standard names in bow to stern (front to back) order, are the foremast, the mainmast and the mizzenmast.
59
+
60
+ Each rig is configured in a sail plan, appropriate to the size of the sailing craft. Both square-rigged and fore-and-aft rigged vessels have been built with a wide range of configurations for single and multiple masts.[60]
61
+
62
+ Types of sail that can be part of a sail plan can be broadly classed by how they are attached to the sailing craft: to a yard (square sails), to a stay (jibs and staysails), or to a mast, boom or gaff (other fore-and-aft sails).
63
+
64
+ Sailing ships have standing rigging to support the masts and running rigging to raise the sails and control their ability to draw power from the wind. The running rigging has three main roles, to support the sail structure, to shape the sail and to adjust its angle to the wind. Square-rigged vessels require more controlling lines than fore-and-aft rigged ones.
65
+
66
+ Sailing ships prior to the mid-19th century used wood masts with hemp-fiber standing rigging. As rigs became taller by the end of the 19th century, masts relied more heavily on successive spars, stepped one atop the other to form the whole, from bottom to top: the lower mast, top mast, and topgallant mast. This construction relied heavily on support by a complex array of stays and shrouds. Each stay in either the fore-and-aft or athwartships direction had a corresponding one in the opposite direction providing counter-tension. Fore-and-aft, the system of tensioning started with the stays that were anchored in front of each mast. Shrouds were tensioned by pairs of deadeyes, circular blocks that had the large-diameter line run around them, whilst multiple holes allowed smaller line—lanyard—to pass multiple times between the two and thereby allow tensioning of the shroud. After the mid-19th century square-rigged vessels were equipped with steel-cable standing rigging.[61]
67
+
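+ As an illustrative sketch of the mechanics (not from the source): a lanyard rove n times between a pair of deadeyes acts like a simple tackle, so, ignoring the considerable friction at the deadeye holes, the tension delivered to the shroud is roughly
+ $T_{\text{shroud}} \approx n \, T_{\text{haul}}.$
+ With, say, six parts of lanyard, a 50-pound pull on the free end could in principle set up around 300 pounds of tension in the shroud, which suggests how a small crew could tension heavy standing rigging.
+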
68
+ Halyards, used to raise and lower the yards, are the primary supporting lines.[62] In addition, square rigs have lines that lift the sail or the yard from which it is suspended, including brails, buntlines, lifts and leechlines. Bowlines and clew lines shape a square sail.[55] To adjust the angle of the sail to the wind, braces are used to control the fore-and-aft angle of a square sail's yard, while sheets attach to the clews (bottom corners) of a sail to control its angle to the wind. Sheets run aft, whereas tacks are used to haul the clew of a square sail forward.[55]
69
+
70
+ The crew of a sailing ship is divided between officers (the captain and his subordinates) and seamen or ordinary hands. An able seaman was expected to "hand, reef, and steer" (handle the lines and other equipment, reef the sails, and steer the vessel).[63] The crew is organized to stand watch—the oversight of the ship for a period—typically four hours each.[64] Richard Henry Dana Jr. and Herman Melville each had personal experience aboard sailing vessels of the 19th century.
71
+
72
+ Dana described the crew of the merchant brig, Pilgrim, as comprising six to eight common sailors, four specialist crew members (the steward, cook, carpenter and sailmaker), and three officers: the captain, the first mate and the second mate. He contrasted the American crew complement with that of other nations on whose similarly sized ships the crew might number as many as 30.[65] Larger merchant vessels had larger crews.[66]
73
+
74
+ Melville described the crew complement of the frigate warship, United States, as about 500—including officers, enlisted personnel and 50 Marines. The crew was divided into the starboard and larboard watches. It was also divided into three tops, bands of crew responsible for setting sails on the three masts; a band of sheet-anchor men, whose station was forward and whose job was to tend the fore-yard, anchors and forward sails; the after guard, who were stationed aft and tended the mainsail and spanker and manned the various sheets, controlling the position of the sails; the waisters, who were stationed midships and had menial duties attending the livestock, etc.; and the holders, who occupied the lower decks of the vessel and were responsible for the inner workings of the ship. He additionally named such positions as boatswains, gunners, carpenters, coopers, painters, tinkers, stewards, cooks and various boys as functions on the man-of-war.[67] Ships of the line in the 18th–19th centuries had a complement as high as 850.[68]
75
+
76
+ Handling a sailing ship requires management of its sails to power—but not overpower—the ship and navigation to guide the ship, both at sea and in and out of harbors.
77
+
78
+ Key elements of sailing a ship are setting the right amount of sail to generate maximum power without endangering the ship, adjusting the sails to the wind direction on the course sailed, and changing tack to bring the wind from one side of the vessel to the other.
79
+
80
+ A sailing ship's crew manages the running rigging of each square sail. Each sail has two sheets that control its lower corners, two braces that control the angle of the yard, two clewlines, four buntlines and two reef tackles. All these lines must be manned as the sail is deployed and the yard raised. The crew uses a halyard to raise each yard and its sail, then pulls or eases the braces to set the angle of the yard across the vessel, and pulls on the sheets to haul the lower corners of the sail, the clews, out to the yard below. Under way, the crew manages the reef tackles, which haul the leeches up, and the reef points, to manage the size and angle of the sail; bowlines pull the leading edge of the sail (leech) taut when close hauled. When furling the sail, the crew uses clewlines to haul up the clews and buntlines to haul up the middle of the sail; when the yard is lowered, lifts support it.[69]
+
+ In strong winds, the crew is directed to reduce the number of sails or, alternatively, the amount of each given sail that is presented to the wind by a process called reefing. To shorten sail, seamen on the yardarm pull on reef tackles, attached to reef cringles, to haul the sail up and secure it with lines called reef points.[70] Dana spoke of the hardships of sail handling during high wind and rain or with ice covering the ship and its rigging.[65]
+
+ Sailing vessels cannot sail directly into the wind. Instead, square-riggers must sail a course that is between 60° and 70° away from the wind direction,[71] and fore-and-aft vessels can typically sail no closer than 45°.[72] To reach a destination, sailing vessels may have to change course and allow the wind to come from the opposite side in a procedure called tacking, in which the wind crosses the bow as the vessel turns.
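+
+ To make the geometry concrete, here is a minimal Python sketch (an illustration, not from the source) that checks whether a desired course lies outside a rig's no-go zone, taking the closest sailing angles quoted above (about 60° for a square rig, 45° for a fore-and-aft rig) as given.
+
+ def angle_off_wind(course_deg: float, wind_from_deg: float) -> float:
+     """Smallest angle between the desired course and the direction
+     the wind blows from, in degrees (0..180)."""
+     diff = abs(course_deg - wind_from_deg) % 360
+     return min(diff, 360 - diff)
+
+ def reachable_on_one_tack(course_deg, wind_from_deg, closest_deg):
+     """True if the course lies outside the rig's no-go zone."""
+     return angle_off_wind(course_deg, wind_from_deg) >= closest_deg
+
+ # Wind from due north (000°): a fore-and-aft rig (45°) can lay a
+ # course of 050°, but a square-rigger (60°) must tack toward it.
+ print(reachable_on_one_tack(50, 0, 45))  # True
+ print(reachable_on_one_tack(50, 0, 60))  # False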
+
+ When tacking, a square-rigged vessel's sails must be presented squarely to the wind, and thus impede forward motion, as they are swung around through the wind by the running rigging: braces adjust the fore-and-aft angle of each yard around the mast, and sheets attached to the clews (bottom corners) of each sail control its angle to the wind.[55] The procedure is to turn the vessel into the wind with the hind-most fore-and-aft sail (the spanker) hauled to windward to help turn the ship through the eye of the wind. Once the ship has come about, all the sails are adjusted to align properly with the new tack. Because square-rigger masts are more strongly braced from behind than from ahead, tacking is a dangerous procedure in strong winds; the ship may lose forward momentum (become caught in stays) and the rigging may fail from the wind coming from ahead. The ship may also lose momentum at wind speeds of less than 10 knots (19 km/h).[71] Under these conditions, the choice may be to wear ship: to turn the ship away from the wind and through roughly 240° onto the next tack, since a turn that ends 60° off the wind on the other side passes through the downwind direction (360° − 2 × 60° = 240°).[73][74]
+
+ A fore-and-aft rig permits the wind to flow past the sail as the craft heads through the eye of the wind. Most rigs pivot around a stay or the mast while this occurs. For a jib, the old leeward sheet is released as the craft heads through the wind and the old windward sheet is tightened as the new leeward sheet to allow the sail to draw wind. Mainsails are often self-tending and slide on a traveler to the opposite side.[75] On certain rigs, such as lateens[76] and luggers,[77] the sail may be partially lowered to bring it to the opposite side.
+
+ Early navigational techniques employed observations of the sun, stars, waves and birdlife. In the 15th century, the Chinese were using the magnetic compass to identify direction of travel. By the 16th century in Europe, navigational instruments included the quadrant, the astrolabe, cross staff, dividers and compass. By the time of the Age of Exploration these tools were being used in combination with a log to measure speed, a lead line to measure soundings, and a lookout to identify potential hazards. Later, an accurate marine sextant became standard for determining latitude and an accurate chronometer became standard for determining longitude.[78][79]
+
+ Passage planning begins with laying out a route along a chart, which comprises a series of courses between fixes—verifiable locations that confirm the actual track of the ship on the ocean. Once a course has been set, the person at the helm attempts to follow its direction with reference to the compass. The navigator notes the time and speed at each fix to estimate the arrival at the next fix, a process called dead reckoning. For coast-wise navigation, sightings from known landmarks or navigational aids may be used to establish fixes, a process called pilotage.[1] At sea, sailing ships used celestial navigation on a daily schedule.[80]
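+
+ The arithmetic of dead reckoning is simple enough to sketch. The Python below (an illustration, not a historical method statement) advances the last fix along the course steered by distance = speed × time, using the flat-earth approximation that one nautical mile equals one minute of latitude; this is adequate over a single watch at ship speeds.
+
+ import math
+
+ def dead_reckon(lat_deg, lon_deg, course_deg, speed_knots, hours):
+     """Estimate position from the last fix, course steered (degrees
+     true) and speed. One nautical mile = one minute of latitude;
+     longitude minutes are stretched by 1/cos(latitude)."""
+     distance_nm = speed_knots * hours
+     d_lat = distance_nm * math.cos(math.radians(course_deg)) / 60.0
+     d_lon = (distance_nm * math.sin(math.radians(course_deg))
+              / (60.0 * math.cos(math.radians(lat_deg))))
+     return lat_deg + d_lat, lon_deg + d_lon
+
+ # Four hours at 8 knots on a course of 045° true from 40°N 30°W:
+ print(dead_reckon(40.0, -30.0, 45.0, 8.0, 4.0))
+ # -> approximately (40.38, -29.51)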
+
+ Fixes were taken with a marine sextant, which measures the altitude (angular distance) of the celestial body above the horizon.[78]
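+
+ As one worked example of turning a sextant altitude into latitude, consider the classic noon sight: when the sun crosses the observer's meridian, latitude follows from the corrected altitude and the sun's tabulated declination. The sketch below handles only the simple case of an observer north of the sun, and the numbers are invented for illustration.
+
+ def latitude_from_noon_sight(altitude_deg, declination_deg):
+     """Latitude from a meridian (noon) altitude of the sun, for an
+     observer north of the sun (sun bearing due south at local noon):
+     latitude = (90 - altitude) + declination. Declination is
+     positive north; the altitude is assumed already corrected for
+     index error, dip and refraction."""
+     zenith_distance = 90.0 - altitude_deg
+     return zenith_distance + declination_deg
+
+ # A corrected noon altitude of 62.5° with the sun at declination
+ # +20° puts the observer at latitude 47.5°N.
+ print(latitude_from_noon_sight(62.5, 20.0))  # -> 47.5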
+
+ Given the limited maneuverability of sailing ships, entering and leaving harbor in the presence of a tide was difficult without coordinating arrivals with a flooding tide and departures with an ebbing tide. In harbor, a sailing ship stood at anchor, unless it needed to be loaded or unloaded at a dock or pier, in which case it had to be towed to shore by its boats or by other vessels.[81]
+
+ These are examples of sailing ships; some terms have multiple meanings:
+
+ Defined by general configuration
+
+
+
+ Defined by sail plan
+
+ All masts have fore-and-aft sails
+
+ All masts have square sails
+
+ Mixture of masts with square sails and masts with fore-and-aft sails
+
+ Military vessels
+
+
+
+ Götheborg, a sailing replica of a Swedish East Indiaman
+
+ Cutty Sark, the only surviving clipper ship[82]
+
+ USS Constitution with sails on display in 2012, the oldest commissioned warship still afloat[83]
+
+ French steam-powered, screw-propelled battleship, Napoléon
+
+ INS Tarangini, a three-masted barque in service with the Indian Navy
+
+ Maltese Falcon with all-rotating, stayless DynaRig
+
+ Media related to Sailing ships at Wikimedia Commons
en/5880.html.txt ADDED
@@ -0,0 +1,329 @@
+
+
+
+
+ The Soviet Union,[d] officially the Union of Soviet Socialist Republics[e] (USSR),[f] was a federal socialist state in Northern Eurasia that existed from 1922 to 1991. Nominally a union of multiple national Soviet republics,[g] in practice its government and economy were highly centralized until its final years. It was a one-party state governed by the Communist Party, with Moscow as its capital in its largest republic, the Russian SFSR. Other major urban centers were Leningrad, Kiev, Minsk, Tashkent, Alma-Ata and Novosibirsk. It was the largest country in the world by surface area,[18] spanning over 10,000 kilometers (6,200 mi) east to west across 11 time zones and over 7,200 kilometers (4,500 mi) north to south. Its territory included much of Eastern Europe as well as part of Northern Europe and all of Northern and Central Asia. It had five climate zones: tundra, taiga, steppes, desert, and mountains. Its diverse population was collectively known as Soviet people.
+
+ The Soviet Union had its roots in the October Revolution of 1917, when the Bolsheviks, headed by Vladimir Lenin, overthrew the Provisional Government that had earlier replaced the monarchy. They established the Russian Soviet Republic[h], beginning a civil war between the Bolshevik Red Army and many anti-Bolshevik forces across the former Empire, among whom the largest faction was the White Guard. The devastating effects of the war and of Bolshevik policies led to 5 million deaths during the 1921–1922 famine in the region of Povolzhye. The Red Army expanded and helped local Communists take power, establishing soviets and repressing their political opponents and rebellious peasants through the policies of Red Terror and War Communism. In 1922, the Communists were victorious, forming the Soviet Union with the unification of the Russian, Transcaucasian, Ukrainian and Byelorussian republics. The New Economic Policy (NEP), which was introduced by Lenin, led to a partial return of a free market and private property, resulting in a period of economic recovery.
+
+ Following Lenin's death in 1924, a troika and a brief power struggle, Joseph Stalin came to power in the mid-1920s. Stalin suppressed all political opposition to his rule inside the Communist Party, committed the state ideology to Marxism–Leninism, and ended the NEP, initiating a centrally planned economy. As a result, the country underwent a period of rapid industrialization and forced collectivization, which led to significant economic growth but also created the man-made famine of 1932–1933 and expanded the Gulag labour camp system, founded back in 1918. Stalin also fomented political paranoia and conducted the Great Purge to remove his opponents from the Party through the mass arbitrary arrest of many people (military leaders, Communist Party members and ordinary citizens alike) who were then sent to correctional labor camps or sentenced to death.
+
+ On 23 August 1939, after unsuccessful efforts to form an anti-fascist alliance with Western powers, the Soviets signed a non-aggression agreement with Nazi Germany. After the start of World War II, the formally neutral Soviets invaded and annexed territories of several Eastern European states, including eastern Poland and the Baltic states. In June 1941 the Germans invaded, opening the largest and bloodiest theater of war in history. Soviet war casualties accounted for the highest proportion of any combatant in the conflict, the cost of acquiring the upper hand over Axis forces at intense battles such as Stalingrad. Soviet forces eventually captured Berlin and won World War II in Europe on 9 May 1945. The territory overtaken by the Red Army became satellite states of the Eastern Bloc. The Cold War emerged in 1947 as a result of post-war Soviet dominance in Eastern Europe, where the Eastern Bloc confronted the Western Bloc, which united in the North Atlantic Treaty Organization in 1949.
+
+ Following Stalin's death in 1953, a period known as de-Stalinization and Khrushchev Thaw occurred under the leadership of Nikita Khrushchev. The country developed rapidly, as millions of peasants were moved into industrialized cities. The USSR took an early lead in the Space Race with the first ever satellite and the first human spaceflight. In the 1970s, there was a brief détente of relations with the United States, but tensions resumed when the Soviet Union deployed troops in Afghanistan in 1979. The war drained economic resources and was matched by an escalation of American military aid to Mujahideen fighters.
+
+ In the mid-1980s, the last Soviet leader, Mikhail Gorbachev, sought to further reform and liberalize the economy through his policies of glasnost and perestroika. The goal was to preserve the Communist Party while reversing economic stagnation. The Cold War ended during his tenure, and in 1989 Soviet satellite countries in Eastern Europe overthrew their respective communist regimes. This led to the rise of strong nationalist and separatist movements inside the USSR as well. Central authorities initiated a referendum—boycotted by the Baltic republics, Armenia, Georgia, and Moldova—which resulted in the majority of participating citizens voting in favor of preserving the Union as a renewed federation. In August 1991, a coup d'état was attempted by Communist Party hardliners. It failed, with Russian President Boris Yeltsin playing a high-profile role in facing down the coup, resulting in the banning of the Communist Party. On 25 December 1991, Gorbachev resigned and the remaining twelve constituent republics emerged from the dissolution of the Soviet Union as independent post-Soviet states. The Russian Federation (formerly the Russian SFSR) assumed the Soviet Union's rights and obligations and is recognized as its continued legal personality.
+
+ The USSR produced many significant social and technological achievements and innovations of the 20th century, including the world's first ministry of health, first human-made satellite, the first humans in space and the first probe to land on another planet, Venus. The country had the world's second-largest economy and the largest standing military in the world.[19][20][21] The USSR was recognized as one of the five nuclear weapons states. It was a founding permanent member of the United Nations Security Council as well as a member of the Organization for Security and Co-operation in Europe, the World Federation of Trade Unions and the leading member of the Council for Mutual Economic Assistance and the Warsaw Pact.
+
+ The word soviet is derived from the Russian word sovet (Russian: совет), meaning "council", "assembly", "advice", "harmony", "concord",[note 1] ultimately deriving from the proto-Slavic verbal stem of vět-iti ("to inform"), related to Slavic věst ("news"), English "wise", the root in "ad-vis-or" (which came to English through French), or the Dutch weten ("to know"; cf. wetenschap meaning "science"). The word sovietnik means "councillor".[22]
+
+ Some organizations in Russian history were called council (Russian: совет). In the Russian Empire, the State Council which functioned from 1810 to 1917 was referred to as a Council of Ministers after the revolt of 1905.[22]
+
+ During the Georgian Affair, Vladimir Lenin saw in Joseph Stalin and his supporters an expression of Great Russian ethnic chauvinism, and called for these nation-states to join Russia as semi-independent parts of a greater union, which he initially named the Union of Soviet Republics of Europe and Asia (Russian: Союз Советских Республик Европы и Азии, tr. Soyuz Sovetskikh Respublik Evropy i Azii).[23] Stalin initially resisted the proposal but ultimately accepted it, although, with Lenin's agreement, he changed the name to the Union of Soviet Socialist Republics (USSR); all the republics nonetheless began as Socialist Soviet Republics and did not change to the other word order until 1936. In addition, in the national languages of several republics, the native word for council or conciliar was changed only quite late to an adaptation of the Russian soviet, and in others, e.g. Ukraine, never.
+
+ СССР (in the Latin alphabet: SSSR) is the abbreviation of USSR in Russian, written in the Cyrillic alphabet. The Soviets used the Cyrillic abbreviation so frequently that audiences worldwide became familiar with its meaning. Notably, both distinctive Cyrillic letters used have orthographically similar (but transliterally distinct) letters in Latin alphabets. Because of widespread familiarity with the Cyrillic abbreviation, Latin-alphabet users in particular almost always use the orthographically similar Latin letters C and P (as opposed to the transliterated Latin letters S and R) when rendering the USSR's native abbreviation.
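+
+ As an aside, the distinction is visible at the character level: the Cyrillic letters are separate Unicode code points from their Latin lookalikes, as this illustrative Python snippet (an addition for clarity, not part of the source text) shows.
+
+ # Cyrillic "СССР" looks like Latin "CCCP" but uses different code
+ # points, so the visually similar strings compare unequal.
+ cyrillic = "\u0421\u0421\u0421\u0420"  # С, С, С, Р (Es, Es, Es, Er)
+ latin = "CCCP"                         # Latin C, C, C, P
+ print(cyrillic == latin)                # False
+ print([hex(ord(c)) for c in cyrillic])  # ['0x421', '0x421', '0x421', '0x420']
+ print([hex(ord(c)) for c in latin])     # ['0x43', '0x43', '0x43', '0x50']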
+
+ After СССР, the most common short form names for the Soviet state in Russian were Советский Союз (transliteration: Sovetskiy Soyuz), which literally means Soviet Union, and also Союз ССР (transliteration: Soyuz SSR), which, after compensating for grammatical differences, essentially translates to Union of SSRs in English.
+
+ In the English language media, the state was referred to as the Soviet Union or the USSR. In other European languages, the locally translated short forms and abbreviations are usually used such as Union soviétique and URSS in French, or Sowjetunion and UdSSR in German. In the English-speaking world, the Soviet Union was also informally called Russia and its citizens Russians,[24] although that was technically incorrect since Russia was only one of the republics.[25] Such misapplications of the linguistic equivalents to the term Russia and its derivatives were frequent in other languages as well.
+
+ With an area of 22,402,200 square kilometres (8,649,500 sq mi), the Soviet Union was the world's largest country, a status that is retained by the Russian Federation.[26] Covering a sixth of Earth's land surface, its size was comparable to that of North America.[27] Two other successor states rank high as well: Kazakhstan is among the top 10 countries by land area, and Ukraine is the largest country entirely in Europe. The European portion accounted for a quarter of the country's area and was the cultural and economic center. The eastern part in Asia extended to the Pacific Ocean to the east and Afghanistan to the south and, except for some areas in Central Asia, was much less populous. It spanned over 10,000 kilometres (6,200 mi) east to west across 11 time zones, and over 7,200 kilometres (4,500 mi) north to south. It had five climate zones: tundra, taiga, steppes, desert and mountains.
+
+ Like present-day Russia, the USSR had the world's longest border, measuring over 60,000 kilometres (37,000 mi), or about one and a half times the circumference of Earth. Two-thirds of it was coastline. Across the Bering Strait was the United States. From 1945 to 1991, the country bordered Afghanistan, China, Czechoslovakia, Finland, Hungary, Iran, Mongolia, North Korea, Norway, Poland, Romania, and Turkey.
+
+ The country's highest mountain was Communism Peak (now Ismoil Somoni Peak) in Tajikistan, at 7,495 metres (24,590 ft). The USSR also included most of the world's largest lakes: the Caspian Sea (shared with Iran) and Lake Baikal, the world's largest (by volume) and deepest freshwater lake, which is also an internal body of water in Russia.
+
+ Modern revolutionary activity in the Russian Empire began with the 1825 Decembrist revolt. Although serfdom was abolished in 1861, it was done on terms unfavorable to the peasants and served to encourage revolutionaries. A parliament—the State Duma—was established in 1906 after the Russian Revolution of 1905, but Tsar Nicholas II resisted attempts to move from an absolute to a constitutional monarchy. Social unrest continued and was aggravated during World War I by military defeat and food shortages in major cities.
+
+ A spontaneous popular uprising in Petrograd, in response to the wartime decay of Russia's economy and morale, culminated in the February Revolution and the toppling of Nicholas II and the imperial government in March 1917. The tsarist autocracy was replaced by the Russian Provisional Government, which intended to conduct elections to the Russian Constituent Assembly and to continue fighting on the side of the Entente in World War I.
+
+ At the same time, workers' councils, known in Russian as "Soviets", sprang up across the country. The Bolsheviks, led by Vladimir Lenin, pushed for socialist revolution in the Soviets and on the streets. On 7 November 1917, the Red Guards stormed the Winter Palace in Petrograd, ending the rule of the Provisional Government and leaving all political power to the Soviets.[30] This event would later be officially known in Soviet bibliographies as the Great October Socialist Revolution. In December, the Bolsheviks signed an armistice with the Central Powers, though by February 1918, fighting had resumed. In March, the Soviets ended involvement in the war and signed the Treaty of Brest-Litovsk.
+
+ A long and bloody Civil War ensued between the Reds and the Whites, starting in 1917 and ending in 1923 with the Reds' victory. It included foreign intervention, the execution of the former tsar and his family, and the famine of 1921, which killed about five million people.[31] In March 1921, during a related conflict with Poland, the Peace of Riga was signed, splitting disputed territories in Belarus and Ukraine between the Republic of Poland and Soviet Russia. Soviet Russia had to resolve similar conflicts with the newly established republics of Finland, Estonia, Latvia, and Lithuania.
+
+ On 28 December 1922, a conference of plenipotentiary delegations from the Russian SFSR, the Transcaucasian SFSR, the Ukrainian SSR and the Byelorussian SSR approved the Treaty on the Creation of the USSR[32] and the Declaration of the Creation of the USSR, forming the Union of Soviet Socialist Republics.[33] These two documents were confirmed by the first Congress of Soviets of the USSR and signed by the heads of the delegations,[34] Mikhail Kalinin, Mikhail Tskhakaya, Mikhail Frunze, Grigory Petrovsky, and Alexander Chervyakov,[35] on 30 December 1922. The formal proclamation was made from the stage of the Bolshoi Theatre.
+
+ An intensive restructuring of the economy, industry and politics of the country began in the early days of Soviet power in 1917. A large part of this was done according to the Bolshevik Initial Decrees, government documents signed by Vladimir Lenin. One of the most prominent breakthroughs was the GOELRO plan, which envisioned a major restructuring of the Soviet economy based on total electrification of the country.[36] The plan became the prototype for subsequent Five-Year Plans and was fulfilled by 1931.[37] After the economic policy of "War communism" during the Russian Civil War, as a prelude to fully developing socialism in the country, the Soviet government permitted some private enterprise to coexist alongside nationalized industry in the 1920s, and total food requisition in the countryside was replaced by a food tax.
+
+ From its creation, the government in the Soviet Union was based on the one-party rule of the Communist Party (Bolsheviks).[38] The stated purpose was to prevent the return of capitalist exploitation, and that the principles of democratic centralism would be the most effective in representing the people's will in a practical manner. The debate over the future of the economy provided the background for a power struggle in the years after Lenin's death in 1924. Initially, Lenin was to be replaced by a "troika" consisting of Grigory Zinoviev of the Ukrainian SSR, Lev Kamenev of the Russian SFSR, and Joseph Stalin of the Transcaucasian SFSR.
+
+ On 1 February 1924, the USSR was recognized by the United Kingdom. The same year, a Soviet Constitution was approved, legitimizing the December 1922 union. Despite the foundation of the Soviet state as a federative entity of many constituent republics, each with its own political and administrative entities, the term "Soviet Russia" – strictly applicable only to the Russian Federative Socialist Republic – was often applied to the entire country by non-Soviet writers and politicians.
+
+ On 3 April 1922, Stalin was named the General Secretary of the Communist Party of the Soviet Union. Lenin had appointed Stalin the head of the Workers' and Peasants' Inspectorate, which gave Stalin considerable power. By gradually consolidating his influence and isolating and outmanoeuvring his rivals within the party, Stalin became the undisputed leader of the country and, by the end of the 1920s, established a totalitarian rule. In October 1927, Zinoviev and Leon Trotsky were expelled from the Central Committee and forced into exile.
+
+ In 1928, Stalin introduced the first five-year plan for building a socialist economy. In place of the internationalism expressed by Lenin throughout the Revolution, it aimed to build Socialism in One Country. In industry, the state assumed control over all existing enterprises and undertook an intensive program of industrialization. In agriculture, rather than adhering to the "lead by example" policy advocated by Lenin,[39] forced collectivization of farms was implemented all over the country.
+
+ Famines ensued as a result, causing deaths estimated at three to seven million; surviving kulaks were persecuted, and many were sent to Gulags to do forced labor.[40][41] Social upheaval continued in the mid-1930s. Despite the turmoil of the mid-to-late 1930s, the country developed a robust industrial economy in the years preceding World War II.
+
+ Closer cooperation between the USSR and the West developed in the early 1930s. From 1932 to 1934, the country participated in the World Disarmament Conference. In 1933, diplomatic relations between the United States and the USSR were established when in November, the newly elected President of the United States, Franklin D. Roosevelt, chose to recognize Stalin's Communist government formally and negotiated a new trade agreement between the two countries.[42] In September 1934, the country joined the League of Nations. After the Spanish Civil War broke out in 1936, the USSR actively supported the Republican forces against the Nationalists, who were supported by Fascist Italy and Nazi Germany.[43]
+
+ In December 1936, Stalin unveiled a new constitution that was praised by supporters around the world as the most democratic constitution imaginable, though there was some skepticism.[i] Stalin's Great Purge resulted in the detainment or execution of many "Old Bolsheviks" who had participated in the October Revolution with Lenin. According to declassified Soviet archives, the NKVD arrested more than one and a half million people in 1937 and 1938, of whom 681,692 were shot.[45] Over those two years, there were an average of over one thousand executions a day.[46][j]
+
+ In 1939, the Soviet Union made a dramatic shift toward Nazi Germany. Almost a year after Britain and France had concluded the Munich Agreement with Germany, the Soviet Union made agreements with Germany as well, both military and economic, during extensive talks. The two countries concluded the Molotov–Ribbentrop Pact and the German–Soviet Commercial Agreement in August 1939. The former made possible the Soviet occupation of Lithuania, Latvia, Estonia, Bessarabia, northern Bukovina, and eastern Poland. In late November, unable to coerce the Republic of Finland by diplomatic means into moving its border 25 kilometres (16 mi) back from Leningrad, Stalin ordered the invasion of Finland. In the east, the Soviet military won several decisive victories during border clashes with the Empire of Japan in 1938 and 1939. However, in April 1941, the USSR signed the Soviet–Japanese Neutrality Pact with Japan, recognizing the territorial integrity of Manchukuo, a Japanese puppet state.
+
+ Germany broke the Molotov–Ribbentrop Pact and invaded the Soviet Union on 22 June 1941, starting what was known in the USSR as the Great Patriotic War. The Red Army stopped the seemingly invincible German Army at the Battle of Moscow, aided by an unusually harsh winter. The Battle of Stalingrad, which lasted from late 1942 to early 1943, dealt a severe blow to Germany from which it never fully recovered, and became a turning point in the war. After Stalingrad, Soviet forces drove through Eastern Europe to Berlin before Germany surrendered in 1945. The German Army suffered 80% of its military deaths on the Eastern Front.[50] Harry Hopkins, a close foreign policy advisor to Franklin D. Roosevelt, spoke on 10 August 1943 of the USSR's decisive role in the war.[k]
+
+ In the same year, the USSR, in fulfilment of its agreement with the Allies at the Yalta Conference, denounced the Soviet–Japanese Neutrality Pact in April 1945[52] and invaded Manchukuo and other Japan-controlled territories on 9 August 1945.[53] This conflict ended with a decisive Soviet victory, contributing to the unconditional surrender of Japan and the end of World War II.
+
+ The USSR suffered greatly in the war, losing around 27 million people.[54] Approximately 2.8 million Soviet POWs died of starvation, mistreatment, or executions in just eight months of 1941–42.[55][56] During the war, the country together with the United States, the United Kingdom and China were considered the Big Four Allied powers,[57] and later became the Four Policemen that formed the basis of the United Nations Security Council.[58] It emerged as a superpower in the post-war period. Once denied diplomatic recognition by the Western world, the USSR had official relations with practically every country by the late 1940s. A member of the United Nations at its foundation in 1945, the country became one of the five permanent members of the United Nations Security Council, which gave it the right to veto any of its resolutions.
+
+ During the immediate post-war period, the Soviet Union rebuilt and expanded its economy, while maintaining its strictly centralized control. It took effective control over most of the countries of Eastern Europe (except Yugoslavia and later Albania), turning them into satellite states. The USSR bound its satellite states in a military alliance, the Warsaw Pact, in 1955, and in an economic organization, the Council for Mutual Economic Assistance (Comecon), a counterpart to the European Economic Community (EEC), which operated from 1949 to 1991.[59] The USSR concentrated on its own recovery, seizing and transferring most of Germany's industrial plants, and it exacted war reparations from East Germany, Hungary, Romania, and Bulgaria using Soviet-dominated joint enterprises. It also instituted trading arrangements deliberately designed to favor the country. Moscow controlled the Communist parties that ruled the satellite states, and they followed orders from the Kremlin.[m] Later, the Comecon supplied aid to the eventually victorious Communist Party of China, and its influence grew elsewhere in the world. Fearing its ambitions, the Soviet Union's wartime allies, the United Kingdom and the United States, became its enemies. In the ensuing Cold War, the two sides clashed indirectly in proxy wars.
+
+ Stalin died on 5 March 1953. Without a mutually agreeable successor, the highest Communist Party officials initially opted to rule the Soviet Union jointly through a troika headed by Georgy Malenkov. This did not last, however, and Nikita Khrushchev eventually won the ensuing power struggle by the mid-1950s. In 1956, he denounced Stalin's use of repression and proceeded to ease controls over the party and society. This was known as de-Stalinization.
+
+ Moscow considered Eastern Europe to be a critically vital buffer zone for the forward defence of its western borders, in case of another major invasion such as the German invasion of 1941. For this reason, the USSR sought to cement its control of the region by transforming the Eastern European countries into satellite states, dependent upon and subservient to its leadership. Soviet military force was used to suppress anti-Stalinist uprisings in Hungary and Poland in 1956.
+
+ In the late 1950s, a confrontation with China regarding the Soviet rapprochement with the West, and what Mao Zedong perceived as Khrushchev's revisionism, led to the Sino–Soviet split. This resulted in a break throughout the global Marxist–Leninist movement, with the governments in Albania, Cambodia and Somalia choosing to ally with China.
+
+ During this period of the late 1950s and early 1960s, the USSR continued to realize scientific and technological exploits in the Space Race, rivaling the United States: launching the first artificial satellite, Sputnik 1 in 1957; a living dog named Laika in 1957; the first human being, Yuri Gagarin in 1961; the first woman in space, Valentina Tereshkova in 1963; Alexei Leonov, the first person to walk in space in 1965; the first soft landing on the Moon by spacecraft Luna 9 in 1966; and the first Moon rovers, Lunokhod 1 and Lunokhod 2.[61]
+
+ Khrushchev initiated "The Thaw", a complex shift in political, cultural and economic life in the country. This included some openness and contact with other nations and new social and economic policies with more emphasis on commodity goods, allowing a dramatic rise in living standards while maintaining high levels of economic growth. Censorship was relaxed as well. Khrushchev's reforms in agriculture and administration, however, were generally unproductive. In 1962, he precipitated a crisis with the United States over the Soviet deployment of nuclear missiles in Cuba. An agreement was made with the United States to remove nuclear missiles from both Cuba and Turkey, concluding the crisis. This event caused Khrushchev much embarrassment and loss of prestige, resulting in his removal from power in 1964.
+
+ Following the ousting of Khrushchev, another period of collective leadership ensued, consisting of Leonid Brezhnev as General Secretary, Alexei Kosygin as Premier and Nikolai Podgorny as Chairman of the Presidium, lasting until Brezhnev established himself in the early 1970s as the preeminent Soviet leader.
+
+ In 1968, the Soviet Union and Warsaw Pact allies invaded Czechoslovakia to halt the Prague Spring reforms. In the aftermath, Brezhnev justified the invasion along with the earlier invasions of Eastern European states by introducing the Brezhnev Doctrine, which claimed the right of the Soviet Union to violate the sovereignty of any country that attempted to replace Marxism–Leninism with capitalism.
+
+ Brezhnev presided over détente with the West, which resulted in treaties on armament control (SALT I, SALT II, Anti-Ballistic Missile Treaty), while at the same time building up Soviet military might.
+
+ In October 1977, the third Soviet Constitution was unanimously adopted. The prevailing mood of the Soviet leadership at the time of Brezhnev's death in 1982 was one of aversion to change. The long period of Brezhnev's rule had come to be dubbed one of "standstill", with an ageing and ossified top political leadership. This period is also known as the Era of Stagnation, a period of adverse economic, political, and social effects in the country, which began during the rule of Brezhnev and continued under his successors Yuri Andropov and Konstantin Chernenko.
+
+ In late 1979, the Soviet Union's military intervened in the ongoing civil war in neighboring Afghanistan, effectively ending a détente with the West.
+
+ Two developments dominated the decade that followed: the increasingly apparent crumbling of the Soviet Union's economic and political structures, and the patchwork attempts at reforms to reverse that process. Kenneth S. Deffeyes argued in Beyond Oil that the Reagan administration encouraged Saudi Arabia to lower the price of oil to the point where the Soviets could not make a profit selling their oil, which resulted in the depletion of the country's hard currency reserves.[62]
+
+ Brezhnev's next two successors, transitional figures with deep roots in his tradition, did not last long. Yuri Andropov was 68 years old and Konstantin Chernenko 72 when they assumed power; both died in less than two years. In an attempt to avoid a third short-lived leader, in 1985, the Soviets turned to the next generation and selected Mikhail Gorbachev. He made significant changes in the economy and party leadership, called perestroika. His policy of glasnost freed public access to information after decades of heavy government censorship. Gorbachev also moved to end the Cold War. In 1988, the USSR abandoned its war in Afghanistan and began to withdraw its forces. In the following year, Gorbachev refused to interfere in the internal affairs of the Soviet satellite states, which paved the way for the Revolutions of 1989. With the tearing down of the Berlin Wall and with East and West Germany pursuing unification, the Iron Curtain between the West and Soviet-controlled regions came down.
+
+ At the same time, the Soviet republics started legal moves towards potentially declaring sovereignty over their territories, citing the freedom to secede in Article 72 of the USSR constitution.[63] On 7 April 1990, a law was passed allowing a republic to secede if more than two-thirds of its residents voted for it in a referendum.[64] Many republics held their first free elections of the Soviet era for their own national legislatures in 1990. Many of these legislatures proceeded to produce legislation contradicting the Union laws in what was known as the "War of Laws". In 1989, the Russian SFSR convened a newly elected Congress of People's Deputies. Boris Yeltsin was elected its chairman. On 12 June 1990, the Congress declared Russia's sovereignty over its territory and proceeded to pass laws that attempted to supersede some of the Soviet laws. After a landslide victory of Sąjūdis in Lithuania, that country declared its independence restored on 11 March 1990.
+
+ A referendum for the preservation of the USSR was held on 17 March 1991 in nine republics (the remainder having boycotted the vote), with the majority of the population in those republics voting for preservation of the Union. The referendum gave Gorbachev a minor boost. In the summer of 1991, the New Union Treaty, which would have turned the country into a much looser Union, was agreed upon by eight republics. The signing of the treaty, however, was interrupted by the August Coup—an attempted coup d'état by hardline members of the government and the KGB who sought to reverse Gorbachev's reforms and reassert the central government's control over the republics. After the coup collapsed, Yeltsin was seen as a hero for his decisive actions, while Gorbachev's power was effectively ended. The balance of power tipped significantly towards the republics. In August 1991, Latvia and Estonia immediately declared the restoration of their full independence (following Lithuania's 1990 example). Gorbachev resigned as general secretary in late August, and soon afterwards, the party's activities were indefinitely suspended—effectively ending its rule. By the fall, Gorbachev could no longer influence events outside Moscow, and he was being challenged even there by Yeltsin, who had been elected President of Russia in July 1991.
+
+ The remaining 12 republics continued discussing new, increasingly looser, models of the Union. However, by December all except Russia and Kazakhstan had formally declared independence. During this time, Yeltsin took over what remained of the Soviet government, including the Moscow Kremlin. The final blow was struck on 1 December when Ukraine, the second-most powerful republic, voted overwhelmingly for independence. Ukraine's secession ended any realistic chance of the country staying together even on a limited scale.
+
+ On 8 December 1991, the presidents of Russia, Ukraine and Belarus (formerly Byelorussia), signed the Belavezha Accords, which declared the Soviet Union dissolved and established the Commonwealth of Independent States (CIS) in its place. While doubts remained over the authority of the accords to do this, on 21 December 1991, the representatives of all Soviet republics except Georgia signed the Alma-Ata Protocol, which confirmed the accords. On 25 December 1991, Gorbachev resigned as the President of the USSR, declaring the office extinct. He turned the powers that had been vested in the presidency over to Yeltsin. That night, the Soviet flag was lowered for the last time, and the Russian tricolor was raised in its place.
+
+ The following day, the Supreme Soviet, the highest governmental body, voted both itself and the country out of existence. This is generally recognized as marking the official, final dissolution of the Soviet Union as a functioning state, and the end of the Cold War.[65] The Soviet Army initially remained under overall CIS command but was soon absorbed into the different military forces of the newly independent states. The few remaining Soviet institutions that had not been taken over by Russia ceased to function by the end of 1991.
+
+ Following the dissolution, Russia was internationally recognized[66] as its legal successor on the international stage. To that end, Russia voluntarily accepted all Soviet foreign debt and claimed Soviet overseas properties as its own. Under the 1992 Lisbon Protocol, Russia also agreed to receive all nuclear weapons remaining in the territory of other former Soviet republics. Since then, the Russian Federation has assumed the Soviet Union's rights and obligations. Ukraine has refused to recognize exclusive Russian claims to succession of the USSR and claimed such status for Ukraine as well, which was codified in Articles 7 and 8 of its 1991 law On Legal Succession of Ukraine. Since its independence in 1991, Ukraine has continued to pursue claims against Russia in foreign courts, seeking to recover its share of the foreign property that was owned by the USSR.
+
+ The dissolution was followed by a severe drop in economic and social conditions in post-Soviet states,[67][68] including a rapid increase in poverty,[69][70][71][72] crime,[73][74] corruption,[75][76] unemployment,[77] homelessness,[78][79] rates of disease,[80][81][82] demographic losses,[83] income inequality and the rise of an oligarchical class,[84][69] along with decreases in calorie intake, life expectancy, adult literacy, and income.[85] Between 1988/1989 and 1993/1995, the Gini ratio increased by an average of 9 points for all former socialist countries.[69] The economic shocks that accompanied wholesale privatization were associated with sharp increases in mortality. Data shows Russia, Kazakhstan, Latvia, Lithuania and Estonia saw a tripling of unemployment and a 42% increase in male death rates between 1991 and 1994.[86][87] In the following decades, only five or six of the post-communist states are on a path to joining the wealthy capitalist West while most are falling behind, some to such an extent that it will take over fifty years to catch up to where they were before the fall of the Soviet Bloc.[88][89]
+
+ In summing up the international ramifications of these events, Vladislav Zubok stated: "The collapse of the Soviet empire was an event of epochal geopolitical, military, ideological, and economic significance."[90] Before the dissolution, the country had maintained its status as one of the world's two superpowers for four decades after World War II through its hegemony in Eastern Europe, military strength, economic strength, aid to developing countries, and scientific research, especially in space technology and weaponry.[91]
+
+ The analysis of the succession of states for the 15 post-Soviet states is complex. The Russian Federation is seen as the legal continuator state and is for most purposes the heir to the Soviet Union. It retained ownership of all former Soviet embassy properties, as well as the old Soviet UN membership and permanent membership on the Security Council.
+
+ Of the two other co-founding states of the USSR at the time of the dissolution, Ukraine was the only one that passed laws, similar to Russia's, declaring itself a state-successor of both the Ukrainian SSR and the USSR.[92] Soviet treaties laid the groundwork for Ukraine's future foreign agreements, and under them Ukraine agreed to undertake 16.37% of the debts of the Soviet Union, in exchange for which it was to receive its share of the USSR's foreign property. Russia's position as the "single continuation of the USSR", which became widely accepted in the West, together with constant pressure from Western countries, allowed Russia to dispose of Soviet state property abroad and to conceal information about it. Because of this, Ukraine never ratified the "zero option" agreement that the Russian Federation had signed with other former Soviet republics, as it denied the disclosure of information about the Soviet gold reserves and the Diamond Fund.[93][94] The dispute over former Soviet property and assets between the two former republics is still ongoing:
+
+ The conflict is unsolvable. We can continue to poke Kiev with handouts in the calculation of "solving the problem", only it won't be solved. Going to trial is also pointless: for a number of European countries this is a political issue, and it is clear in whose favor they will decide. What to do in this situation is an open question. Search for non-trivial solutions. But we must remember that in 2014, at the prompting of the then Ukrainian Prime Minister Yatsenyuk, litigation with Russia resumed in 32 countries.
+
+ A similar situation occurred with the restitution of cultural property. Although on 14 February 1992 Russia and other former Soviet republics signed an agreement "On the return of cultural and historic property to the origin states" in Minsk, its implementation was halted by the Russian State Duma, which eventually passed the "Federal Law on Cultural Valuables Displaced to the USSR as a Result of the Second World War and Located on the Territory of the Russian Federation", making restitution currently impossible.[96]
+
+ There are additionally four states that claim independence from the other internationally recognised post-Soviet states but possess limited international recognition: Abkhazia, Nagorno-Karabakh, South Ossetia and Transnistria. The Chechen separatist movement of the Chechen Republic of Ichkeria lacks any international recognition.
+
+ During his rule, Stalin always made the final policy decisions. Otherwise, Soviet foreign policy was set by the Commission on the Foreign Policy of the Central Committee of the Communist Party of the Soviet Union, or by the party's highest body the Politburo. Operations were handled by the separate Ministry of Foreign Affairs. It was known as the People's Commissariat for Foreign Affairs (or Narkomindel), until 1946. The most influential spokesmen were Georgy Chicherin (1872–1936), Maxim Litvinov (1876–1951), Vyacheslav Molotov (1890–1986), Andrey Vyshinsky (1883–1954) and Andrei Gromyko (1909–1989). Intellectuals were based in the Moscow State Institute of International Relations.[97]
+
+ The Communist leadership of the Soviet Union intensely debated foreign policy issues and changed direction several times. Even after Stalin assumed dictatorial control in the late 1920s, there were debates, and he frequently changed positions.[106]
+
+ During the country's early period, it was assumed that Communist revolutions would break out soon in every major industrial country, and it was the Soviet responsibility to assist them. The Comintern was the weapon of choice. A few revolutions did break out, but they were quickly suppressed; the longest-lasting, the Hungarian Soviet Republic, survived only from 21 March 1919 to 1 August 1919. The Russian Bolsheviks were in no position to give any help.
+
+ By 1921, Lenin, Trotsky, and Stalin realized that capitalism had stabilized itself in Europe and there would not be any widespread revolutions anytime soon. It became the duty of the Russian Bolsheviks to protect what they had in Russia, and avoid military confrontations that might destroy their bridgehead. Russia was now a pariah state, along with Germany. The two came to terms in 1922 with the Treaty of Rapallo that settled long-standing grievances. At the same time, the two countries secretly set up training programs for the illegal German army and air force operations at hidden camps in the USSR.[107]
+
+ Moscow eventually stopped threatening other states, and instead worked to open peaceful relationships in terms of trade and diplomatic recognition. The United Kingdom dismissed the warnings of Winston Churchill and a few others about a continuing communist threat, and opened trade relations and de facto diplomatic recognition in 1922. There was hope for a settlement of the pre-war tsarist debts, but it was repeatedly postponed. Formal recognition came when the new Labour Party came to power in 1924.[108] All the other countries followed suit in opening trade relations. Henry Ford opened large-scale business relations with the Soviets in the late 1920s, hoping that it would lead to long-term peace. Finally, in 1933, the United States officially recognized the USSR, a decision backed by public opinion and especially by US business interests that expected an opening of a new profitable market.[109]
+
+ In the late 1920s and early 1930s, Stalin ordered Communist parties across the world to strongly oppose non-communist political parties, labor unions or other organizations on the left. Stalin reversed himself in 1934 with the Popular Front program that called on all Communist parties to join together with all anti-Fascist political, labor, and organizational forces that were opposed to fascism, especially of the Nazi variety.[110][111]
+
+ In 1939, half a year after the Munich Agreement, the USSR attempted to form an anti-Nazi alliance with France and Britain.[112] Adolf Hitler proposed a better deal, which would give the USSR control over much of Eastern Europe through the Molotov–Ribbentrop Pact. In September, Germany invaded Poland, and the USSR also invaded later that month, resulting in the partition of Poland. In response, Britain and France declared war on Germany, marking the beginning of World War II.[113]
+
+ There were three power hierarchies in the Soviet Union: the legislature represented by the Supreme Soviet of the Soviet Union, the government represented by the Council of Ministers, and the Communist Party of the Soviet Union (CPSU), the only legal party and the final policymaker in the country.[114]
+
+ At the top of the Communist Party was the Central Committee, elected at Party Congresses and Conferences. In turn, the Central Committee voted for a Politburo (called the Presidium between 1952–1966), Secretariat and the General Secretary (First Secretary from 1953 to 1966), the de facto highest office in the Soviet Union.[115] Depending on the degree of power consolidation, it was either the Politburo as a collective body or the General Secretary, who always was one of the Politburo members, that effectively led the party and the country[116] (except for the period of the highly personalized authority of Stalin, exercised directly through his position in the Council of Ministers rather than the Politburo after 1941).[117] They were not controlled by the general party membership, as the key principle of the party organization was democratic centralism, demanding strict subordination to higher bodies, and elections went uncontested, endorsing the candidates proposed from above.[118]
+
+ The Communist Party maintained its dominance over the state mainly through its control over the system of appointments. All senior government officials and most deputies of the Supreme Soviet were members of the CPSU. Of the party heads themselves, Stalin (1941–1953) and Khrushchev (1958–1964) were Premiers. Upon the forced retirement of Khrushchev, the party leader was prohibited from this kind of double membership,[119] but the later General Secretaries for at least some part of their tenure occupied the mostly ceremonial position of Chairman of the Presidium of the Supreme Soviet, the nominal head of state. The institutions at lower levels were overseen and at times supplanted by primary party organizations.[120]
+
+ However, in practice the degree of control the party was able to exercise over the state bureaucracy, particularly after the death of Stalin, was far from total, with the bureaucracy pursuing different interests that were at times in conflict with the party.[121] Nor was the party itself monolithic from top to bottom, although factions were officially banned.[122]
+
+ The Supreme Soviet (successor of the Congress of Soviets and Central Executive Committee) was nominally the highest state body for most of the Soviet history,[123] at first acting as a rubber stamp institution, approving and implementing all decisions made by the party. However, its powers and functions were extended in the late 1950s, 1960s and 1970s, including the creation of new state commissions and committees. It gained additional powers relating to the approval of the Five-Year Plans and the government budget.[124] The Supreme Soviet elected a Presidium to wield its power between plenary sessions,[125] ordinarily held twice a year, and appointed the Supreme Court,[126] the Procurator General[127] and the Council of Ministers (known before 1946 as the Council of People's Commissars), headed by the Chairman (Premier) and managing an enormous bureaucracy responsible for the administration of the economy and society.[125] State and party structures of the constituent republics largely emulated the structure of the central institutions, although the Russian SFSR, unlike the other constituent republics, for most of its history had no republican branch of the CPSU, being ruled directly by the union-wide party until 1990. Local authorities were organized likewise into party committees, local Soviets and executive committees. While the state system was nominally federal, the party was unitary.[128]
+
+ The state security police (the KGB and its predecessor agencies) played an important role in Soviet politics. It was instrumental in the Great Purge,[129] but was brought under strict party control after Stalin's death. Under Yuri Andropov, the KGB engaged in the suppression of political dissent and maintained an extensive network of informers, reasserting itself as a political actor to some extent independent of the party-state structure,[130] culminating in the anti-corruption campaign targeting high-ranking party officials in the late 1970s and early 1980s.[131]
+
+ The constitution, which was promulgated in 1918, 1924, 1936 and 1977,[132] did not limit state power. No formal separation of powers existed between the Party, Supreme Soviet and Council of Ministers[133] that represented executive and legislative branches of the government. The system was governed less by statute than by informal conventions, and no settled mechanism of leadership succession existed. Bitter and at times deadly power struggles took place in the Politburo after the deaths of Lenin[134] and Stalin,[135] as well as after Khrushchev's dismissal,[136] itself due to a decision by both the Politburo and the Central Committee.[137] All leaders of the Communist Party before Gorbachev died in office, except Georgy Malenkov[138] and Khrushchev, both dismissed from the party leadership amid internal struggle within the party.[137]
+
+ Between 1988 and 1990, facing considerable opposition, Mikhail Gorbachev enacted reforms shifting power away from the highest bodies of the party and making the Supreme Soviet less dependent on them. The Congress of People's Deputies was established, the majority of whose members were directly elected in competitive elections held in March 1989. The Congress now elected the Supreme Soviet, which became a full-time parliament, and much stronger than before. For the first time since the 1920s, it refused to rubber stamp proposals from the party and Council of Ministers.[139] In 1990, Gorbachev introduced and assumed the position of the President of the Soviet Union, concentrated power in his executive office, independent of the party, and subordinated the government,[140] now renamed the Cabinet of Ministers of the USSR, to himself.[141]
+
+ Tensions grew between the Union-wide authorities under Gorbachev, reformists led in Russia by Boris Yeltsin and controlling the newly elected Supreme Soviet of the Russian SFSR, and communist hardliners. On 19–21 August 1991, a group of hardliners staged a coup attempt. The coup failed, and the State Council of the Soviet Union became the highest organ of state power "in the period of transition".[142] Gorbachev resigned as General Secretary, only remaining President for the final months of the existence of the USSR.[143]
+
+ The judiciary was not independent of the other branches of government. The Supreme Court supervised the lower courts (People's Court) and applied the law as established by the constitution or as interpreted by the Supreme Soviet. The Constitutional Oversight Committee reviewed the constitutionality of laws and acts. The Soviet Union used the inquisitorial system of Roman law, where the judge, procurator, and defence attorney collaborate to establish the truth.[144]
+
+ Constitutionally, the USSR was a federation of constituent Union Republics, which were either unitary states, such as Ukraine or Byelorussia (SSRs), or federations, such as Russia or Transcaucasia (SFSRs),[114] all four being the founding republics that signed the Treaty on the Creation of the USSR in December 1922. In 1924, during the national delimitation in Central Asia, Uzbekistan and Turkmenistan were formed from parts of Russia's Turkestan ASSR and two Soviet dependencies, the Khorezm and Bukharan SSRs. In 1929, Tajikistan was split off from the Uzbekistan SSR. With the constitution of 1936, the Transcaucasian SFSR was dissolved, resulting in its constituent republics of Armenia, Georgia and Azerbaijan being elevated to Union Republics, while Kazakhstan and Kirghizia were split off from the Russian SFSR, resulting in the same status.[145] In August 1940, Moldavia was formed from parts of Ukraine and of Bessarabia and northern Bukovina. Estonia, Latvia and Lithuania were also admitted into the union as SSRs, an annexation that was not recognized by most of the international community and was considered an illegal occupation. Karelia was split off from Russia as a Union Republic in March 1940 and was reabsorbed in 1956. Between July 1956 and September 1991, there were 15 union republics (see map below).[146]
158
+
159
+ While nominally a union of equals, in practice the Soviet Union was dominated by Russians. The domination was so absolute that for most of its existence, the country was commonly (but incorrectly) referred to as "Russia". While the RSFSR was technically only one republic within the larger union, it was by far the largest (both in terms of population and area), most powerful, most developed, and the industrial center of the Soviet Union. Historian Matthew White wrote that it was an open secret that the country's federal structure was "window dressing" for Russian dominance. For that reason, the people of the USSR were usually called "Russians", not "Soviets", since "everyone knew who really ran the show".[147]
+
+ Under the Military Law of September 1925, the Soviet Armed Forces consisted of the Land Forces, the Air Force, the Navy, the Joint State Political Directorate (OGPU), and the Internal Troops.[148] The OGPU later became independent and in 1934 joined the NKVD, so that its internal troops were under the joint leadership of the defense and internal commissariats. After World War II, the Strategic Missile Forces (1959), Air Defense Forces (1948) and National Civil Defense Forces (1970) were formed, ranking first, third, and sixth in the official Soviet system of importance (the Ground Forces were second, the Air Force fourth, and the Navy fifth).
+
+ The army had the greatest political influence. In 1989, it numbered two million soldiers, divided among 150 motorized and 52 armored divisions. Until the early 1960s, the Soviet navy was a rather small military branch, but after the Cuban Missile Crisis, under the leadership of Sergei Gorshkov, it expanded significantly, becoming known for its battlecruisers and submarines; by 1989 it numbered 500,000 men. The Soviet Air Force focused on a fleet of strategic bombers whose wartime role was to destroy enemy infrastructure and nuclear capacity; it also had a number of fighters and tactical bombers to support the army. The Strategic Missile Forces had more than 1,400 intercontinental ballistic missiles (ICBMs), deployed across 28 bases and 300 command centers.
+
+ In the post-war period, the Soviet Army was directly involved in several military operations abroad. These included the suppression of the uprising in East Germany (1953) and the Hungarian revolution (1956), and the invasion of Czechoslovakia (1968). The Soviet Union also fought in the war in Afghanistan between 1979 and 1989.
+
+ In the Soviet Union, general conscription applied.
+
+ At the end of the 1950s, with the help of engineers and technologies captured and imported from defeated Nazi Germany, the Soviets constructed the first satellite, Sputnik 1, and thus beat the United States into space. This was followed by other successful satellites, and experimental dogs were also sent into orbit. On April 12, 1961, the first cosmonaut, Yuri Gagarin, was sent into space. He orbited the Earth once and landed successfully in the Kazakh steppe. At that time, the first plans for space shuttles and orbital stations were drawn up in Soviet design offices, but in the end personal disputes between designers and management prevented their realization.
+
+ The first big setback for the USSR was the American Moon landing, which the Soviets were unable to answer in time with a project of their own. In the 1970s, more specific proposals for the design of a space shuttle began to emerge, but shortcomings, especially in the electronics industry (rapid overheating of components), postponed the program until the end of the 1980s. The first shuttle, the Buran, flew in 1988, but without a human crew. Another shuttle, Ptichka, was still under construction when the shuttle project was canceled in 1991. Their launcher, the Energia super-heavy rocket, described as the most powerful in the world, remains unused today.
+
+ In the late 1980s, the Soviet Union managed to build the Mir orbital station. It built on the design of the Salyut stations, and its tasks were purely civilian and research-oriented. In the 1990s, with the US Skylab long since deorbited, it was the only orbital station in operation. Gradually, other modules were added to it, including American ones. However, the technical condition of the station deteriorated rapidly, especially after a fire on board, so in 2001 it was deorbited into the atmosphere, where it burned up.
+
+ The Soviet Union adopted a command economy, whereby production and distribution of goods were centralized and directed by the government. The first Bolshevik experience with a command economy was the policy of War communism, which involved the nationalization of industry, centralized distribution of output, coercive requisition of agricultural production, and attempts to eliminate money circulation, private enterprises and free trade. After the severe economic collapse, Lenin replaced war communism with the New Economic Policy (NEP) in 1921, legalizing free trade and private ownership of small businesses. The economy quickly recovered as a result.[149]
+
+ After a long debate among the members of Politburo about the course of economic development, by 1928–1929, upon gaining control of the country, Stalin abandoned the NEP and pushed for full central planning, starting forced collectivization of agriculture and enacting draconian labor legislation. Resources were mobilized for rapid industrialization, which significantly expanded Soviet capacity in heavy industry and capital goods during the 1930s.[149] The primary motivation for industrialization was preparation for war, mostly due to distrust of the outside capitalist world.[150] As a result, the USSR was transformed from a largely agrarian economy into a great industrial power, leading the way for its emergence as a superpower after World War II.[151] The war caused extensive devastation of the Soviet economy and infrastructure, which required massive reconstruction.[152]
+
+ By the early 1940s, the Soviet economy had become relatively self-sufficient; for most of the period until the creation of Comecon, only a tiny share of domestic products was traded internationally.[153] After the creation of the Eastern Bloc, external trade rose rapidly. However, the influence of the world economy on the USSR was limited by fixed domestic prices and a state monopoly on foreign trade.[154] Grain and sophisticated consumer manufactures became major import articles from around the 1960s.[153] During the arms race of the Cold War, the Soviet economy was burdened by military expenditures, heavily lobbied for by a powerful bureaucracy dependent on the arms industry. At the same time, the USSR became the largest arms exporter to the Third World. Significant amounts of Soviet resources during the Cold War were allocated in aid to the other socialist states.[153]
+
+ From the 1930s until its dissolution in late 1991, the way the Soviet economy operated remained essentially unchanged. The economy was formally directed by central planning, carried out by Gosplan and organized in five-year plans. However, in practice, the plans were highly aggregated and provisional, subject to ad hoc intervention by superiors. All critical economic decisions were taken by the political leadership. Allocated resources and plan targets were usually denominated in rubles rather than in physical goods. Credit was discouraged, but widespread. The final allocation of output was achieved through relatively decentralized, unplanned contracting. Although in theory prices were legally set from above, in practice they were often negotiated, and informal horizontal links (e.g. between producer factories) were widespread.[149]
+
+ A number of basic services were state-funded, such as education and health care. In the manufacturing sector, heavy industry and defence were prioritized over consumer goods.[155] Consumer goods, particularly outside large cities, were often scarce, of poor quality and limited variety. Under the command economy, consumers had almost no influence on production, and the changing demands of a population with growing incomes could not be satisfied by supplies at rigidly fixed prices.[156] A massive unplanned second economy grew up at low levels alongside the planned one, providing some of the goods and services that the planners could not. The legalization of some elements of the decentralized economy was attempted with the reform of 1965.[149]
+
+ Although statistics of the Soviet economy are notoriously unreliable and its economic growth difficult to estimate precisely,[157][158] by most accounts, the economy continued to expand until the mid-1980s. During the 1950s and 1960s, it had comparatively high growth and was catching up to the West.[159] After 1970, however, growth, while still positive, slowed much more sharply and consistently than in other countries, despite a rapid increase in the capital stock (the rate of capital increase was surpassed only by Japan's).[149]
+
+ Overall, the growth rate of per capita income in the Soviet Union between 1960 and 1989 was slightly above the world average (based on 102 countries).[citation needed] According to Stanley Fischer and William Easterly, growth could have been faster. By their calculation, per capita income in 1989 should have been twice as high as it was, considering the amount of investment, education and population. The authors attribute this poor performance to the low productivity of capital.[160] Steven Rosefielde states that the standard of living declined due to Stalin's despotism. While there was a brief improvement after his death, it lapsed into stagnation.[161]
+
+ In 1987, Mikhail Gorbachev attempted to reform and revitalize the economy with his program of perestroika. His policies relaxed state control over enterprises but did not replace it with market incentives, resulting in a sharp decline in output. The economy, already suffering from reduced petroleum export revenues, started to collapse. Prices were still fixed, and property was still largely state-owned, until after the country's dissolution.[149][156] For most of the period after World War II until its collapse, Soviet GDP (PPP) was the second-largest in the world, and third during the second half of the 1980s,[162] although on a per-capita basis, it was behind that of First World countries.[163] Compared to countries with similar per-capita GDP in 1928, the Soviet Union experienced significant growth.[164]
+
+ In 1990, the country had a Human Development Index of 0.920, placing it in the "high" category of human development. It was the third-highest in the Eastern Bloc, behind Czechoslovakia and East Germany, and 25th of the 130 countries ranked worldwide.[165]
+
+ The Soviet Union's need for fuel declined from the 1970s to the 1980s,[166] both per ruble of gross social product and per ruble of industrial product. At the start, this decline was very rapid but gradually slowed between 1970 and 1975. From 1975 to 1980, it was slower still,[clarification needed] at only 2.6%.[167] David Wilson, a historian, believed that the gas industry would account for 40% of Soviet fuel production by the end of the century; his prediction did not come to fruition because of the USSR's collapse.[168] The USSR, in theory, would have continued to have an economic growth rate of 2–2.5% during the 1990s because of Soviet energy fields.[clarification needed][169] However, the energy sector faced many difficulties, among them the country's high military expenditure and hostile relations with the First World.[170]
+
+ In 1991, the Soviet Union had a pipeline network of 82,000 kilometres (51,000 mi) for crude oil and another 206,500 kilometres (128,300 mi) for natural gas.[171] Petroleum and petroleum-based products, natural gas, metals, wood, agricultural products, and a variety of manufactured goods, primarily machinery, arms and military equipment, were exported.[172] In the 1970s and 1980s, the USSR heavily relied on fossil fuel exports to earn hard currency.[153] At its peak in 1988, it was the largest producer and second-largest exporter of crude oil, surpassed only by Saudi Arabia.[173]
+
+ The Soviet Union placed great emphasis on science and technology within its economy.[174] However, the most remarkable Soviet successes in technology, such as producing the world's first space satellite, were typically the responsibility of the military.[155] Lenin believed that the USSR would never overtake the developed world if it remained as technologically backward as it was upon its founding. Soviet authorities proved their commitment to Lenin's belief by developing massive networks of research and development organizations. In the early 1960s, the Soviets awarded 40% of chemistry PhDs to women, compared to only 5% in the United States.[175] By 1989, Soviet scientists were among the world's best-trained specialists in several areas, such as energy physics, selected areas of medicine, mathematics, welding and military technologies. Due to rigid state planning and bureaucracy, however, the Soviets remained far behind technologically in chemistry, biology, and computers compared to the First World.
+
+ Under the Reagan administration, Project Socrates determined that the Soviet Union addressed the acquisition of science and technology in a manner radically different from what the US was using. In the case of the US, economic prioritization was being used for indigenous research and development as the means to acquire science and technology in both the private and public sectors. In contrast, the USSR was maneuvering offensively and defensively in the acquisition and utilization of worldwide technology, to increase the competitive advantage it gained from that technology while preventing the US from acquiring a competitive advantage of its own. However, this technology-based planning was executed in a centralized, government-centric manner that greatly hindered its flexibility, which the US exploited to undermine the strength of the Soviet Union and thus foster its reform.[176][177][178]
+
+ Transport was a vital component of the country's economy. The economic centralization of the late 1920s and 1930s led to the development of infrastructure on a massive scale, most notably the establishment of Aeroflot, an aviation enterprise.[179] The country had a wide variety of modes of transport by land, water and air.[171] However, due to inadequate maintenance, much of the road, water and civil aviation transport was outdated and technologically backward compared to the First World.[180]
+
+ Soviet rail transport was the largest and most intensively used in the world;[180] it was also better developed than most of its Western counterparts.[181] By the late 1970s and early 1980s, Soviet economists were calling for the construction of more roads to alleviate some of the burdens from the railways and to improve the Soviet government budget.[182] The street network and automotive industry[183] remained underdeveloped,[184] and dirt roads were common outside major cities.[185] Soviet maintenance projects proved unable to take care of even the few roads the country had. By the early-to-mid-1980s, the Soviet authorities tried to solve the road problem by ordering the construction of new ones.[185] Meanwhile, the automobile industry was growing at a faster rate than road construction.[186] The underdeveloped road network led to a growing demand for public transport.[187]
+
+ Despite improvements, several aspects of the transport sector were still[when?] riddled with problems due to outdated infrastructure, lack of investment, corruption and bad decision-making. Soviet authorities were unable to meet the growing demand for transport infrastructure and services.
+
+ The Soviet merchant navy was one of the largest in the world.[171]
+
+ Excess deaths throughout World War I and the Russian Civil War (including the postwar famine) amounted to a combined total of 18 million,[188] some 10 million in the 1930s,[47] and more than 26 million in 1941–45. The postwar Soviet population was 45 to 50 million smaller than it would have been if pre-war demographic growth had continued.[54] According to Catherine Merridale, "... reasonable estimate would place the total number of excess deaths for the whole period somewhere around 60 million."[189]
+
+ The birth rate of the USSR decreased from 44.0 per thousand in 1926 to 18.0 in 1974, mainly due to increasing urbanization and the rising average age of marriage. The mortality rate demonstrated a gradual decrease as well – from 23.7 per thousand in 1926 to 8.7 in 1974. In general, the birth rates of the southern republics in Transcaucasia and Central Asia were considerably higher than those in the northern parts of the Soviet Union, and in some cases even increased in the post–World War II period, a phenomenon partly attributed to slower rates of urbanization and traditionally earlier marriages in the southern republics.[190] Soviet Europe moved towards sub-replacement fertility, while Soviet Central Asia continued to exhibit population growth well above replacement-level fertility.[191]
+
+ The late 1960s and the 1970s witnessed a reversal of the declining trajectory of the mortality rate in the USSR. The rise was especially notable among men of working age, and was most pronounced in Russia and other predominantly Slavic areas of the country.[192] An analysis of the official data from the late 1980s showed that, after worsening in the late 1970s and the early 1980s, adult mortality began to improve again.[193] The infant mortality rate increased from 24.7 in 1970 to 27.9 in 1974. Some researchers regarded the rise as mostly real, a consequence of worsening health conditions and services.[194] The rises in both adult and infant mortality were not explained or defended by Soviet officials, and the Soviet government stopped publishing all mortality statistics for ten years. Soviet demographers and health specialists remained silent about the mortality increases until the late 1980s, when the publication of mortality data resumed and researchers could delve into the real causes.[195]
+
+ Under Lenin, the state made explicit commitments to promote the equality of men and women. Many early Russian feminists and ordinary Russian working women actively participated in the Revolution, and many more were affected by the events of that period and the new policies. Beginning in October 1918, Lenin's government liberalized divorce and abortion laws, decriminalized homosexuality (re-criminalized in the 1930s), permitted cohabitation, and ushered in a host of reforms.[196] However, without birth control, the new system produced many broken marriages, as well as countless out-of-wedlock children.[197] The epidemic of divorces and extramarital affairs created social hardships when Soviet leaders wanted people to concentrate their efforts on growing the economy. Giving women control over their fertility also led to a precipitous decline in the birth rate, perceived as a threat to the country's military power. By 1936, Stalin had reversed most of the liberal laws, ushering in a pronatalist era that lasted for decades.[198]
+
+ In 1917, Russia became the first great power to grant women the right to vote.[199] After heavy casualties in World Wars I and II, women outnumbered men in Russia by a 4:3 ratio.[200] This contributed to the larger role women played in Russian society compared to other great powers at the time.
+
+ Anatoly Lunacharsky became the first People's Commissar for Education of Soviet Russia. In the beginning, the Soviet authorities placed great emphasis on the elimination of illiteracy. All left-handed children were forced to write with their right hand in the Soviet school system.[201][202][203][204] Literate people were automatically hired as teachers.[citation needed] For a short period, quality was sacrificed for quantity. By 1940, Stalin could announce that illiteracy had been eliminated. Throughout the 1930s, social mobility rose sharply, which has been attributed to reforms in education.[205] In the aftermath of World War II, the country's educational system expanded dramatically, to tremendous effect. In the 1960s, nearly all children had access to education, the only exception being those living in remote areas. Nikita Khrushchev tried to make education more accessible, making it clear to children that education was closely linked to the needs of society. Education also became important in giving rise to the New Man.[206] Citizens directly entering the workforce had the constitutional right to a job and to free vocational training.
+
+ The education system was highly centralized and universally accessible to all citizens, with affirmative action for applicants from nations associated with cultural backwardness. However, as part of a general antisemitic policy, an unofficial Jewish quota was applied[when?] in the leading institutions of higher education by subjecting Jewish applicants to harsher entrance examinations.[207][208][209][210] The Brezhnev era also introduced a rule requiring all university applicants to present a reference from the local Komsomol party secretary.[211] According to 1986 statistics, the number of higher-education students per 10,000 population was 181 for the USSR, compared to 517 for the US.[212]
+
+ The Soviet Union was an ethnically diverse country, with more than 100 distinct ethnic groups. The total population was estimated at 293 million in 1991. According to a 1990 estimate, the majority were Russians (50.78%), followed by Ukrainians (15.45%) and Uzbeks (5.84%).[213]
+
+ All citizens of the USSR had their own ethnic affiliation. The ethnicity of a person was chosen at the age of sixteen[214] by the child's parents. If the parents did not agree, the child was automatically assigned the ethnicity of the father. Partly due to Soviet policies, some of the smaller minority ethnic groups were considered part of larger ones, such as the Mingrelians of Georgia, who were classified with the linguistically related Georgians.[215] Some ethnic groups voluntarily assimilated, while others were brought in by force. Russians, Belarusians, and Ukrainians shared close cultural ties, while other groups did not. With multiple nationalities living in the same territory, ethnic antagonisms developed over the years.[216][neutrality is disputed]
+
+ Members of various ethnicities participated in legislative bodies. Organs of power such as the Politburo and the Secretariat of the Central Committee were formally ethnically neutral, but in reality ethnic Russians were overrepresented, although there were also non-Russian leaders in the Soviet leadership, such as Joseph Stalin, Grigory Zinoviev, Nikolai Podgorny and Andrei Gromyko. During the Soviet era, a significant number of ethnic Russians and Ukrainians migrated to other Soviet republics, and many of them settled there. According to the last census, in 1989, the Russian "diaspora" in the Soviet republics had reached 25 million.[217]
+
+ Ethnographic map of the Soviet Union, 1941
+
+ Number and share of Ukrainians in the population of the regions of the RSFSR (1926 census)
+
+ Number and share of Ukrainians in the population of the regions of the RSFSR (1979 census)
+
+ In 1917, before the revolution, health conditions were significantly behind those of developed countries. As Lenin later noted, "Either the lice will defeat socialism, or socialism will defeat the lice".[218] The Soviet principle of health care was conceived by the People's Commissariat for Health in 1918. Health care was to be controlled by the state and provided to its citizens free of charge, a revolutionary concept at the time. Article 42 of the 1977 Soviet Constitution gave all citizens the right to health protection and free access to any health institution in the USSR. Before Leonid Brezhnev became General Secretary, the Soviet healthcare system was held in high esteem by many foreign specialists. This changed, however, from Brezhnev's accession through Mikhail Gorbachev's tenure as leader, during which the health care system was heavily criticized for many basic faults, such as the quality of service and the unevenness of its provision.[219] Minister of Health Yevgeniy Chazov, during the 19th Congress of the Communist Party of the Soviet Union, while highlighting such successes as having the most doctors and hospitals in the world, recognized the system's areas for improvement and felt that billions of Soviet rubles had been squandered.[220]
+
+ After the revolution, life expectancy for all age groups went up. This statistic was seen by some as evidence that the socialist system was superior to the capitalist system. The improvements continued into the 1960s, when statistics indicated that life expectancy had briefly surpassed that of the United States. Life expectancy started to decline in the 1970s, possibly because of alcohol abuse. At the same time, infant mortality began to rise. After 1974, the government stopped publishing statistics on the matter. This trend can be partly explained by the number of pregnancies rising drastically in the Asian part of the country, where infant mortality was highest, while declining markedly in the more developed European part of the Soviet Union.[221]
+
+ Under Lenin, the government gave small language groups their own writing systems.[222] The development of these writing systems was highly successful, even though some flaws were detected. During the later days of the USSR, countries with the same multilingual situation implemented similar policies. A serious problem when creating these writing systems was that the languages differed greatly from one another dialectally.[223] When a language had been given a writing system and appeared in a notable publication, it would attain "official language" status. There were many minority languages which never received their own writing system; therefore, their speakers were forced to learn a second language.[224] There are examples where the government retreated from this policy, most notably under Stalin, when education was discontinued in languages that were not widespread. These languages were then assimilated into another language, mostly Russian.[225] During World War II, some minority languages were banned, and their speakers accused of collaborating with the enemy.[226]
+
+ As the most widely spoken of the Soviet Union's many languages, Russian de facto functioned as an official language, as the "language of interethnic communication" (Russian: язык межнационального общения), but only assumed the de jure status as the official national language in 1990.[227]
+
+ Christianity and Islam had the highest number of adherents among the religious citizens.[228] Eastern Christianity predominated among Christians, with Russia's traditional Russian Orthodox Church being the largest Christian denomination. About 90% of the Soviet Union's Muslims were Sunnis, with Shias being concentrated in the Azerbaijan SSR.[228] Smaller groups included Roman Catholics, Jews, Buddhists, and a variety of Protestant denominations (especially Baptists and Lutherans).[228]
+
+ Religious influence had been strong in the Russian Empire. The Russian Orthodox Church enjoyed a privileged status as the church of the monarchy and took part in carrying out official state functions.[229] The immediate period following the establishment of the Soviet state included a struggle against the Orthodox Church, which the revolutionaries considered an ally of the former ruling classes.[230]
+
+ In Soviet law, the "freedom to hold religious services" was constitutionally guaranteed, although the ruling Communist Party regarded religion as incompatible with the Marxist spirit of scientific materialism.[230] In practice, the Soviet system subscribed to a narrow interpretation of this right, and in fact utilized a range of official measures to discourage religion and curb the activities of religious groups.[230]
+
+ The 1918 Council of People's Commissars decree establishing the Russian SFSR as a secular state also decreed that "the teaching of religion in all [places] where subjects of general instruction are taught, is forbidden. Citizens may teach and may be taught religion privately."[231] Among further restrictions, those adopted in 1929 included express prohibitions on a range of church activities, including meetings for organized Bible study.[230] Both Christian and non-Christian establishments were shut down by the thousands in the 1920s and 1930s. By 1940, as many as 90% of the churches, synagogues, and mosques that had been operating in 1917 were closed.[232]
+
+ Under the doctrine of state atheism, there was a "government-sponsored program of forced conversion to atheism" conducted by the Communists.[233][234][235] The regime targeted religions based on state interests, and while most organized religions were never outlawed, religious property was confiscated, believers were harassed, and religion was ridiculed while atheism was propagated in schools.[236] In 1925, the government founded the League of Militant Atheists to intensify the propaganda campaign.[237] Accordingly, although personal expressions of religious faith were not explicitly banned, a strong sense of social stigma was imposed on them by the formal structures and mass media, and it was generally considered unacceptable for members of certain professions (teachers, state bureaucrats, soldiers) to be openly religious. As for the Russian Orthodox Church, Soviet authorities sought to control it and, in times of national crisis, to exploit it for the regime's own purposes; but their ultimate goal was to eliminate it. During the first five years of Soviet power, the Bolsheviks executed 28 Russian Orthodox bishops and over 1,200 Russian Orthodox priests. Many others were imprisoned or exiled. Believers were harassed and persecuted. Most seminaries were closed, and the publication of most religious material was prohibited. By 1941, only 500 churches remained open out of about 54,000 in existence before World War I.
+
+ Convinced that religious anti-Sovietism had become a thing of the past, and with the looming threat of war, the Stalin regime began shifting to a more moderate religion policy in the late 1930s.[238] Soviet religious establishments overwhelmingly rallied to support the war effort during World War II. Amid other accommodations to religious faith after the German invasion, churches were reopened. Radio Moscow began broadcasting a religious hour, and a historic meeting between Stalin and Orthodox Church leader Patriarch Sergius of Moscow was held in 1943. Stalin had the support of the majority of the religious people in the USSR even through the late 1980s.[238] The general tendency of this period was an increase in religious activity among believers of all faiths.[239]
+
+ Under Nikita Khrushchev, the state leadership clashed with the churches in 1958–1964, a period when atheism was emphasized in the educational curriculum and numerous state publications promoted atheistic views.[238] During this period, the number of churches fell from 20,000 to 10,000 between 1959 and 1965, and the number of synagogues dropped from 500 to 97.[240] The number of working mosques also declined, falling from 1,500 to 500 within a decade.[240]
+
+ Religious institutions remained monitored by the Soviet government, but churches, synagogues, temples, and mosques were all given more leeway in the Brezhnev era.[241] Official relations between the Orthodox Church and the government again warmed to the point that the Brezhnev government twice honored Orthodox Patriarch Alexy I with the Order of the Red Banner of Labour.[242] A poll conducted by Soviet authorities in 1982 recorded 20% of the Soviet population as "active religious believers."[243]
+
+ The culture of the Soviet Union passed through several stages during the USSR's existence. During the first decade following the revolution, there was relative freedom, and artists experimented with several different styles in an effort to find a distinctive Soviet style of art. Lenin wanted art to be accessible to the Russian people. On the other hand, hundreds of intellectuals, writers, and artists were exiled or executed and their work banned, among them Nikolay Gumilyov, who was shot for allegedly conspiring against the Bolshevik regime, and Yevgeny Zamyatin.[244]
+
+ The government encouraged a variety of trends. In art and literature, numerous schools, some traditional and others radically experimental, proliferated. Communist writers Maxim Gorky and Vladimir Mayakovsky were active during this time. As a means of influencing a largely illiterate society, films received encouragement from the state, and much of director Sergei Eisenstein's best work dates from this period.
+
+ During Stalin's rule, Soviet culture was characterized by the rise and domination of the government-imposed style of socialist realism, with all other trends severely repressed, with rare exceptions such as Mikhail Bulgakov's works. Many writers were imprisoned and killed.[245]
+
+ Following the Khrushchev Thaw, censorship was diminished. During this time, a distinctive period of Soviet culture developed, characterized by conformist public life and an intense focus on personal life. Greater experimentation in art forms was again permissible, resulting in the production of more sophisticated and subtly critical work. The regime loosened its emphasis on socialist realism; thus, for instance, many protagonists of the novels of author Yury Trifonov concerned themselves with problems of daily life rather than with building socialism. Underground dissident literature, known as samizdat, developed during this late period. In architecture, the Khrushchev era mostly focused on functional design as opposed to the highly decorated style of Stalin's epoch.
+
+ In the second half of the 1980s, Gorbachev's policies of perestroika and glasnost significantly expanded freedom of expression throughout the country in the media and the press.[246]
+
+ Founded on 20 July 1924 in Moscow, Sovetsky Sport was the first sports newspaper of the Soviet Union.
+
+ The Soviet Olympic Committee formed on 21 April 1951, and the IOC recognized the new body at its 45th session. In the same year, when the Soviet representative Konstantin Andrianov became an IOC member, the USSR officially joined the Olympic Movement. The 1952 Summer Olympics in Helsinki thus became the first Olympic Games for Soviet athletes.
+
+ The Soviet Union national ice hockey team won nearly every world championship and Olympic tournament between 1954 and 1991 and never failed to medal in any International Ice Hockey Federation (IIHF) tournament in which they competed.
+
+ The advent[when?] of the state-sponsored "full-time amateur athlete" of the Eastern Bloc countries further eroded the ideology of the pure amateur, as it put the self-financed amateurs of the Western countries at a disadvantage. The Soviet Union entered teams of athletes who were all nominally students, soldiers, or working in a profession – in reality, the state paid many of these competitors to train on a full-time basis.[247] Nevertheless, the IOC held to the traditional rules regarding amateurism.[248]
+
+ A 1989 report by a committee of the Australian Senate claimed that "there is hardly a medal winner at the Moscow Games, certainly not a gold medal winner...who is not on one sort of drug or another: usually several kinds. The Moscow Games might well have been called the Chemists' Games".[249]
+
+ A member of the IOC Medical Commission, Manfred Donike, privately ran additional tests with a new technique for identifying abnormal levels of testosterone by measuring its ratio to epitestosterone in urine. Twenty percent of the specimens he tested, including those from sixteen gold medalists, would have resulted in disciplinary proceedings had the tests been official. The results of Donike's unofficial tests later convinced the IOC to add his new technique to their testing protocols.[250] The first documented case of "blood doping" occurred at the 1980 Summer Olympics when a runner[who?] was transfused with two pints of blood before winning medals in the 5000 m and 10,000 m.[251]
+
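+ The ratio test itself is straightforward to express in code; below is a minimal sketch, with the cutoff of 6.0 taken as the limit commonly cited for the rule the IOC later adopted, and with the sample values invented for illustration.
+
+ # Flag specimens whose testosterone/epitestosterone (T/E) ratio exceeds a cutoff.
+ # The 6.0 default and the measurements below are illustrative assumptions.
+ def flag_specimens(specimens, cutoff=6.0):
+     return [sid for sid, (t, e) in specimens.items() if t / e > cutoff]
+
+ samples = {"A": (48.0, 6.0), "B": (30.0, 7.5)}  # hypothetical (T, E) readings
+ print(flag_specimens(samples))  # ['A']: ratio 8.0 exceeds the cutoff
+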
+ Documentation obtained in 2016 revealed the Soviet Union's plans for a statewide doping system in track and field in preparation for the 1984 Summer Olympics in Los Angeles. Dated before the decision to boycott the 1984 Games, the document detailed the existing steroids operations of the program, along with suggestions for further enhancements. Dr. Sergei Portugalov of the Institute for Physical Culture prepared the communication, directed to the Soviet Union's head of track and field. Portugalov later became one of the leading figures involved in the implementation of Russian doping before the 2016 Summer Olympics.[252]
+
+ Official Soviet environmental policy always attached great importance to actions in which human beings actively improve nature. Lenin's slogan, "Communism is Soviet power and electrification of the country!", in many respects summarizes the focus on modernization and industrial development. During the first five-year plan, launched in 1928, Stalin proceeded to industrialize the country at all costs. Values such as environmental and nature protection were completely ignored in the struggle to create a modern industrial society. After Stalin's death, the leadership focused more on environmental issues, but the basic perception of the value of environmental protection remained the same.[253]
+
+ The Soviet media always emphasized the country's vast expanse of land and its virtually indestructible natural resources, fostering the feeling that contamination and the looting of nature were not a problem. The Soviet state also firmly believed that scientific and technological progress would solve all problems. Official ideology held that under socialism environmental problems could easily be overcome, unlike in capitalist countries, where they seemingly could not be solved. The Soviet authorities had an almost unwavering belief that man could transcend nature. When the authorities had to admit in the 1980s that there were environmental problems in the USSR, they explained them by saying that socialism had not yet been fully developed; pollution in a socialist society was only a temporary anomaly that would be resolved once socialism had matured.[citation needed]
+
+ The Chernobyl disaster in 1986 was the first major accident at a civilian nuclear power plant, unparalleled in the world, in which a large quantity of radioactive isotopes was released into the atmosphere. Radioactive fallout spread relatively far. The main health problem after the accident was 4,000 new cases of thyroid cancer, which led to a relatively low number of deaths (WHO data, 2005). However, the long-term effects of the accident are unknown. Another major accident was the Kyshtym disaster.[254]
+
+ After the fall of the USSR, it was discovered that the environmental problems were greater than the Soviet authorities had admitted. The Kola Peninsula was one of the places with clear problems: around the industrial cities of Monchegorsk and Norilsk, where nickel, for example, is mined, all the forests had been killed by contamination, while the northern and other parts of Russia were affected by emissions. During the 1990s, people in the West also took an interest in the radioactive hazards of nuclear facilities, decommissioned nuclear submarines, and the processing of nuclear waste and spent nuclear fuel. It was also known in the early 1990s that the USSR had dumped radioactive material in the Barents Sea and the Kara Sea, which was later confirmed by the Russian parliament. The sinking of the K-141 Kursk submarine in 2000 further raised concerns in the West.[255] Earlier, there had been accidents involving the submarines K-19, K-8 and K-129.[citation needed]
+
+ 1918–1924  Turkestan3
+ 1918–1941  Volga German4
+ 1919–1990  Bashkir
+ 1920–1925  Kirghiz2
+ 1920–1990  Tatar
+ 1921–1990  Adjar
+ 1921–1945  Crimean
+ 1921–1991  Dagestan
+ 1921–1924  Mountain
+
+ 1921–1990  Nakhchivan
+ 1922–1991  Yakut
+ 1923–1990  Buryat1
+ 1923–1940  Karelian
+ 1924–1940  Moldavian
+ 1924–1929  Tajik
+ 1925–1992  Chuvash
+ 1925–1936  Kazak2
+ 1926–1936  Kirghiz
+
+ 1931–1991  Abkhaz
+ 1932–1992  Karakalpak
+ 1934–1990  Mordovian
+ 1934–1990  Udmurt
+ 1935–1943  Kalmyk
+ 1936–1944  Checheno-Ingush
+ 1936–1944  Kabardino-Balkar
+ 1936–1990  Komi
+ 1936–1990  Mari
+
+ 1936–1990  North Ossetian
+ 1944–1957  Kabardin
+ 1956–1991  Karelian
+ 1957–1990  Checheno-Ingush
+ 1957–1991  Kabardino-Balkar
+ 1958–1990  Kalmyk
+ 1961–1992  Tuva
+ 1990–1991  Gorno-Altai
+ 1991–1992  Crimean
en/5881.html.txt ADDED
@@ -0,0 +1,329 @@
+
+
+
+
+ The Soviet Union,[d] officially the Union of Soviet Socialist Republics[e] (USSR),[f] was a federal socialist state in Northern Eurasia that existed from 1922 to 1991. Nominally a union of multiple national Soviet republics,[g] in practice its government and economy were highly centralized until its final years. It was a one-party state governed by the Communist Party, with Moscow as its capital in its largest republic, the Russian SFSR. Other major urban centers were Leningrad, Kiev, Minsk, Tashkent, Alma-Ata and Novosibirsk. It was the largest country in the world by surface area,[18] spanning over 10,000 kilometers (6,200 mi) east to west across 11 time zones and over 7,200 kilometers (4,500 mi) north to south. Its territory included much of Eastern Europe as well as part of Northern Europe and all of Northern and Central Asia. It had five climate zones: tundra, taiga, steppes, desert, and mountains. Its diverse population was collectively known as the Soviet people.
+
+ The Soviet Union had its roots in the October Revolution of 1917, when the Bolsheviks, headed by Vladimir Lenin, overthrew the Provisional Government that had earlier replaced the monarchy. They established the Russian Soviet Republic,[h] beginning a civil war between the Bolshevik Red Army and many anti-Bolshevik forces across the former Empire, among whom the largest faction was the White Guard. The disastrous destructive effect of the war and Bolshevik policies led to 5 million deaths during the 1921–1922 famine in the Povolzhye (Volga) region. The Red Army expanded and helped local Communists take power, establishing soviets and repressing their political opponents and rebellious peasants through the policies of Red Terror and War Communism. In 1922, the Communists were victorious, forming the Soviet Union through the unification of the Russian, Transcaucasian, Ukrainian and Byelorussian republics. The New Economic Policy (NEP), introduced by Lenin, led to a partial return of the free market and private property, resulting in a period of economic recovery.
+
+ Following Lenin's death in 1924, a troika and a brief power struggle, Joseph Stalin came to power in the mid-1920s. Stalin suppressed all political opposition to his rule inside the Communist Party, committed the state ideology to Marxism–Leninism, and ended the NEP, initiating a centrally planned economy. As a result, the country underwent a period of rapid industrialization and forced collectivization, which led to significant economic growth but also created the man-made famine of 1932–1933 and expanded the Gulag labour camp system founded back in 1918. Stalin also fomented political paranoia and conducted the Great Purge to remove his opponents from the Party through the mass arbitrary arrest of many people (military leaders, Communist Party members and ordinary citizens alike), who were then sent to correctional labor camps or sentenced to death.
+
+ On 23 August 1939, after unsuccessful efforts to form an anti-fascist alliance with the Western powers, the Soviets signed a non-aggression agreement with Nazi Germany. After the start of World War II, the formally neutral Soviets invaded and annexed territories of several Eastern European states, including eastern Poland and the Baltic states. In June 1941 the Germans invaded, opening the largest and bloodiest theater of war in history. Soviet war casualties accounted for the highest proportion of any combatant in the conflict, the cost of gaining the upper hand over Axis forces in intense battles such as Stalingrad. Soviet forces eventually captured Berlin and won World War II in Europe on 9 May 1945. The territory overtaken by the Red Army became satellite states of the Eastern Bloc. The Cold War emerged in 1947 as a result of post-war Soviet dominance in Eastern Europe, where the Eastern Bloc confronted the Western Bloc, which united in the North Atlantic Treaty Organization in 1949.
+
+ Following Stalin's death in 1953, a period known as de-Stalinization and Khrushchev Thaw occurred under the leadership of Nikita Khrushchev. The country developed rapidly, as millions of peasants were moved into industrialized cities. The USSR took an early lead in the Space Race with the first ever satellite and the first human spaceflight. In the 1970s, there was a brief détente of relations with the United States, but tensions resumed when the Soviet Union deployed troops in Afghanistan in 1979. The war drained economic resources and was matched by an escalation of American military aid to Mujahideen fighters.
+
+ In the mid-1980s, the last Soviet leader, Mikhail Gorbachev, sought to further reform and liberalize the economy through his policies of glasnost and perestroika. The goal was to preserve the Communist Party while reversing economic stagnation. The Cold War ended during his tenure, and in 1989 Soviet satellite countries in Eastern Europe overthrew their respective communist regimes. This led to the rise of strong nationalist and separatist movements inside the USSR as well. Central authorities initiated a referendum—boycotted by the Baltic republics, Armenia, Georgia, and Moldova—which resulted in the majority of participating citizens voting in favor of preserving the Union as a renewed federation. In August 1991, a coup d'état was attempted by Communist Party hardliners. It failed, with Russian President Boris Yeltsin playing a high-profile role in facing down the coup, resulting in the banning of the Communist Party. On 25 December 1991, Gorbachev resigned and the remaining twelve constituent republics emerged from the dissolution of the Soviet Union as independent post-Soviet states. The Russian Federation (formerly the Russian SFSR) assumed the Soviet Union's rights and obligations and is recognized as its continued legal personality.
+
+ The USSR produced many significant social and technological achievements and innovations of the 20th century, including the world's first ministry of health, first human-made satellite, the first humans in space and the first probe to land on another planet, Venus. The country had the world's second-largest economy and the largest standing military in the world.[19][20][21] The USSR was recognized as one of the five nuclear weapons states. It was a founding permanent member of the United Nations Security Council as well as a member of the Organization for Security and Co-operation in Europe, the World Federation of Trade Unions and the leading member of the Council for Mutual Economic Assistance and the Warsaw Pact.
+
+ The word soviet is derived from the Russian word sovet (Russian: совет), meaning "council", "assembly", "advice", "harmony", "concord",[note 1] ultimately deriving from the proto-Slavic verbal stem of vět-iti ("to inform"), related to Slavic věst ("news"), English "wise", the root in "ad-vis-or" (which came to English through French), or the Dutch weten ("to know"; cf. wetenschap meaning "science"). The word sovietnik means "councillor".[22]
+
+ Some organizations in Russian history were called council (Russian: совет). In the Russian Empire, the State Council, which functioned from 1810 to 1917, was referred to as a Council of Ministers after the revolt of 1905.[22]
+
+ During the Georgian Affair, Vladimir Lenin saw in Joseph Stalin and his supporters an expression of Great Russian ethnic chauvinism, and called for these nation-states to join Russia as semi-independent parts of a greater union, which he initially named the Union of Soviet Republics of Europe and Asia (Russian: Союз Советских Республик Европы и Азии, tr. Soyuz Sovetskikh Respublik Evropy i Azii).[23] Stalin initially resisted the proposal but ultimately accepted it, although with Lenin's agreement he changed the name to the Union of Soviet Socialist Republics (USSR), albeit all the republics began as "socialist soviet" republics and did not change to the other word order until 1936. In addition, in the national languages of several republics, the word for council or conciliar in the respective language was changed only quite late to an adaptation of the Russian soviet, and in others, e.g. Ukraine, never at all.
+
+ СССР (in the Latin alphabet: SSSR) is the abbreviation of USSR in Russian, written in Cyrillic letters. The Soviets used the Cyrillic abbreviation so frequently that audiences worldwide became familiar with its meaning. Notably, both Cyrillic letters used have orthographically similar (but transliterally distinct) counterparts in Latin alphabets. Because of widespread familiarity with the Cyrillic abbreviation, Latin-alphabet users in particular almost always use the orthographically similar Latin letters C and P (as opposed to the transliteral Latin letters S and R) when rendering the USSR's native abbreviation.
+
+ After СССР, the most common short-form names for the Soviet state in Russian were Советский Союз (transliteration: Sovetskiy Soyuz), which literally means Soviet Union, and Союз ССР (transliteration: Soyuz SSR), which, after compensating for grammatical differences, essentially translates to Union of SSRs in English.
+
+ In the English language media, the state was referred to as the Soviet Union or the USSR. In other European languages, the locally translated short forms and abbreviations are usually used such as Union soviétique and URSS in French, or Sowjetunion and UdSSR in German. In the English-speaking world, the Soviet Union was also informally called Russia and its citizens Russians,[24] although that was technically incorrect since Russia was only one of the republics.[25] Such misapplications of the linguistic equivalents to the term Russia and its derivatives were frequent in other languages as well.
+
+ With an area of 22,402,200 square kilometres (8,649,500 sq mi), the Soviet Union was the world's largest country, a status retained by the Russian Federation.[26] Covering a sixth of Earth's land surface, its size was comparable to that of North America.[27] Of the other successor states, Kazakhstan ranks among the top 10 countries by land area, and Ukraine is the largest country lying entirely in Europe. The European portion accounted for a quarter of the country's area and was its cultural and economic center. The eastern part in Asia extended to the Pacific Ocean in the east and to Afghanistan in the south and, except for some areas in Central Asia, was much less populous. It spanned over 10,000 kilometres (6,200 mi) east to west across 11 time zones, and over 7,200 kilometres (4,500 mi) north to south. It had five climate zones: tundra, taiga, steppes, desert and mountains.
+
+ The USSR, like Russia today, had the world's longest border, measuring over 60,000 kilometres (37,000 mi), or one and a half circumferences of the Earth. Two-thirds of it was coastline. Across the Bering Strait lay the United States. From 1945 to 1991, the country bordered Afghanistan, China, Czechoslovakia, Finland, Hungary, Iran, Mongolia, North Korea, Norway, Poland, Romania, and Turkey.
+
+ The country's highest mountain was Communism Peak (now Ismoil Somoni Peak) in Tajikistan, at 7,495 metres (24,590 ft). The USSR also included most of the world's largest lakes: the Caspian Sea (shared with Iran) and Lake Baikal, the world's largest (by volume) and deepest freshwater lake, which is also an internal body of water in Russia.
+
+ Modern revolutionary activity in the Russian Empire began with the 1825 Decembrist revolt. Although serfdom was abolished in 1861, it was done on terms unfavorable to the peasants and served to encourage revolutionaries. A parliament—the State Duma—was established in 1906 after the Russian Revolution of 1905, but Tsar Nicholas II resisted attempts to move from absolute to a constitutional monarchy. Social unrest continued and was aggravated during World War I by military defeat and food shortages in major cities.
+
+ A spontaneous popular uprising in Petrograd, in response to the wartime decay of Russia's economy and morale, culminated in the February Revolution and the toppling of Nicholas II and the imperial government in March 1917. The tsarist autocracy was replaced by the Russian Provisional Government, which intended to conduct elections to the Russian Constituent Assembly and to continue fighting on the side of the Entente in World War I.
+
+ At the same time, workers' councils, known in Russian as "Soviets", sprang up across the country. The Bolsheviks, led by Vladimir Lenin, pushed for socialist revolution in the Soviets and on the streets. On 7 November 1917, the Red Guards stormed the Winter Palace in Petrograd, ending the rule of the Provisional Government and leaving all political power to the Soviets.[30] This event would later be officially known in Soviet bibliographies as the Great October Socialist Revolution. In December, the Bolsheviks signed an armistice with the Central Powers, though by February 1918, fighting had resumed. In March, the Soviets ended involvement in the war and signed the Treaty of Brest-Litovsk.
+
+ A long and bloody Civil War ensued between the Reds and the Whites, starting in 1917 and ending in 1923 with the Reds' victory. It included foreign intervention, the execution of the former tsar and his family, and the famine of 1921, which killed about five million people.[31] In March 1921, during a related conflict with Poland, the Peace of Riga was signed, splitting disputed territories in Belarus and Ukraine between the Republic of Poland and Soviet Russia. Soviet Russia had to resolve similar conflicts with the newly established republics of Finland, Estonia, Latvia, and Lithuania.
+
+ On 28 December 1922, a conference of plenipotentiary delegations from the Russian SFSR, the Transcaucasian SFSR, the Ukrainian SSR and the Byelorussian SSR approved the Treaty on the Creation of the USSR[32] and the Declaration of the Creation of the USSR, forming the Union of Soviet Socialist Republics.[33] These two documents were confirmed by the first Congress of Soviets of the USSR and signed by the heads of the delegations,[34] Mikhail Kalinin, Mikhail Tskhakaya, Mikhail Frunze, Grigory Petrovsky, and Alexander Chervyakov,[35] on 30 December 1922. The formal proclamation was made from the stage of the Bolshoi Theatre.
+
+ An intensive restructuring of the economy, industry and politics of the country began in the early days of Soviet power in 1917. A large part of this was done according to the Bolshevik Initial Decrees, government documents signed by Vladimir Lenin. One of the most prominent breakthroughs was the GOELRO plan, which envisioned a major restructuring of the Soviet economy based on total electrification of the country.[36] The plan became the prototype for subsequent Five-Year Plans and was fulfilled by 1931.[37] After the economic policy of "War communism" during the Russian Civil War, as a prelude to fully developing socialism in the country, the Soviet government permitted some private enterprise to coexist alongside nationalized industry in the 1920s, and total food requisition in the countryside was replaced by a food tax.
+
+ From its creation, the government in the Soviet Union was based on the one-party rule of the Communist Party (Bolsheviks).[38] The stated purpose was to prevent the return of capitalist exploitation, on the premise that the principles of democratic centralism would be the most effective in representing the people's will in a practical manner. The debate over the future of the economy provided the background for a power struggle in the years after Lenin's death in 1924. Initially, Lenin was to be replaced by a "troika" consisting of Grigory Zinoviev of the Ukrainian SSR, Lev Kamenev of the Russian SFSR, and Joseph Stalin of the Transcaucasian SFSR.
+
+ On 1 February 1924, the USSR was recognized by the United Kingdom. The same year, a Soviet Constitution was approved, legitimizing the December 1922 union. Despite the foundation of the Soviet state as a federative entity of many constituent republics, each with its own political and administrative entities, the term "Soviet Russia" – strictly applicable only to the Russian Soviet Federative Socialist Republic – was often applied to the entire country by non-Soviet writers and politicians.
+
+ On 3 April 1922, Stalin was named the General Secretary of the Communist Party of the Soviet Union. Lenin had appointed Stalin the head of the Workers' and Peasants' Inspectorate, which gave Stalin considerable power. By gradually consolidating his influence and isolating and outmanoeuvring his rivals within the party, Stalin became the undisputed leader of the country and, by the end of the 1920s, established totalitarian rule. In October 1927, Zinoviev and Leon Trotsky were expelled from the Central Committee and forced into exile.
+
+ In 1928, Stalin introduced the first five-year plan for building a socialist economy. In place of the internationalism expressed by Lenin throughout the Revolution, it aimed to build Socialism in One Country. In industry, the state assumed control over all existing enterprises and undertook an intensive program of industrialization. In agriculture, rather than adhering to the "lead by example" policy advocated by Lenin,[39] forced collectivization of farms was implemented all over the country.
+
+ Famines ensued as a result, causing deaths estimated at three to seven million; surviving kulaks were persecuted, and many were sent to Gulags to do forced labor.[40][41] Social upheaval continued in the mid-1930s. Despite the turmoil of the mid-to-late 1930s, the country developed a robust industrial economy in the years preceding World War II.
+
+ Closer cooperation between the USSR and the West developed in the early 1930s. From 1932 to 1934, the country participated in the World Disarmament Conference. In 1933, diplomatic relations between the United States and the USSR were established when in November, the newly elected President of the United States, Franklin D. Roosevelt, chose to recognize Stalin's Communist government formally and negotiated a new trade agreement between the two countries.[42] In September 1934, the country joined the League of Nations. After the Spanish Civil War broke out in 1936, the USSR actively supported the Republican forces against the Nationalists, who were supported by Fascist Italy and Nazi Germany.[43]
+
+ In December 1936, Stalin unveiled a new constitution that was praised by supporters around the world as the most democratic constitution imaginable, though there was some skepticism.[i] Stalin's Great Purge resulted in the detainment or execution of many "Old Bolsheviks" who had participated in the October Revolution with Lenin. According to declassified Soviet archives, the NKVD arrested more than one and a half million people in 1937 and 1938, of whom 681,692 were shot.[45] Over those two years, there were an average of over one thousand executions a day.[46][j]
+
+ In 1939, the Soviet Union made a dramatic shift toward Nazi Germany. Almost a year after Britain and France had concluded the Munich Agreement with Germany, the Soviet Union made agreements with Germany as well, both military and economic, following extensive talks. The two countries concluded the Molotov–Ribbentrop Pact and the German–Soviet Commercial Agreement in August 1939. The former made possible the Soviet occupation of Lithuania, Latvia, Estonia, Bessarabia, northern Bukovina, and eastern Poland. In late November, unable to coerce the Republic of Finland by diplomatic means into moving its border 25 kilometres (16 mi) back from Leningrad, Stalin ordered the invasion of Finland. In the east, the Soviet military won several decisive victories during border clashes with the Empire of Japan in 1938 and 1939. However, in April 1941, the USSR signed the Soviet–Japanese Neutrality Pact with Japan, recognizing the territorial integrity of Manchukuo, a Japanese puppet state.
+
+ Germany broke the Molotov–Ribbentrop Pact and invaded the Soviet Union on 22 June 1941, starting what was known in the USSR as the Great Patriotic War. The Red Army stopped the seemingly invincible German Army at the Battle of Moscow, aided by an unusually harsh winter. The Battle of Stalingrad, which lasted from late 1942 to early 1943, dealt a severe blow to Germany from which it never fully recovered, and became a turning point in the war. After Stalingrad, Soviet forces drove through Eastern Europe to Berlin before Germany surrendered in 1945. The German Army suffered 80% of its military deaths on the Eastern Front.[50] Harry Hopkins, a close foreign policy advisor to Franklin D. Roosevelt, spoke on 10 August 1943 of the USSR's decisive role in the war.[k]
+
+ In 1945, the USSR, in fulfilment of its agreement with the Allies at the Yalta Conference, denounced the Soviet–Japanese Neutrality Pact in April[52] and invaded Manchukuo and other Japan-controlled territories on 9 August.[53] This conflict ended with a decisive Soviet victory, contributing to the unconditional surrender of Japan and the end of World War II.
+
+ The USSR suffered greatly in the war, losing around 27 million people.[54] Approximately 2.8 million Soviet POWs died of starvation, mistreatment, or executions in just eight months of 1941–42.[55][56] During the war, the country together with the United States, the United Kingdom and China were considered the Big Four Allied powers,[57] and later became the Four Policemen that formed the basis of the United Nations Security Council.[58] It emerged as a superpower in the post-war period. Once denied diplomatic recognition by the Western world, the USSR had official relations with practically every country by the late 1940s. A member of the United Nations at its foundation in 1945, the country became one of the five permanent members of the United Nations Security Council, which gave it the right to veto any of its resolutions.
+
+ During the immediate post-war period, the Soviet Union rebuilt and expanded its economy, while maintaining its strictly centralized control. It took effective control over most of the countries of Eastern Europe (except Yugoslavia and later Albania), turning them into satellite states. The USSR bound its satellite states in a military alliance, the Warsaw Pact, in 1955, and an economic organization, Council for Mutual Economic Assistance or Comecon, a counterpart to the European Economic Community (EEC), from 1949 to 1991.[59] The USSR concentrated on its own recovery, seizing and transferring most of Germany's industrial plants, and it exacted war reparations from East Germany, Hungary, Romania, and Bulgaria using Soviet-dominated joint enterprises. It also instituted trading arrangements deliberately designed to favor the country. Moscow controlled the Communist parties that ruled the satellite states, and they followed orders from the Kremlin.[m] Later, the Comecon supplied aid to the eventually victorious Communist Party of China, and its influence grew elsewhere in the world. Fearing its ambitions, the Soviet Union's wartime allies, the United Kingdom and the United States, became its enemies. In the ensuing Cold War, the two sides clashed indirectly in proxy wars.
+
+ Stalin died on 5 March 1953. Without a mutually agreeable successor, the highest Communist Party officials initially opted to rule the Soviet Union jointly through a troika headed by Georgy Malenkov. This did not last, however, and Nikita Khrushchev eventually won the ensuing power struggle by the mid-1950s. In 1956, he denounced Stalin's use of repression and proceeded to ease controls over the party and society. This was known as de-Stalinization.
+
+ Moscow considered Eastern Europe to be a critically vital buffer zone for the forward defence of its western borders, in case of another major invasion such as the German invasion of 1941. For this reason, the USSR sought to cement its control of the region by transforming the Eastern European countries into satellite states, dependent upon and subservient to its leadership. Soviet military force was used to suppress anti-Stalinist uprisings in Hungary and Poland in 1956.
+
+ In the late 1950s, a confrontation with China regarding the Soviet rapprochement with the West, and what Mao Zedong perceived as Khrushchev's revisionism, led to the Sino–Soviet split. This resulted in a break throughout the global Marxist–Leninist movement, with the governments in Albania, Cambodia and Somalia choosing to ally with China.
+
+ During this period of the late 1950s and early 1960s, the USSR continued to realize scientific and technological exploits in the Space Race, rivaling the United States: launching the first artificial satellite, Sputnik 1, in 1957; the first animal in orbit, the dog Laika, in 1957; the first human in space, Yuri Gagarin, in 1961; the first woman in space, Valentina Tereshkova, in 1963; Alexei Leonov, the first person to walk in space, in 1965; the first soft landing on the Moon by spacecraft Luna 9 in 1966; and the first Moon rovers, Lunokhod 1 and Lunokhod 2.[61]
+
+ Khrushchev initiated "The Thaw", a complex shift in political, cultural and economic life in the country. This included some openness and contact with other nations and new social and economic policies with more emphasis on commodity goods, allowing a dramatic rise in living standards while maintaining high levels of economic growth. Censorship was relaxed as well. Khrushchev's reforms in agriculture and administration, however, were generally unproductive. In 1962, he precipitated a crisis with the United States over the Soviet deployment of nuclear missiles in Cuba. An agreement was made with the United States to remove nuclear missiles from both Cuba and Turkey, concluding the crisis. This event caused Khrushchev much embarrassment and loss of prestige, resulting in his removal from power in 1964.
+
+ Following the ousting of Khrushchev, another period of collective leadership ensued, consisting of Leonid Brezhnev as General Secretary, Alexei Kosygin as Premier and Nikolai Podgorny as Chairman of the Presidium, lasting until Brezhnev established himself in the early 1970s as the preeminent Soviet leader.
+
+ In 1968, the Soviet Union and Warsaw Pact allies invaded Czechoslovakia to halt the Prague Spring reforms. In the aftermath, Brezhnev justified the invasion along with the earlier invasions of Eastern European states by introducing the Brezhnev Doctrine, which claimed the right of the Soviet Union to violate the sovereignty of any country that attempted to replace Marxism–Leninism with capitalism.
+
+ Brezhnev presided over détente with the West that resulted in arms control treaties (SALT I, SALT II, the Anti-Ballistic Missile Treaty) while at the same time building up Soviet military might.
+
+ In October 1977, the third Soviet Constitution was unanimously adopted. The prevailing mood of the Soviet leadership at the time of Brezhnev's death in 1982 was one of aversion to change. The long period of Brezhnev's rule had come to be dubbed one of "standstill", with an ageing and ossified top political leadership. This period is also known as the Era of Stagnation, a period of adverse economic, political, and social effects in the country, which began during the rule of Brezhnev and continued under his successors Yuri Andropov and Konstantin Chernenko.
+
+ In late 1979, the Soviet Union's military intervened in the ongoing civil war in neighboring Afghanistan, effectively ending a détente with the West.
+
+ Two developments dominated the decade that followed: the increasingly apparent crumbling of the Soviet Union's economic and political structures, and the patchwork attempts at reforms to reverse that process. Kenneth S. Deffeyes argued in Beyond Oil that the Reagan administration encouraged Saudi Arabia to lower the price of oil to the point where the Soviets could not make a profit selling their oil, which resulted in the depletion of the country's hard currency reserves.[62]
+
+ Brezhnev's next two successors, transitional figures with deep roots in his tradition, did not last long. Yuri Andropov was 68 years old and Konstantin Chernenko 72 when they assumed power; both died in less than two years. In an attempt to avoid a third short-lived leader, in 1985, the Soviets turned to the next generation and selected Mikhail Gorbachev. He made significant changes in the economy and party leadership, a program of reforms called perestroika. His policy of glasnost freed public access to information after decades of heavy government censorship. Gorbachev also moved to end the Cold War. In 1988, the USSR abandoned its war in Afghanistan and began to withdraw its forces. In the following year, Gorbachev refused to interfere in the internal affairs of the Soviet satellite states, which paved the way for the Revolutions of 1989. With the tearing down of the Berlin Wall and with East and West Germany pursuing unification, the Iron Curtain between the West and Soviet-controlled regions came down.
+
+ At the same time, the Soviet republics started legal moves towards potentially declaring sovereignty over their territories, citing the freedom to secede in Article 72 of the USSR constitution.[63] On 7 April 1990, a law was passed allowing a republic to secede if more than two-thirds of its residents voted for it in a referendum.[64] Many held their first free elections in the Soviet era for their own national legislatures in 1990. Many of these legislatures proceeded to produce legislation contradicting the Union laws in what was known as the "War of Laws". In 1989, the Russian SFSR convened a newly elected Congress of People's Deputies. Boris Yeltsin was elected its chairman. On 12 June 1990, the Congress declared Russia's sovereignty over its territory and proceeded to pass laws that attempted to supersede some of the Soviet laws. After a landslide victory of Sąjūdis in Lithuania, that country declared its independence restored on 11 March 1990.
+
+ A referendum for the preservation of the USSR was held on 17 March 1991 in nine republics (the remainder having boycotted the vote), with the majority of the population in those republics voting for preservation of the Union. The referendum gave Gorbachev a minor boost. In the summer of 1991, the New Union Treaty, which would have turned the country into a much looser Union, was agreed upon by eight republics. The signing of the treaty, however, was interrupted by the August Coup—an attempted coup d'état by hardline members of the government and the KGB who sought to reverse Gorbachev's reforms and reassert the central government's control over the republics. After the coup collapsed, Yeltsin was seen as a hero for his decisive actions, while Gorbachev's power was effectively ended. The balance of power tipped significantly towards the republics. In August 1991, Latvia and Estonia immediately declared the restoration of their full independence (following Lithuania's 1990 example). Gorbachev resigned as general secretary in late August, and soon afterwards, the party's activities were indefinitely suspended—effectively ending its rule. By the fall, Gorbachev could no longer influence events outside Moscow, and he was being challenged even there by Yeltsin, who had been elected President of Russia in July 1991.
+
+ The remaining 12 republics continued discussing new, increasingly loose models of the Union. However, by December all except Russia and Kazakhstan had formally declared independence. During this time, Yeltsin took over what remained of the Soviet government, including the Moscow Kremlin. The final blow was struck on 1 December when Ukraine, the second-most powerful republic, voted overwhelmingly for independence. Ukraine's secession ended any realistic chance of the country staying together even on a limited scale.
+
+ On 8 December 1991, the presidents of Russia, Ukraine and Belarus (formerly Byelorussia), signed the Belavezha Accords, which declared the Soviet Union dissolved and established the Commonwealth of Independent States (CIS) in its place. While doubts remained over the authority of the accords to do this, on 21 December 1991, the representatives of all Soviet republics except Georgia signed the Alma-Ata Protocol, which confirmed the accords. On 25 December 1991, Gorbachev resigned as the President of the USSR, declaring the office extinct. He turned the powers that had been vested in the presidency over to Yeltsin. That night, the Soviet flag was lowered for the last time, and the Russian tricolor was raised in its place.
+
+ The following day, the Supreme Soviet, the highest governmental body, voted both itself and the country out of existence. This is generally recognized as marking the official, final dissolution of the Soviet Union as a functioning state, and the end of the Cold War.[65] The Soviet Army initially remained under overall CIS command but was soon absorbed into the different military forces of the newly independent states. The few remaining Soviet institutions that had not been taken over by Russia ceased to function by the end of 1991.
+
+ Following the dissolution, Russia was internationally recognized[66] as its legal successor on the international stage. To that end, Russia voluntarily accepted all Soviet foreign debt and claimed Soviet overseas properties as its own. Under the 1992 Lisbon Protocol, Russia also agreed to receive all nuclear weapons remaining in the territory of other former Soviet republics. Since then, the Russian Federation has assumed the Soviet Union's rights and obligations. Ukraine has refused to recognize exclusive Russian claims to succession of the USSR and claimed such status for Ukraine as well, which was codified in Articles 7 and 8 of its 1991 law On Legal Succession of Ukraine. Since its independence in 1991, Ukraine has continued to pursue claims against Russia in foreign courts, seeking to recover its share of the foreign property that was owned by the USSR.
+
+ The dissolution was followed by a severe drop in economic and social conditions in post-Soviet states,[67][68] including a rapid increase in poverty,[69][70][71][72] crime,[73][74] corruption,[75][76] unemployment,[77] homelessness,[78][79] rates of disease,[80][81][82] demographic losses,[83] income inequality and the rise of an oligarchical class,[84][69] along with decreases in calorie intake, life expectancy, adult literacy, and income.[85] Between 1988/1989 and 1993/1995, the Gini ratio increased by an average of 9 points for all former socialist countries.[69] The economic shocks that accompanied wholesale privatization were associated with sharp increases in mortality. Data shows Russia, Kazakhstan, Latvia, Lithuania and Estonia saw a tripling of unemployment and a 42% increase in male death rates between 1991 and 1994.[86][87] In the following decades, only five or six of the post-communist states are on a path to joining the wealthy capitalist West while most are falling behind, some to such an extent that it will take over fifty years to catch up to where they were before the fall of the Soviet Bloc.[88][89]
+
+ In summing up the international ramifications of these events, Vladislav Zubok stated: "The collapse of the Soviet empire was an event of epochal geopolitical, military, ideological, and economic significance."[90] Before the dissolution, the country had maintained its status as one of the world's two superpowers for four decades after World War II through its hegemony in Eastern Europe, military strength, economic strength, aid to developing countries, and scientific research, especially in space technology and weaponry.[91]
+
+ The analysis of the succession of states for the 15 post-Soviet states is complex. The Russian Federation is seen as the legal continuator state and is for most purposes the heir to the Soviet Union. It retained ownership of all former Soviet embassy properties, as well as the old Soviet UN membership and permanent membership on the Security Council.
+
+ Of the two other co-founding states of the USSR at the time of the dissolution, Ukraine was the only one that passed laws, similar to Russia's, declaring itself a state-successor of both the Ukrainian SSR and the USSR.[92] Soviet treaties laid the groundwork for Ukraine's future foreign agreements, and Ukraine agreed to undertake 16.37% of the debts of the Soviet Union in exchange for its share of the USSR's foreign property. Although its position was difficult at the time, Russia's claim to be the "single continuation of the USSR", which became widely accepted in the West, together with constant pressure from Western countries, allowed Russia to dispose of Soviet state property abroad and to conceal information about it. Because of this, Ukraine never ratified the "zero option" agreement that the Russian Federation had signed with the other former Soviet republics, as it denied the disclosure of information about the Soviet gold reserves and Diamond Fund.[93][94] The dispute between the two former republics over former Soviet property and assets is still ongoing:
+
+ The conflict is unsolvable. We can keep offering Kiev handouts in the hope of "solving the problem", but it will not be solved. Going to court is also pointless: for a number of European countries this is a political issue, and they will rule clearly in whose favor. What to do in this situation is an open question. We must search for non-trivial solutions. But we must remember that in 2014, at the initiative of the then Ukrainian Prime Minister Yatsenyuk, litigation with Russia resumed in 32 countries.
+
+ A similar situation occurred with the restitution of cultural property. Although on 14 February 1992 Russia and the other former Soviet republics signed the agreement "On the return of cultural and historic property to the origin states" in Minsk, its implementation was halted by the Russian State Duma, which eventually passed the "Federal Law on Cultural Valuables Displaced to the USSR as a Result of the Second World War and Located on the Territory of the Russian Federation", making restitution currently impossible.[96]
+
+ There are additionally four states that claim independence from the other internationally recognised post-Soviet states but possess limited international recognition: Abkhazia, Nagorno-Karabakh, South Ossetia and Transnistria. The Chechen separatist movement of the Chechen Republic of Ichkeria lacks any international recognition.
+
+ During his rule, Stalin always made the final policy decisions. Otherwise, Soviet foreign policy was set by the Commission on the Foreign Policy of the Central Committee of the Communist Party of the Soviet Union, or by the party's highest body, the Politburo. Operations were handled by the separate Ministry of Foreign Affairs, known until 1946 as the People's Commissariat for Foreign Affairs (or Narkomindel). The most influential spokesmen were Georgy Chicherin (1872–1936), Maxim Litvinov (1876–1951), Vyacheslav Molotov (1890–1986), Andrey Vyshinsky (1883–1954) and Andrei Gromyko (1909–1989). Intellectuals were based in the Moscow State Institute of International Relations.[97]
+
+ The Communist leadership of the Soviet Union intensely debated foreign policy issues and changed direction several times. Even after Stalin assumed dictatorial control in the late 1920s, there were debates, and he frequently changed positions.[106]
+
+ During the country's early period, it was assumed that Communist revolutions would break out soon in every major industrial country, and it was the Soviet responsibility to assist them. The Comintern was the weapon of choice. A few revolutions did break out, but they were quickly suppressed; the longest-lasting, the Hungarian Soviet Republic, survived only from 21 March 1919 to 1 August 1919. The Russian Bolsheviks were in no position to give any help.
+
+ By 1921, Lenin, Trotsky, and Stalin realized that capitalism had stabilized itself in Europe and there would not be any widespread revolutions anytime soon. It became the duty of the Russian Bolsheviks to protect what they had in Russia, and avoid military confrontations that might destroy their bridgehead. Russia was now a pariah state, along with Germany. The two came to terms in 1922 with the Treaty of Rapallo that settled long-standing grievances. At the same time, the two countries secretly set up training programs for the illegal German army and air force operations at hidden camps in the USSR.[107]
+
+ Moscow eventually stopped threatening other states, and instead worked to open peaceful relationships through trade and diplomatic recognition. The United Kingdom dismissed the warnings of Winston Churchill and a few others about a continuing communist threat, and opened trade relations and de facto diplomatic recognition in 1922. There was hope for a settlement of the pre-war tsarist debts, but it was repeatedly postponed. Formal recognition came when the new Labour Party came to power in 1924.[108] All the other countries followed suit in opening trade relations. Henry Ford opened large-scale business relations with the Soviets in the late 1920s, hoping that it would lead to long-term peace. Finally, in 1933, the United States officially recognized the USSR, a decision backed by public opinion and especially by US business interests that expected an opening of a new profitable market.[109]
+
+ In the late 1920s and early 1930s, Stalin ordered Communist parties across the world to strongly oppose non-communist political parties, labor unions or other organizations on the left. Stalin reversed himself in 1934 with the Popular Front program that called on all Communist parties to join together with all anti-Fascist political, labor, and organizational forces that were opposed to fascism, especially of the Nazi variety.[110][111]
+
+ In 1939, half a year after the Munich Agreement, the USSR attempted to form an anti-Nazi alliance with France and Britain.[112] Adolf Hitler proposed a better deal, which would give the USSR control over much of Eastern Europe through the Molotov–Ribbentrop Pact. In September, Germany invaded Poland, and the USSR also invaded later that month, resulting in the partition of Poland. In response, Britain and France declared war on Germany, marking the beginning of World War II.[113]
+
+ There were three power hierarchies in the Soviet Union: the legislature represented by the Supreme Soviet of the Soviet Union, the government represented by the Council of Ministers, and the Communist Party of the Soviet Union (CPSU), the only legal party and the final policymaker in the country.[114]
+
+ At the top of the Communist Party was the Central Committee, elected at Party Congresses and Conferences. In turn, the Central Committee voted for a Politburo (called the Presidium from 1952 to 1966), Secretariat and the General Secretary (First Secretary from 1953 to 1966), the de facto highest office in the Soviet Union.[115] Depending on the degree of power consolidation, it was either the Politburo as a collective body or the General Secretary, who always was one of the Politburo members, that effectively led the party and the country[116] (except for the period of the highly personalized authority of Stalin, exercised directly through his position in the Council of Ministers rather than the Politburo after 1941).[117] They were not controlled by the general party membership, as the key principle of the party organization was democratic centralism, demanding strict subordination to higher bodies, and elections went uncontested, endorsing the candidates proposed from above.[118]
+
+ The Communist Party maintained its dominance over the state mainly through its control over the system of appointments. All senior government officials and most deputies of the Supreme Soviet were members of the CPSU. Of the party heads themselves, Stalin (1941–1953) and Khrushchev (1958–1964) were Premiers. Upon the forced retirement of Khrushchev, the party leader was prohibited from this kind of double membership,[119] but the later General Secretaries for at least some part of their tenure occupied the mostly ceremonial position of Chairman of the Presidium of the Supreme Soviet, the nominal head of state. The institutions at lower levels were overseen and at times supplanted by primary party organizations.[120]
+
+ However, in practice the degree of control the party was able to exercise over the state bureaucracy, particularly after the death of Stalin, was far from total, with the bureaucracy pursuing different interests that were at times in conflict with the party.[121] Nor was the party itself monolithic from top to bottom, although factions were officially banned.[122]
+
+ The Supreme Soviet (successor of the Congress of Soviets and Central Executive Committee) was nominally the highest state body for most of the Soviet history,[123] at first acting as a rubber stamp institution, approving and implementing all decisions made by the party. However, its powers and functions were extended in the late 1950s, 1960s and 1970s, including the creation of new state commissions and committees. It gained additional powers relating to the approval of the Five-Year Plans and the government budget.[124] The Supreme Soviet elected a Presidium to wield its power between plenary sessions,[125] ordinarily held twice a year, and appointed the Supreme Court,[126] the Procurator General[127] and the Council of Ministers (known before 1946 as the Council of People's Commissars), headed by the Chairman (Premier) and managing an enormous bureaucracy responsible for the administration of the economy and society.[125] State and party structures of the constituent republics largely emulated the structure of the central institutions, although the Russian SFSR, unlike the other constituent republics, for most of its history had no republican branch of the CPSU, being ruled directly by the union-wide party until 1990. Local authorities were organized likewise into party committees, local Soviets and executive committees. While the state system was nominally federal, the party was unitary.[128]
+
+ The state security police (the KGB and its predecessor agencies) played an important role in Soviet politics. It was instrumental in the Great Purge,[129] but was brought under strict party control after Stalin's death. Under Yuri Andropov, the KGB engaged in the suppression of political dissent and maintained an extensive network of informers, reasserting itself as a political actor to some extent independent of the party-state structure,[130] culminating in the anti-corruption campaign targeting high-ranking party officials in the late 1970s and early 1980s.[131]
+
+ Successive constitutions, promulgated in 1918, 1924, 1936 and 1977,[132] did not limit state power. No formal separation of powers existed between the Party, the Supreme Soviet and the Council of Ministers[133] that represented executive and legislative branches of the government. The system was governed less by statute than by informal conventions, and no settled mechanism of leadership succession existed. Bitter and at times deadly power struggles took place in the Politburo after the deaths of Lenin[134] and Stalin,[135] as well as after Khrushchev's dismissal,[136] itself due to a decision by both the Politburo and the Central Committee.[137] All leaders of the Communist Party before Gorbachev died in office, except Georgy Malenkov[138] and Khrushchev, both dismissed from the party leadership amid internal struggle within the party.[137]
+
+ Between 1988 and 1990, facing considerable opposition, Mikhail Gorbachev enacted reforms shifting power away from the highest bodies of the party and making the Supreme Soviet less dependent on them. The Congress of People's Deputies was established, the majority of whose members were directly elected in competitive elections held in March 1989. The Congress now elected the Supreme Soviet, which became a full-time parliament, and much stronger than before. For the first time since the 1920s, it refused to rubber stamp proposals from the party and Council of Ministers.[139] In 1990, Gorbachev introduced and assumed the position of the President of the Soviet Union, concentrated power in his executive office, independent of the party, and subordinated the government,[140] now renamed the Cabinet of Ministers of the USSR, to himself.[141]
+
+ Tensions grew between the Union-wide authorities under Gorbachev, reformists led in Russia by Boris Yeltsin and controlling the newly elected Supreme Soviet of the Russian SFSR, and communist hardliners. On 19–21 August 1991, a group of hardliners staged a coup attempt. The coup failed, and the State Council of the Soviet Union became the highest organ of state power "in the period of transition".[142] Gorbachev resigned as General Secretary, only remaining President for the final months of the existence of the USSR.[143]
+
+ The judiciary was not independent of the other branches of government. The Supreme Court supervised the lower courts (People's Court) and applied the law as established by the constitution or as interpreted by the Supreme Soviet. The Constitutional Oversight Committee reviewed the constitutionality of laws and acts. The Soviet Union used the inquisitorial system of Roman law, where the judge, procurator, and defence attorney collaborate to establish the truth.[144]
+
+ Constitutionally, the USSR was a federation of constituent Union Republics, which were either unitary states, such as Ukraine or Byelorussia (SSRs), or federations, such as Russia or Transcaucasia (SFSRs),[114] all four being the founding republics who signed the Treaty on the Creation of the USSR in December 1922. In 1924, during the national delimitation in Central Asia, Uzbekistan and Turkmenistan were formed from parts of Russia's Turkestan ASSR and two Soviet dependencies, the Khorezm and Bukharan SSRs. In 1929, Tajikistan was split off from the Uzbekistan SSR. With the constitution of 1936, the Transcaucasian SFSR was dissolved, resulting in its constituent republics of Armenia, Georgia and Azerbaijan being elevated to Union Republics, while Kazakhstan and Kirghizia were split off from the Russian SFSR, receiving the same status.[145] In August 1940, Moldavia was formed from parts of Ukraine and of Bessarabia and northern Bukovina. Estonia, Latvia and Lithuania (SSRs) were also admitted into the union; their incorporation was not recognized by most of the international community and was considered an illegal occupation. Karelia was split off from Russia as a Union Republic in March 1940 and was reabsorbed in 1956. Between July 1956 and September 1991, there were 15 union republics (see map below).[146]
+
+ While nominally a union of equals, in practice the Soviet Union was dominated by Russians. The domination was so absolute that for most of its existence, the country was commonly (but incorrectly) referred to as "Russia". While the RSFSR was technically only one republic within the larger union, it was by far the largest (both in terms of population and area), most powerful, most developed, and the industrial center of the Soviet Union. Historian Matthew White wrote that it was an open secret that the country's federal structure was "window dressing" for Russian dominance. For that reason, the people of the USSR were usually called "Russians", not "Soviets", since "everyone knew who really ran the show".[147]
+
+ Under the Military Law of September 1925, the Soviet Armed Forces consisted of the Land Forces, the Air Force, the Navy, the Joint State Political Directorate (OGPU), and the Internal Troops.[148] The OGPU later became independent and in 1934 joined the NKVD, and so its internal troops were under the joint leadership of the defense and internal commissariats. After World War II, the Strategic Missile Forces (1959), Air Defense Forces (1948) and National Civil Defense Forces (1970) were formed, ranking first, third, and sixth in the official Soviet system of importance (the Ground Forces were second, the Air Force fourth, and the Navy fifth).
+
+ The army had the greatest political influence. In 1989, it numbered two million soldiers, divided among 150 motorized and 52 armored divisions. Until the early 1960s, the Soviet navy was a rather small military branch, but after the Cuban Missile Crisis, under the leadership of Sergei Gorshkov, it expanded significantly, becoming known for its battlecruisers and submarines. In 1989, it numbered 500,000 men. The Soviet Air Force focused on a fleet of strategic bombers whose wartime mission was to destroy enemy infrastructure and nuclear capacity; it also had a number of fighters and tactical bombers to support the army. The Strategic Missile Forces had more than 1,400 intercontinental ballistic missiles (ICBMs), deployed among 28 bases and 300 command centers.
+
+ In the post-war period, the Soviet Army was directly involved in several military operations abroad. These included the suppression of the uprising in East Germany (1953), the Hungarian revolution (1956) and the invasion of Czechoslovakia (1968). The Soviet Union also participated in the war in Afghanistan between 1979 and 1989.
+
+ In the Soviet Union, general conscription applied.
+
+ At the end of the 1950s, with the help of engineers and technologies captured and imported from defeated Nazi Germany, the Soviets constructed the first satellite, Sputnik 1, and thus overtook the United States. This was followed by other successful satellites, and experimental dogs were sent into space. On April 12, 1961, the first cosmonaut, Yuri Gagarin, was sent into space; he orbited the Earth once and landed successfully in the Kazakh steppe. At that time, the first plans for space shuttles and orbital stations were drawn up in Soviet design offices, but in the end personal disputes between designers and management prevented their realization.
+
+ The first big setback for the USSR was the American Moon landing, to which the Soviets were unable to respond in time with a project of their own. In the 1970s, more specific proposals for the design of a space shuttle began to emerge, but shortcomings, especially in the electronics industry (rapid overheating of electronics), postponed the program until the end of the 1980s. The first shuttle, the Buran, flew in 1988, but without a human crew. Another shuttle, Ptichka, was still under construction when the shuttle project was canceled in 1991. The super-heavy rocket developed to launch them, Energia, the most powerful in the world, remains unused today.
+
+ In the late 1980s, the Soviet Union managed to build the Mir orbital station. Built on the design of the Salyut stations, its tasks were purely civilian and research-oriented. In the 1990s, after the US Skylab program had been abandoned for lack of funds, it was the only orbital station in operation. Gradually, other modules were added to it, including American ones. However, the technical condition of the station deteriorated rapidly, especially after a fire broke out on board, so in 2001 it was decided to deorbit it into the atmosphere, where it burned up.
+
+ The Soviet Union adopted a command economy, whereby production and distribution of goods were centralized and directed by the government. The first Bolshevik experience with a command economy was the policy of War communism, which involved the nationalization of industry, centralized distribution of output, coercive requisition of agricultural production, and attempts to eliminate money circulation, private enterprises and free trade. After the severe economic collapse, Lenin replaced war communism with the New Economic Policy (NEP) in 1921, legalizing free trade and private ownership of small businesses. The economy quickly recovered as a result.[149]
+
+ After a long debate among the members of Politburo about the course of economic development, by 1928–1929, upon gaining control of the country, Stalin abandoned the NEP and pushed for full central planning, starting forced collectivization of agriculture and enacting draconian labor legislation. Resources were mobilized for rapid industrialization, which significantly expanded Soviet capacity in heavy industry and capital goods during the 1930s.[149] The primary motivation for industrialization was preparation for war, mostly due to distrust of the outside capitalist world.[150] As a result, the USSR was transformed from a largely agrarian economy into a great industrial power, leading the way for its emergence as a superpower after World War II.[151] The war caused extensive devastation of the Soviet economy and infrastructure, which required massive reconstruction.[152]
+
+ By the early 1940s, the Soviet economy had become relatively self-sufficient; for most of the period until the creation of Comecon, only a tiny share of domestic products was traded internationally.[153] After the creation of the Eastern Bloc, external trade rose rapidly. However, the influence of the world economy on the USSR was limited by fixed domestic prices and a state monopoly on foreign trade.[154] Grain and sophisticated consumer manufactures became major import articles from around the 1960s.[153] During the arms race of the Cold War, the Soviet economy was burdened by military expenditures, heavily lobbied for by a powerful bureaucracy dependent on the arms industry. At the same time, the USSR became the largest arms exporter to the Third World. Significant amounts of Soviet resources during the Cold War were allocated in aid to the other socialist states.[153]
+
+ From the 1930s until its dissolution in late 1991, the way the Soviet economy operated remained essentially unchanged. The economy was formally directed by central planning, carried out by Gosplan and organized in five-year plans. However, in practice, the plans were highly aggregated and provisional, subject to ad hoc intervention by superiors. All critical economic decisions were taken by the political leadership. Allocated resources and plan targets were usually denominated in rubles rather than in physical goods. Credit was discouraged, but widespread. The final allocation of output was achieved through relatively decentralized, unplanned contracting. Although in theory prices were legally set from above, in practice they were often negotiated, and informal horizontal links (e.g. between producer factories) were widespread.[149]
+
+ A number of basic services were state-funded, such as education and health care. In the manufacturing sector, heavy industry and defence were prioritized over consumer goods.[155] Consumer goods, particularly outside large cities, were often scarce, of poor quality and limited variety. Under the command economy, consumers had almost no influence on production, and the changing demands of a population with growing incomes could not be satisfied by supplies at rigidly fixed prices.[156] A massive unplanned second economy grew up at low levels alongside the planned one, providing some of the goods and services that the planners could not. The legalization of some elements of the decentralized economy was attempted with the reform of 1965.[149]
+
+ Although statistics of the Soviet economy are notoriously unreliable and its economic growth difficult to estimate precisely,[157][158] by most accounts, the economy continued to expand until the mid-1980s. During the 1950s and 1960s, it had comparatively high growth and was catching up to the West.[159] However, after 1970, the rate of growth, while still positive, declined much more quickly and consistently than in other countries, despite a rapid increase in the capital stock (the rate of capital increase was surpassed only by Japan).[149]
+
+ Overall, the growth rate of per capita income in the Soviet Union between 1960 and 1989 was slightly above the world average (based on 102 countries).[citation needed] According to Stanley Fischer and William Easterly, growth could have been faster. By their calculation, per capita income in 1989 should have been twice as high as it was, considering the amount of investment, education and population. The authors attribute this poor performance to the low productivity of capital.[160] Steven Rosefielde states that the standard of living declined due to Stalin's despotism. While there was a brief improvement after his death, it lapsed into stagnation.[161]
+
+ In 1987, Mikhail Gorbachev attempted to reform and revitalize the economy with his program of perestroika. His policies relaxed state control over enterprises but did not replace it with market incentives, resulting in a sharp decline in output. The economy, already suffering from reduced petroleum export revenues, started to collapse. Prices were still fixed, and property was still largely state-owned until after the country's dissolution.[149][156] For most of the period after World War II until its collapse, Soviet GDP (PPP) was the second-largest in the world, and third during the second half of the 1980s,[162] although on a per-capita basis, it was behind that of First World countries.[163] Compared to countries with similar per-capita GDP in 1928, the Soviet Union experienced significant growth.[164]
+
+ In 1990, the country had a Human Development Index of 0.920, placing it in the "high" category of human development. It was the third-highest in the Eastern Bloc, behind Czechoslovakia and East Germany, and 25th in the world out of the 130 countries measured.[165]
+
+ The Soviet Union's fuel requirements declined from the 1970s to the 1980s,[166] both per ruble of gross social product and per ruble of industrial product. At the start, this decline was very rapid, but it gradually slowed between 1970 and 1975; from 1975 to 1980, the improvement was slower still, at only 2.6%.[167] David Wilson, a historian, believed that the gas industry would account for 40% of Soviet fuel production by the end of the century. His theory did not come to fruition because of the USSR's collapse.[168] In theory, the country's energy fields could have sustained an economic growth rate of 2–2.5% during the 1990s.[169] However, the energy sector faced many difficulties, among them the country's high military expenditure and hostile relations with the First World.[170]
+
+ In 1991, the Soviet Union had a pipeline network of 82,000 kilometres (51,000 mi) for crude oil and another 206,500 kilometres (128,300 mi) for natural gas.[171] Petroleum and petroleum-based products, natural gas, metals, wood, agricultural products, and a variety of manufactured goods, primarily machinery, arms and military equipment, were exported.[172] In the 1970s and 1980s, the USSR heavily relied on fossil fuel exports to earn hard currency.[153] At its peak in 1988, it was the largest producer and second-largest exporter of crude oil, surpassed only by Saudi Arabia.[173]
+
+ The Soviet Union placed great emphasis on science and technology within its economy;[174] however, the most remarkable Soviet successes in technology, such as producing the world's first space satellite, typically were the responsibility of the military.[155] Lenin believed that the USSR would never overtake the developed world if it remained as technologically backward as it was upon its founding. Soviet authorities proved their commitment to Lenin's belief by developing massive networks of research and development organizations. In the early 1960s, the Soviets awarded 40% of chemistry PhDs to women, compared to only 5% in the United States.[175] By 1989, Soviet scientists were among the world's best-trained specialists in several areas, such as energy physics, selected areas of medicine, mathematics, welding and military technologies. Due to rigid state planning and bureaucracy, the Soviets remained far behind technologically in chemistry, biology, and computers when compared to the First World.
+
+ Under the Reagan administration, Project Socrates determined that the Soviet Union addressed the acquisition of science and technology in a manner that was radically different from what the US was using. In the case of the US, economic prioritization was being used for indigenous research and development as the means to acquire science and technology in both the private and public sectors. In contrast, the USSR was offensively and defensively maneuvering in the acquisition and utilization of worldwide technology, to increase the competitive advantage that it acquired from the technology while preventing the US from acquiring a competitive advantage. However, technology-based planning was executed in a centralized, government-centric manner that greatly hindered its flexibility. This was exploited by the US to undermine the strength of the Soviet Union and thus foster its reform.[176][177][178]
+
+ Transport was a vital component of the country's economy. The economic centralization of the late 1920s and 1930s led to the development of infrastructure on a massive scale, most notably the establishment of Aeroflot, an aviation enterprise.[179] The country had a wide variety of modes of transport by land, water and air.[171] However, due to inadequate maintenance, much of the road, water and Soviet civil aviation transport were outdated and technologically backward compared to the First World.[180]
+
+ Soviet rail transport was the largest and most intensively used in the world;[180] it was also better developed than most of its Western counterparts.[181] By the late 1970s and early 1980s, Soviet economists were calling for the construction of more roads to alleviate some of the burdens from the railways and to improve the Soviet government budget.[182] The street network and automotive industry[183] remained underdeveloped,[184] and dirt roads were common outside major cities.[185] Soviet maintenance projects proved unable to take care of even the few roads the country had. By the early-to-mid-1980s, the Soviet authorities tried to solve the road problem by ordering the construction of new ones.[185] Meanwhile, the automobile industry was growing at a faster rate than road construction.[186] The underdeveloped road network led to a growing demand for public transport.[187]
+
+ Despite improvements, several aspects of the transport sector were still[when?] riddled with problems due to outdated infrastructure, lack of investment, corruption and bad decision-making. Soviet authorities were unable to meet the growing demand for transport infrastructure and services.
+
+ The Soviet merchant navy was one of the largest in the world.[171]
+
+ Excess deaths throughout World War I and the Russian Civil War (including the postwar famine) amounted to a combined total of 18 million,[188] some 10 million in the 1930s,[47] and more than 26 million in 1941–45. The postwar Soviet population was 45 to 50 million smaller than it would have been if pre-war demographic growth had continued.[54] According to Catherine Merridale, "... a reasonable estimate would place the total number of excess deaths for the whole period somewhere around 60 million."[189]
+
+ The birth rate of the USSR decreased from 44.0 per thousand in 1926 to 18.0 in 1974, mainly due to increasing urbanization and the rising average age of marriages. The mortality rate demonstrated a gradual decrease as well – from 23.7 per thousand in 1926 to 8.7 in 1974. In general, the birth rates of the southern republics in Transcaucasia and Central Asia were considerably higher than those in the northern parts of the Soviet Union, and in some cases even increased in the post–World War II period, a phenomenon partly attributed to slower rates of urbanisation and traditionally earlier marriages in the southern republics.[190] Soviet Europe moved towards sub-replacement fertility, while Soviet Central Asia continued to exhibit population growth well above replacement-level fertility.[191]
+
+ The late 1960s and the 1970s witnessed a reversal of the declining trajectory of the rate of mortality in the USSR; the reversal was especially notable among men of working age and was particularly prevalent in Russia and other predominantly Slavic areas of the country.[192] An analysis of the official data from the late 1980s showed that after worsening in the late 1970s and the early 1980s, adult mortality began to improve again.[193] The infant mortality rate increased from 24.7 in 1970 to 27.9 in 1974. Some researchers regarded the rise as mostly real, a consequence of worsening health conditions and services.[194] The rises in both adult and infant mortality were not explained or defended by Soviet officials, and the Soviet government stopped publishing all mortality statistics for ten years. Soviet demographers and health specialists remained silent about the mortality increases until the late 1980s, when the publication of mortality data resumed, and researchers could delve into the real causes.[195]
+
+ Under Lenin, the state made explicit commitments to promote the equality of men and women. Many early Russian feminists and ordinary Russian working women actively participated in the Revolution, and many more were affected by the events of that period and the new policies. Beginning in October 1918, Lenin's government liberalized divorce and abortion laws, decriminalized homosexuality (re-criminalized in the 1930s), permitted cohabitation, and ushered in a host of reforms.[196] However, without birth control, the new system produced many broken marriages, as well as countless out-of-wedlock children.[197] The epidemic of divorces and extramarital affairs created social hardships when Soviet leaders wanted people to concentrate their efforts on growing the economy. Giving women control over their fertility also led to a precipitous decline in the birth rate, perceived as a threat to their country's military power. By 1936, Stalin reversed most of the liberal laws, ushering in a pronatalist era that lasted for decades.[198]
+
+ In 1917, Russia became the first great power to grant women the right to vote.[199] After heavy casualties in World Wars I and II, women outnumbered men in Russia by a 4:3 ratio.[200] This contributed to the larger role women played in Russian society compared to other great powers at the time.
+
+ Anatoly Lunacharsky became the first People's Commissar for Education of Soviet Russia. In the beginning, the Soviet authorities placed great emphasis on the elimination of illiteracy. All left-handed kids were forced to write with their right hand in the Soviet school system.[201][202][203][204] Literate people were automatically hired as teachers.[citation needed] For a short period, quality was sacrificed for quantity. By 1940, Stalin could announce that illiteracy had been eliminated. Throughout the 1930s, social mobility rose sharply, which has been attributed to reforms in education.[205] In the aftermath of World War II, the country's educational system expanded dramatically, which had a tremendous effect. In the 1960s, nearly all children had access to education, the only exception being those living in remote areas. Nikita Khrushchev tried to make education more accessible, making it clear to children that education was closely linked to the needs of society. Education also became important in giving rise to the New Man.[206] Citizens directly entering the workforce had the constitutional right to a job and to free vocational training.

The education system was highly centralized and universally accessible to all citizens, with affirmative action for applicants from nations associated with cultural backwardness. However, as part of the general antisemitic policy, an unofficial Jewish quota was applied[when?] in the leading institutions of higher education by subjecting Jewish applicants to harsher entrance examinations.[207][208][209][210] The Brezhnev era also introduced a rule that required all university applicants to present a reference from the local Komsomol party secretary.[211] According to statistics from 1986, the number of higher education students per 10,000 population was 181 for the USSR, compared with 517 for the US.[212]

The Soviet Union was an ethnically diverse country, with more than 100 distinct ethnic groups. The total population was estimated at 293 million in 1991. According to a 1990 estimate, the majority were Russians (50.78%), followed by Ukrainians (15.45%) and Uzbeks (5.84%).[213]

All citizens of the USSR had their own ethnic affiliation, which was chosen at the age of sixteen[214] by the child's parents. If the parents did not agree, the child was automatically assigned the ethnicity of the father. Partly due to Soviet policies, some of the smaller minority ethnic groups were considered part of larger ones, such as the Mingrelians of Georgia, who were classified with the linguistically related Georgians.[215] Some ethnic groups voluntarily assimilated, while others were brought in by force. Russians, Belarusians, and Ukrainians shared close cultural ties, while other groups did not. With multiple nationalities living in the same territory, ethnic antagonisms developed over the years.[216][neutrality is disputed]

Members of various ethnicities participated in legislative bodies. Organs of power such as the Politburo and the Secretariat of the Central Committee were formally ethnically neutral, but in reality ethnic Russians were overrepresented, although there were also non-Russian leaders in the Soviet leadership, such as Joseph Stalin, Grigory Zinoviev, Nikolai Podgorny and Andrei Gromyko. During the Soviet era, a significant number of ethnic Russians and Ukrainians migrated to other Soviet republics, and many of them settled there. According to the last census in 1989, the Russian "diaspora" in the Soviet republics had reached 25 million.[217]

Ethnographic map of the Soviet Union, 1941

Number and share of Ukrainians in the population of the regions of the RSFSR (1926 census)

Number and share of Ukrainians in the population of the regions of the RSFSR (1979 census)

In 1917, before the revolution, health conditions were significantly behind those of developed countries. As Lenin later noted, "Either the lice will defeat socialism, or socialism will defeat the lice".[218] The Soviet principle of health care was conceived by the People's Commissariat for Health in 1918. Health care was to be controlled by the state and would be provided to its citizens free of charge, a revolutionary concept at the time. Article 42 of the 1977 Soviet Constitution gave all citizens the right to health protection and free access to any health institutions in the USSR. Before Leonid Brezhnev became General Secretary, the Soviet healthcare system was held in high esteem by many foreign specialists. This changed, however, from Brezhnev's accession through Mikhail Gorbachev's tenure as leader, a period during which the health care system was heavily criticized for many basic faults, such as the quality of service and the unevenness in its provision.[219] Minister of Health Yevgeniy Chazov, during the 19th Congress of the Communist Party of the Soviet Union, while highlighting such successes as having the most doctors and hospitals in the world, recognized the system's areas for improvement and felt that billions of Soviet rubles were squandered.[220]

After the revolution, life expectancy for all age groups went up. This statistic was seen by some as evidence that the socialist system was superior to the capitalist system. The improvements continued into the 1960s, when statistics indicated that life expectancy briefly surpassed that of the United States. Life expectancy started to decline in the 1970s, possibly because of alcohol abuse. At the same time, infant mortality began to rise. After 1974, the government stopped publishing statistics on the matter. This trend can be partly explained by the number of pregnancies rising drastically in the Asian part of the country, where infant mortality was the highest, while declining markedly in the more developed European part of the Soviet Union.[221]

Under Lenin, the government gave small language groups their own writing systems.[222] The development of these writing systems was highly successful, even though some flaws were detected. During the later days of the USSR, countries with the same multilingual situation implemented similar policies. A serious problem when creating these writing systems was that the languages differed greatly from one dialect to another.[223] When a language had been given a writing system and appeared in a notable publication, it would attain "official language" status. There were many minority languages which never received their own writing system; therefore, their speakers were forced to learn a second language.[224] There are examples where the government retreated from this policy, most notably under Stalin, when education was discontinued in languages that were not widespread. These languages were then assimilated into another language, mostly Russian.[225] During World War II, some minority languages were banned, and their speakers accused of collaborating with the enemy.[226]

As the most widely spoken of the Soviet Union's many languages, Russian de facto functioned as an official language, as the "language of interethnic communication" (Russian: язык межнационального общения), but only assumed the de jure status as the official national language in 1990.[227]

Christianity and Islam had the highest number of adherents among the religious citizens.[228] Eastern Christianity predominated among Christians, with Russia's traditional Russian Orthodox Church being the largest Christian denomination. About 90% of the Soviet Union's Muslims were Sunnis, with Shias being concentrated in the Azerbaijan SSR.[228] Smaller groups included Roman Catholics, Jews, Buddhists, and a variety of Protestant denominations (especially Baptists and Lutherans).[228]

Religious influence had been strong in the Russian Empire. The Russian Orthodox Church enjoyed a privileged status as the church of the monarchy and took part in carrying out official state functions.[229] The immediate period following the establishment of the Soviet state included a struggle against the Orthodox Church, which the revolutionaries considered an ally of the former ruling classes.[230]

In Soviet law, the "freedom to hold religious services" was constitutionally guaranteed, although the ruling Communist Party regarded religion as incompatible with the Marxist spirit of scientific materialism.[230] In practice, the Soviet system subscribed to a narrow interpretation of this right, and in fact utilized a range of official measures to discourage religion and curb the activities of religious groups.[230]

The 1918 Council of People's Commissars decree establishing the Russian SFSR as a secular state also decreed that "the teaching of religion in all [places] where subjects of general instruction are taught, is forbidden. Citizens may teach and may be taught religion privately."[231] Among further restrictions, those adopted in 1929 included express prohibitions on a range of church activities, including meetings for organized Bible study.[230] Both Christian and non-Christian establishments were shut down by the thousands in the 1920s and 1930s. By 1940, as many as 90% of the churches, synagogues, and mosques that had been operating in 1917 were closed.[232]

Under the doctrine of state atheism, there was a "government-sponsored program of forced conversion to atheism" conducted by the Communists.[233][234][235] The regime targeted religions based on state interests, and while most organized religions were never outlawed, religious property was confiscated, believers were harassed, and religion was ridiculed while atheism was propagated in schools.[236] In 1925, the government founded the League of Militant Atheists to intensify the propaganda campaign.[237] Accordingly, although personal expressions of religious faith were not explicitly banned, a strong sense of social stigma was imposed on them by the formal structures and mass media, and it was generally considered unacceptable for members of certain professions (teachers, state bureaucrats, soldiers) to be openly religious. As for the Russian Orthodox Church, Soviet authorities sought to control it and, in times of national crisis, to exploit it for the regime's own purposes; but their ultimate goal was to eliminate it. During the first five years of Soviet power, the Bolsheviks executed 28 Russian Orthodox bishops and over 1,200 Russian Orthodox priests. Many others were imprisoned or exiled. Believers were harassed and persecuted. Most seminaries were closed, and the publication of most religious material was prohibited. By 1941, only 500 churches remained open out of about 54,000 in existence before World War I.

Convinced that religious anti-Sovietism had become a thing of the past, and with the looming threat of war, the Stalin regime began shifting to a more moderate religion policy in the late 1930s.[238] Soviet religious establishments overwhelmingly rallied to support the war effort during World War II. Amid other accommodations to religious faith after the German invasion, churches were reopened, Radio Moscow began broadcasting a religious hour, and a historic meeting between Stalin and Orthodox Church leader Patriarch Sergius of Moscow was held in 1943. Stalin retained the support of the majority of religious people in the USSR even through the late 1980s.[238] The general tendency of this period was an increase in religious activity among believers of all faiths.[239]

Under Nikita Khrushchev, the state leadership clashed with the churches in 1958–1964, a period when atheism was emphasized in the educational curriculum and numerous state publications promoted atheistic views.[238] Between 1959 and 1965, the number of churches fell from 20,000 to 10,000, and the number of synagogues dropped from 500 to 97.[240] The number of working mosques also declined, falling from 1,500 to 500 within a decade.[240]

Religious institutions remained monitored by the Soviet government, but churches, synagogues, temples, and mosques were all given more leeway in the Brezhnev era.[241] Official relations between the Orthodox Church and the government again warmed to the point that the Brezhnev government twice honored Orthodox Patriarch Alexy I with the Order of the Red Banner of Labour.[242] A poll conducted by Soviet authorities in 1982 recorded 20% of the Soviet population as "active religious believers."[243]

The culture of the Soviet Union passed through several stages during the USSR's existence. During the first decade following the revolution, there was relative freedom, and artists experimented with several different styles to find a distinctive Soviet style of art. Lenin wanted art to be accessible to the Russian people. On the other hand, hundreds of intellectuals, writers, and artists were exiled or executed and their work banned; examples include Nikolay Gumilyov, who was shot for allegedly conspiring against the Bolshevik regime, and Yevgeny Zamyatin.[244]

The government encouraged a variety of trends. In art and literature, numerous schools, some traditional and others radically experimental, proliferated. Communist writers Maxim Gorky and Vladimir Mayakovsky were active during this time. As a means of influencing a largely illiterate society, films received encouragement from the state, and much of director Sergei Eisenstein's best work dates from this period.

During Stalin's rule, Soviet culture was characterized by the rise and domination of the government-imposed style of socialist realism; all other trends were severely repressed, with rare exceptions such as Mikhail Bulgakov's works. Many writers were imprisoned and killed.[245]

Following the Khrushchev Thaw, censorship was diminished. During this time, a distinctive period of Soviet culture developed, characterized by conformist public life and an intense focus on personal life. Greater experimentation in art forms was again permissible, resulting in the production of more sophisticated and subtly critical work. The regime loosened its emphasis on socialist realism; thus, for instance, many protagonists of the novels of author Yury Trifonov concerned themselves with problems of daily life rather than with building socialism. Underground dissident literature, known as samizdat, developed during this late period. In architecture, the Khrushchev era mostly focused on functional design as opposed to the highly decorated style of Stalin's epoch.

In the second half of the 1980s, Gorbachev's policies of perestroika and glasnost significantly expanded freedom of expression throughout the country in the media and the press.[246]

Founded on 20 July 1924 in Moscow, Sovetsky Sport was the first sports newspaper of the Soviet Union.

The Soviet Olympic Committee formed on 21 April 1951, and the IOC recognized the new body at its 45th session. In the same year, when the Soviet representative Konstantin Andrianov became an IOC member, the USSR officially joined the Olympic Movement. The 1952 Summer Olympics in Helsinki thus became the first Olympic Games for Soviet athletes.

The Soviet Union national ice hockey team won nearly every world championship and Olympic tournament between 1954 and 1991 and never failed to medal in any International Ice Hockey Federation (IIHF) tournament in which they competed.

The advent[when?] of the state-sponsored "full-time amateur athlete" of the Eastern Bloc countries further eroded the ideology of the pure amateur, as it put the self-financed amateurs of the Western countries at a disadvantage. The Soviet Union entered teams of athletes who were all nominally students, soldiers, or working in a profession – in reality, the state paid many of these competitors to train on a full-time basis.[247] Nevertheless, the IOC held to the traditional rules regarding amateurism.[248]

A 1989 report by a committee of the Australian Senate claimed that "there is hardly a medal winner at the Moscow Games, certainly not a gold medal winner...who is not on one sort of drug or another: usually several kinds. The Moscow Games might well have been called the Chemists' Games".[249]

A member of the IOC Medical Commission, Manfred Donike, privately ran additional tests with a new technique for identifying abnormal levels of testosterone by measuring its ratio to epitestosterone in urine. Twenty percent of the specimens he tested, including those from sixteen gold medalists, would have resulted in disciplinary proceedings had the tests been official. The results of Donike's unofficial tests later convinced the IOC to add his new technique to their testing protocols.[250] The first documented case of "blood doping" occurred at the 1980 Summer Olympics when a runner[who?] was transfused with two pints of blood before winning medals in the 5000 m and 10,000 m.[251]

Documentation obtained in 2016 revealed the Soviet Union's plans for a statewide doping system in track and field in preparation for the 1984 Summer Olympics in Los Angeles. Dated before the decision to boycott the 1984 Games, the document detailed the existing steroids operations of the program, along with suggestions for further enhancements. Dr. Sergei Portugalov of the Institute for Physical Culture prepared the communication, directed to the Soviet Union's head of track and field. Portugalov later became one of the leading figures involved in the implementation of Russian doping before the 2016 Summer Olympics.[252]

Official Soviet environmental policy always attached great importance to actions in which human beings actively improve nature. Lenin's slogan, "Communism is Soviet power and electrification of the country!", in many respects summarizes the focus on modernization and industrial development. During the first five-year plan, begun in 1928, Stalin proceeded to industrialize the country at all costs. Values such as environmental and nature protection were completely ignored in the struggle to create a modern industrial society. After Stalin's death, the leadership focused more on environmental issues, but the basic perception of the value of environmental protection remained the same.[253]

The Soviet media always focused on the vast expanse of land and the virtually indestructible natural resources, fostering a feeling that contamination and looting of nature were not a problem. The Soviet state also firmly believed that scientific and technological progress would solve all problems. Official ideology held that under socialism environmental problems could easily be overcome, unlike in capitalist countries, where they seemingly could not be solved. The Soviet authorities had an almost unwavering belief that man could transcend nature. However, when the authorities had to admit in the 1980s that there were environmental problems in the USSR, they explained them by arguing that socialism had not yet fully developed: pollution in a socialist society was only a temporary anomaly that would be resolved once socialism matured.[citation needed]

The Chernobyl disaster in 1986 was the first major accident at a civilian nuclear power plant; unparalleled in the world, it released a large quantity of radioactive isotopes into the atmosphere. Radioactive fallout scattered relatively far. The main health problem after the accident was 4,000 new cases of thyroid cancer, but this led to a relatively low number of deaths (WHO data, 2005). However, the long-term effects of the accident are unknown. Another major accident was the Kyshtym disaster.[254]

After the fall of the USSR, it was discovered that the environmental problems were greater than what the Soviet authorities had admitted. The Kola Peninsula was one of the places with clear problems. Around the industrial cities of Monchegorsk and Norilsk, where nickel, for example, is mined, all forests have been killed by contamination, while the northern and other parts of Russia have been affected by emissions. During the 1990s, people in the West were also interested in the radioactive hazards of nuclear facilities, decommissioned nuclear submarines, and the processing of nuclear waste or spent nuclear fuel. It was also known in the early 1990s that the USSR had transported radioactive material to the Barents Sea and Kara Sea, which was later confirmed by the Russian parliament. The sinking of the K-141 Kursk submarine in 2000 further raised concerns in the West.[255] In the past, there were accidents involving the submarines K-19, K-8 and K-129.[citation needed]

1918–1924  Turkestan3
1918–1941  Volga German4
1919–1990  Bashkir
1920–1925  Kirghiz2
1920–1990  Tatar
1921–1990  Adjar
1921–1945  Crimean
1921–1991  Dagestan
1921–1924  Mountain
1921–1990  Nakhchivan
1922–1991  Yakut
1923–1990  Buryat1
1923–1940  Karelian
1924–1940  Moldavian
1924–1929  Tajik
1925–1992  Chuvash
1925–1936  Kazak2
1926–1936  Kirghiz
1931–1991  Abkhaz
1932–1992  Karakalpak
1934–1990  Mordovian
1934–1990  Udmurt
1935–1943  Kalmyk
1936–1944  Checheno-Ingush
1936–1944  Kabardino-Balkar
1936–1990  Komi
1936–1990  Mari
1936–1990  North Ossetian
1944–1957  Kabardin
1956–1991  Karelian
1957–1990  Checheno-Ingush
1957–1991  Kabardino-Balkar
1958–1990  Kalmyk
1961–1992  Tuva
1990–1991  Gorno-Altai
1991–1992  Crimean
en/5882.html.txt ADDED
@@ -0,0 +1,246 @@

Location of Uruguay in South America (grey)

Uruguay (/ˈjʊərəɡwaɪ/ (listen);[8] Spanish: [uɾuˈɣwaj] (listen)), officially the Oriental Republic of Uruguay (Spanish: República Oriental del Uruguay; Portuguese: República Oriental do Uruguai), is a country in the southeastern region of South America. It borders Argentina to its west and southwest and Brazil to its north and east, with the Río de la Plata (River of Silver) to the south and the Atlantic Ocean to the southeast. Uruguay is home to an estimated 3.51 million people, of whom 1.8 million live in the metropolitan area of its capital and largest city, Montevideo. With an area of approximately 176,000 square kilometers (68,000 sq mi), Uruguay is geographically the second-smallest nation in South America,[9] after Suriname.

Uruguay was inhabited by the Charrúa people for approximately 4,000 years[10] before the Portuguese established Colónia do Sacramento in 1680; Uruguay was colonized by Europeans relatively late compared with neighboring countries. Montevideo was founded as a military stronghold by the Spanish in the early 18th century, signifying the competing claims over the region. Uruguay won its independence between 1811 and 1828, following a four-way struggle between Portugal and Spain, and later Argentina and Brazil. It remained subject to foreign influence and intervention throughout the 19th century, with the military playing a recurring role in domestic politics.

A series of economic crises put an end to a democratic period that had begun in the early 20th century, culminating in a 1973 coup, which established a civic-military dictatorship. The military government persecuted leftists, socialists, and political opponents, resulting in several deaths and numerous instances of torture; it relinquished power to a civilian government in 1985. Uruguay is today a democratic constitutional republic, with a president who serves as both head of state and head of government.

Uruguay is ranked first in Latin America in democracy, peace, low perception of corruption,[11] and e-government,[12] and first in South America in press freedom, size of the middle class, and prosperity.[11] On a per-capita basis, Uruguay contributes more troops to United Nations peacekeeping missions than any other country.[11] It also ranks first in the region for absence of terrorism, a unique position within South America. It ranks second in the region on economic freedom, income equality, per-capita income and inflows of FDI.[11] Uruguay is the third-best country on the continent in terms of HDI, GDP growth,[13] innovation and infrastructure.[11] It is regarded as a high-income country by the UN.[12] Uruguay was also ranked the third-best in the world in e-Participation in 2014.[12] Uruguay is an important global exporter of combed wool, rice, soybeans, frozen beef, malt and milk.[11] Nearly 95% of Uruguay's electricity comes from renewable energy, mostly hydroelectric facilities and wind parks.[14] Uruguay is a founding member of the United Nations, OAS, Mercosur and the Non-Aligned Movement.

Uruguay is regarded as one of the most socially advanced countries in Latin America.[15] It ranks high on global measures of personal rights, tolerance, and inclusion issues.[16] The Economist named Uruguay "country of the year" in 2013,[17] acknowledging the policy of legalizing the production, sale and consumption of cannabis. Same-sex marriage and abortion are also legal.

The name of the namesake river comes from the Spanish pronunciation of the regional Guarani word for it. There are several interpretations, including "bird-river" ("the river of the urú", via Charruan, urú being a common noun of any wild fowl).[18][19] The name could also refer to a river snail called uruguá (Pomella megastoma) that was plentiful in the water.[20]

In Spanish colonial times, and for some time thereafter, Uruguay and some neighbouring territories were called the Cisplatina and Banda Oriental [del Uruguay] ("East Bank [of the Uruguay River]"), then for a few years the "Eastern Province". Since its independence, the country has been known as la República Oriental del Uruguay, which literally means "the eastern republic of the Uruguay [River]". However, it is commonly translated either as the "Oriental Republic of Uruguay"[1][21] or the "Eastern Republic of Uruguay".[22]

The documented inhabitants of Uruguay before European colonization of the area were the Charrúa, a small tribe driven south by the Guarani of Paraguay.[23][failed verification] It is estimated that there were about 9,000 Charrúa and 6,000 Chaná and Guaraní at the time of contact with Europeans in the 1500s.[24] Fructuoso Rivera – Uruguay's first president – organized the genocide of the Charrúa.[25]

The Portuguese were the first Europeans to enter the region of present-day Uruguay, in 1512.[26][27] The Spanish arrived in present-day Uruguay in 1516.[23] The indigenous peoples' fierce resistance to conquest, combined with the absence of gold and silver, limited European settlement in the region during the 16th and 17th centuries.[23] Uruguay then became a zone of contention between the Spanish and Portuguese empires. In 1603, the Spanish began to introduce cattle, which became a source of wealth in the region. The first permanent Spanish settlement was founded in 1624 at Soriano on the Río Negro. In 1669–71, the Portuguese built a fort at Colonia del Sacramento.

Montevideo was founded by the Spanish in the early 18th century as a military stronghold in the country. Its natural harbor soon developed into a commercial area competing with Río de la Plata's capital, Buenos Aires.[23] Uruguay's early 19th century history was shaped by ongoing fights for dominance in the Platine region,[23] between British, Spanish, Portuguese and other colonial forces. In 1806 and 1807, the British army attempted to seize Buenos Aires and Montevideo as part of the Napoleonic Wars. Montevideo was occupied by a British force from February to September 1807.

In 1811, José Gervasio Artigas, who became Uruguay's national hero, launched a successful revolt against the Spanish authorities, defeating them on 18 May at the Battle of Las Piedras.[23]

In 1813, the new government in Buenos Aires convened a constituent assembly where Artigas emerged as a champion of federalism, demanding political and economic autonomy for each area, and for the Banda Oriental in particular.[28] The assembly refused to seat the delegates from the Banda Oriental, however, and Buenos Aires pursued a system based on unitary centralism.[28]

As a result, Artigas broke with Buenos Aires and besieged Montevideo, taking the city in early 1815.[28] Once the troops from Buenos Aires had withdrawn, the Banda Oriental appointed its first autonomous government.[28] Artigas organized the Federal League under his protection, consisting of six provinces, four of which later became part of Argentina.[28]

In 1816, a force of 10,000 Portuguese troops invaded the Banda Oriental from Brazil; they took Montevideo in January 1817.[28] After nearly four more years of struggle, the Portuguese Kingdom of Brazil annexed the Banda Oriental as a province under the name of "Cisplatina".[28] The Brazilian Empire became independent of Portugal in 1822. In response to the annexation, the Thirty-Three Orientals, led by Juan Antonio Lavalleja, declared independence on 25 August 1825, supported by the United Provinces of the Río de la Plata (present-day Argentina).[23] This led to the 500-day-long Cisplatine War. Neither side gained the upper hand, and in 1828 the Treaty of Montevideo, fostered by the United Kingdom through the diplomatic efforts of Viscount John Ponsonby, gave birth to Uruguay as an independent state. 25 August is celebrated as Independence Day, a national holiday.[29] The nation's first constitution was adopted on 18 July 1830.[23]

At the time of independence, Uruguay had an estimated population of just under 75,000.[30] The era from independence until 1904 was marked by regular military conflicts and civil wars between the Blanco and Colorado Parties. The political scene in Uruguay became split between two parties: the conservative Blancos (Whites), headed by the second president, Manuel Oribe, representing the agricultural interests of the countryside; and the liberal Colorados (Reds), led by the first president, Fructuoso Rivera, representing the business interests of Montevideo. The Uruguayan parties received support from warring political factions in neighbouring Argentina, which became involved in Uruguayan affairs.

The Colorados favored the exiled Argentine liberal Unitarios, many of whom had taken refuge in Montevideo, while the Blanco president Manuel Oribe was a close friend of the Argentine ruler Manuel de Rosas. On 15 June 1838, an army led by the Colorado leader Rivera overthrew President Oribe, who fled to Argentina.[30] Rivera declared war on Rosas in 1839. The conflict would last 13 years and become known as the Guerra Grande (the Great War).[30]

In 1843, an Argentine army overran Uruguay on Oribe's behalf but failed to take the capital. The siege of Montevideo, which began in February 1843, would last nine years.[31] The besieged Uruguayans called on resident foreigners for help, which led to a French and an Italian legion being formed, the latter led by the exiled Giuseppe Garibaldi.[31]

In 1845, Britain and France intervened against Rosas to restore commerce to normal levels in the region. Their efforts proved ineffective and, by 1849, tired of the war, both withdrew after signing a treaty favorable to Rosas.[31] It appeared that Montevideo would finally fall when an uprising against Rosas, led by Justo José de Urquiza, governor of Argentina's Entre Ríos Province, began. The Brazilian intervention in May 1851 on behalf of the Colorados, combined with the uprising, changed the situation, and Oribe was defeated. The siege of Montevideo was lifted and the Guerra Grande finally came to an end.[31] Montevideo rewarded Brazil's support by signing treaties that confirmed Brazil's right to intervene in Uruguay's internal affairs.[31]

In accordance with the 1851 treaties, Brazil intervened militarily in Uruguay as often as it deemed necessary.[32] In 1865, the Triple Alliance was formed by the emperor of Brazil, the president of Argentina, and the Colorado general Venancio Flores, the Uruguayan head of government whom they both had helped to gain power. The Triple Alliance declared war on the Paraguayan leader Francisco Solano López,[32] and the resulting Paraguayan War ended with the invasion of Paraguay and its defeat by the armies of the three countries. Montevideo, which was used as a supply station by the Brazilian navy, experienced a period of prosperity and relative calm during the war.[32]

The constitutional government of General Lorenzo Batlle y Grau (1868–72) suppressed the Revolution of the Lances by the Blancos.[33] After two years of struggle, a peace agreement was signed in 1872 that gave the Blancos a share in the emoluments and functions of government, through control of four of the departments of Uruguay.[33]

This establishment of the policy of co-participation represented the search for a new formula of compromise, based on the coexistence of the party in power and the party in opposition.[33]

Despite this agreement, Colorado rule was threatened by the failed Tricolor Revolution in 1875 and the Revolution of the Quebracho in 1886.

The Colorado effort to reduce the Blancos to only three departments caused a Blanco uprising in 1897, which ended with the creation of 16 departments, of which the Blancos now had control over six; the Blancos were also given one-third of the seats in Congress.[34] This division of power lasted until President José Batlle y Ordóñez instituted his political reforms, which provoked the last Blanco uprising in 1904; it ended with the Battle of Masoller and the death of the Blanco leader Aparicio Saravia.

Between 1875 and 1890, the military became the center of power.[35] During this authoritarian period, the government took steps toward the organization of the country as a modern state, encouraging its economic and social transformation. Pressure groups (consisting mainly of businessmen, hacendados, and industrialists) were organized and had a strong influence on government.[35] A transition period (1886–90) followed, during which politicians began recovering lost ground, and some civilian participation in government occurred.[35]

After the Guerra Grande, there was a sharp rise in the number of immigrants, primarily from Italy and Spain. By 1879, the total population of the country was over 438,500.[36] The economy saw a steep upswing, above all in livestock raising and exports.[36] Montevideo became a major economic center of the region and an entrepôt for goods from Argentina, Brazil and Paraguay.[36]

The Colorado leader José Batlle y Ordóñez was elected president in 1903.[37] The following year, the Blancos led a rural revolt, and eight bloody months of fighting ensued before their leader, Aparicio Saravia, was killed in battle. Government forces emerged victorious, leading to the end of the co-participation politics that had begun in 1872.[37] Batlle had two terms (1903–07 and 1911–15) during which, taking advantage of the nation's stability and growing economic prosperity, he instituted major reforms, such as a welfare program, government participation in many facets of the economy, and a plural executive.[23]

Gabriel Terra became president in March 1931. His inauguration coincided with the effects of the Great Depression,[38] and the social climate became tense as a result of the lack of jobs. There were confrontations in which police and leftists died.[38] In 1933, Terra organized a coup d'état, dissolving the General Assembly and governing by decree.[38] A new constitution was promulgated in 1934, transferring powers to the president.[38] In general, the Terra government weakened or neutralized economic nationalism and social reform.[38]

In 1938, general elections were held and Terra's brother-in-law, General Alfredo Baldomir, was elected president. Under pressure from organized labor and the National Party, Baldomir advocated free elections, freedom of the press, and a new constitution.[39] Although Baldomir declared Uruguay neutral in 1939, British warships and the German ship Admiral Graf Spee fought a battle not far off Uruguay's coast.[39] The Admiral Graf Spee took refuge in Montevideo, claiming sanctuary in a neutral port, but was later ordered out.[39]

In the late 1950s, partly because of a worldwide decrease in demand for Uruguayan agricultural products, Uruguayans suffered from a steep drop in their standard of living, which led to student militancy and labor unrest. An armed group known as the Tupamaros emerged in the 1960s, engaging in activities such as bank robbery, kidnapping and assassination, in addition to attempting an overthrow of the government.

President Jorge Pacheco declared a state of emergency in 1968, followed by a further suspension of civil liberties in 1972. In 1973, amid increasing economic and political turmoil, the armed forces, at the request of President Juan María Bordaberry, closed the Congress and established a civilian-military regime.[23] Uruguay was on the receiving end of Operation Condor, a CIA-backed campaign of political repression and state terror involving intelligence operations and assassination of opponents.[40] According to one source, around 200 Uruguayans are known to have been killed and disappeared, with hundreds more illegally detained and tortured, during the 12-year civil-military rule of 1973 to 1985.[41] Most were killed in Argentina and other neighboring countries, with 36 of them having been killed in Uruguay.[42] According to Edy Kaufman (cited by Dr. David Altman[43]), Uruguay at the time had the highest per capita number of political prisoners in the world. "Kaufman, who spoke at the U.S. Congressional Hearings of 1976 on behalf of Amnesty International, estimated that one in every five Uruguayans went into exile, one in fifty were detained, and one in five hundred went to prison (most of them tortured)."

A new constitution, drafted by the military, was rejected in a November 1980 referendum.[23] Following the referendum, the armed forces announced a plan for the return to civilian rule, and national elections were held in 1984.[23] Colorado Party leader Julio María Sanguinetti won the presidency and served from 1985 to 1990. The first Sanguinetti administration implemented economic reforms and consolidated democracy following the country's years under military rule.[23]

The National Party's Luis Alberto Lacalle won the 1989 presidential election, and amnesty for human rights abusers was endorsed by referendum. Sanguinetti was then re-elected in 1994.[44] Both presidents continued the economic structural reforms initiated after the reinstatement of democracy, and other important reforms were aimed at improving the electoral system, social security, education, and public safety.

The 1999 national elections were held under a new electoral system established by a 1996 constitutional amendment. Colorado Party candidate Jorge Batlle, aided by the support of the National Party, defeated Broad Front candidate Tabaré Vázquez. The formal coalition ended in November 2002, when the Blancos withdrew their ministers from the cabinet,[23] although the Blancos continued to support the Colorados on most issues. Low commodity prices and economic difficulties in Uruguay's main export markets (starting in Brazil with the devaluation of the real, then in Argentina in 2002) caused a severe recession; the economy contracted by 11%, unemployment climbed to 21%, and the percentage of Uruguayans in poverty rose to over 30%.[45]

In 2004, Uruguayans elected Tabaré Vázquez as president, while giving the Broad Front a majority in both houses of Parliament. Vázquez stuck to economic orthodoxy. As commodity prices soared and the economy recovered from the recession, he tripled foreign investment, cut poverty and unemployment, cut public debt from 79% of GDP to 60%, and kept inflation steady.[46]

In 2009, José Mujica, a former left-wing guerrilla leader (Tupamaros) who spent almost 15 years in prison during the country's military rule, emerged as the new president as the Broad Front won the election for a second time.[47] Abortion was legalized in 2012, followed by same-sex marriage and cannabis in the following year.

In 2014, Tabaré Vázquez was elected to a non-consecutive second presidential term, which began on 1 March 2015. In 2020, he was succeeded by Luis Alberto Lacalle Pou, a member of the National Party, as the 42nd President of Uruguay.

With 176,214 km2 (68,037 sq mi) of continental land and 142,199 km2 (54,903 sq mi) of jurisdictional water and small river islands,[48] Uruguay is the second smallest sovereign nation in South America (after Suriname) and the third smallest territory (French Guiana is the smallest).[1] The landscape features mostly rolling plains and low hill ranges (cuchillas) with a fertile coastal lowland.[1] Uruguay has 660 km (410 mi) of coastline.[1]

A dense fluvial network covers the country, consisting of four river basins, or deltas: the Río de la Plata Basin, the Uruguay River, the Laguna Merín and the Río Negro. The major internal river is the Río Negro ('Black River'). Several lagoons are found along the Atlantic coast.

The highest point in the country is the Cerro Catedral, whose peak reaches 514 metres (1,686 ft) AMSL in the Sierra Carapé hill range. To the southwest is the Río de la Plata, the estuary of the Uruguay River (which forms the country's western border).

Montevideo is the southernmost capital city in the Americas, and the third most southerly in the world (only Canberra and Wellington are further south).

There are ten national parks in Uruguay: five in the wetland areas of the east, three in the central hill country, and one in the west along the Rio Uruguay.

It is the only country in South America situated entirely south of the Tropic of Capricorn.

Located entirely within a temperate zone, Uruguay has a climate that is relatively mild and fairly uniform nationwide.[49] According to the Köppen climate classification, most of the country has a humid subtropical climate (Cfa). Only in some spots of the Atlantic coast and at the summit of the highest hills of the Cuchilla Grande is the climate oceanic (Cfb). Seasonal variations are pronounced, but extremes in temperature are rare.[49] As would be expected with its abundance of water, high humidity and fog are common.[49] The absence of mountains, which act as weather barriers, makes all locations vulnerable to high winds and rapid changes in weather as fronts or storms sweep across the country.[49] Both summer and winter weather may vary from day to day with the passing of storm fronts, where a hot northerly wind may occasionally be followed by a cold wind (pampero) from the Argentine Pampas.[21]

Uruguay has a largely uniform temperature throughout the year, with summers being tempered by winds off the Atlantic; severe cold in winter is unknown.[49][50] The heaviest precipitation occurs during the autumn months, although more frequent rainy spells occur in winter.[21] The mean annual precipitation is generally greater than 40 inches (1,000 mm), decreasing with distance from the sea coast, and is relatively evenly distributed throughout the year.[21]

The average temperature for the midwinter month of July varies from 12 °C (54 °F) at Salto in the northern interior to 9 °C (48 °F) at Montevideo in the south.[21] The midsummer month of January varies from a warm average of 26 °C (79 °F) at Salto to 22 °C (72 °F) at Montevideo.[21] The national extreme temperatures at sea level are 44 °C (111 °F), recorded in the city of Paysandú (20 January 1943), and −11.0 °C (12.2 °F), recorded in the city of Melo (14 June 1967).[51]

Uruguay is a representative democratic republic with a presidential system.[52] The members of government are elected for a five-year term by a universal suffrage system.[52] Uruguay is a unitary state: justice, education, health, security, foreign policy and defense are all administered nationwide.[52] The Executive Power is exercised by the president and a cabinet of 13 ministers.[52]

The legislative power is constituted by the General Assembly, composed of two chambers: the Chamber of Representatives, consisting of 99 members representing the 19 departments, elected based on proportional representation; and the Chamber of Senators, consisting of 31 members, 30 of whom are elected for a five-year term by proportional representation and the Vice-President, who presides over the chamber.[52]

The judicial arm is exercised by the Supreme Court, the Bench and Judges nationwide. The members of the Supreme Court are elected by the General Assembly; the members of the Bench are selected by the Supreme Court with the consent of the Senate, and the judges are directly assigned by the Supreme Court.[52]

Uruguay adopted its current constitution in 1967.[53][54] Many of its provisions were suspended in 1973, but re-established in 1985. Drawing on Switzerland and its use of the initiative, the Uruguayan Constitution also allows citizens to repeal laws or to change the constitution by popular initiative, which culminates in a nationwide referendum. This method has been used several times over the past 15 years: to confirm a law renouncing prosecution of members of the military who violated human rights during the military regime (1973–1985); to stop privatization of public utilities companies; to defend pensioners' incomes; and to protect water resources.[55]

For most of Uruguay's history, the Partido Colorado has been in government.[56][citation needed] However, in the 2004 Uruguayan general election, the Broad Front won an absolute majority in Parliamentary elections, and in 2009, José Mujica of the Broad Front defeated Luis Alberto Lacalle of the Blancos to win the presidency.

A 2010 Latinobarómetro poll found that, within Latin America, Uruguayans are among the most supportive of democracy and by far the most satisfied with the way democracy works in their country.[57] Uruguay ranked 27th in the Freedom House "Freedom in the World" index. According to the Economist Intelligence Unit in 2012, Uruguay scored an 8.17 in the Democracy Index and ranked equal 18th amongst the 25 countries considered to be full democracies in the world.[58] Uruguay ranks 18th in the World Corruption Perceptions Index composed by Transparency International.

Uruguay is divided into 19 departments whose local administrations replicate the division of the executive and legislative powers.[52] Each department elects its own authorities through a universal suffrage system.[52] The departmental executive authority resides in a superintendent and the legislative authority in a departmental board.[52]

Argentina and Brazil are Uruguay's most important trading partners: Argentina accounted for 20% of total imports in 2009.[1] Since bilateral relations with Argentina are considered a priority, Uruguay denies clearance to British naval vessels bound for the Falkland Islands and prevents them from calling in at Uruguayan territories and ports for supplies and fuel.[60] A rivalry between the port of Montevideo and the port of Buenos Aires, dating back to the times of the Spanish Empire, has been described as a "port war". Officials of both countries emphasized the need to end this rivalry in the name of regional integration in 2010.[61]

Construction of a controversial pulp paper mill in 2007, on the Uruguayan side of the Uruguay River, caused protests in Argentina over fears that it would pollute the environment, and led to diplomatic tensions between the two countries.[62] The ensuing dispute remained a subject of controversy into 2010, particularly after ongoing reports of increased water contamination in the area were later proven to be from sewage discharge from the town of Gualeguaychú in Argentina.[63][64] In November 2010, Uruguay and Argentina announced they had reached a final agreement for joint environmental monitoring of the pulp mill.[65]

Brazil and Uruguay have signed cooperation agreements on defence, science, technology, energy, river transportation and fishing, with the hope of accelerating political and economic integration between these two neighbouring countries.[66] Uruguay has two uncontested boundary disputes with Brazil, over Isla Brasilera and the 235 km2 (91 sq mi) Invernada River region near Masoller. The two countries disagree on which tributary represents the legitimate source of the Quaraí/Cuareim River, which would define the border in the latter disputed section, according to the 1851 border treaty between the two countries.[1] However, these border disputes have not prevented both countries from having friendly diplomatic relations and strong economic ties. So far, the disputed areas remain de facto under Brazilian control, with little to no actual effort by Uruguay to assert its claims.

Uruguay has enjoyed friendly relations with the United States since its transition back to democracy.[45] Commercial ties between the two countries have expanded substantially in recent years, with the signing of a bilateral investment treaty in 2004 and a Trade and Investment Framework Agreement in January 2007.[45] The United States and Uruguay have also cooperated on military matters, with both countries playing significant roles in the United Nations Stabilization Mission in Haiti.[45]

President Mujica backed Venezuela's bid to join Mercosur. Venezuela has a deal to sell Uruguay up to 40,000 barrels of oil a day under preferential terms.[67]

On 15 March 2011, Uruguay became the seventh South American nation to officially recognize a Palestinian state,[68] although there was no specification for the Palestinian state's borders as part of the recognition. In statements, the Uruguayan government indicated its firm commitment to the Middle East peace process, but refused to specify borders "to avoid interfering in an issue that would require a bilateral agreement".[68]

The Uruguayan armed forces are constitutionally subordinate to the president, through the minister of defense.[23] Armed forces personnel number about 14,000 for the Army, 6,000 for the Navy, and 3,000 for the Air Force.[23] Enlistment is voluntary in peacetime, but the government has the authority to conscript in emergencies.[1]

Since May 2009, homosexuals have been allowed to serve openly in the military, after the defence minister signed a decree stating that military recruitment policy would no longer discriminate on the basis of sexual orientation.[69] In the fiscal year 2010, the United States provided Uruguay with $1.7 million in military assistance, including $1 million in Foreign Military Financing and $480,000 in International Military Education and Training.[45]

Uruguay ranks first in the world on a per capita basis for its contributions to the United Nations peacekeeping forces, with 2,513 soldiers and officers in 10 UN peacekeeping missions.[23] As of February 2010, Uruguay had 1,136 military personnel deployed to Haiti in support of MINUSTAH and 1,360 deployed in support of MONUC in the Congo.[23] In December 2010, Uruguayan Major General Gloodtdofsky was appointed Chief Military Observer and head of the United Nations Military Observer Group in India and Pakistan.[70]

In 2017, Uruguay signed the UN treaty on the Prohibition of Nuclear Weapons.[71]

Uruguay experienced a major economic and financial crisis between 1999 and 2002, principally a spillover effect from the economic problems of Argentina.[45] The economy contracted by 11%, and unemployment climbed to 21%.[45] Despite the severity of the trade shocks, Uruguay's financial indicators remained more stable than those of its neighbours, a reflection of its solid reputation among investors and its investment-grade sovereign bond rating, one of only two in South America.[72][needs update]

In 2004, the Batlle government signed a three-year $1.1 billion stand-by arrangement with the International Monetary Fund (IMF), committing the country to a substantial primary fiscal surplus, low inflation, considerable reductions in external debt, and several structural reforms designed to improve competitiveness and attract foreign investment.[45] Uruguay terminated the agreement in 2006 following the early repayment of its debt, but maintained a number of the policy commitments.[45]

Vázquez, who assumed the government in March 2005, created the Ministry of Social Development and sought to reduce the country's poverty rate with a $240 million National Plan to Address the Social Emergency (PANES), which provided a monthly conditional cash transfer of approximately $75 to over 100,000 households in extreme poverty. In exchange, those receiving the benefits were required to participate in community work, ensure that their children attended school daily, and have regular health check-ups.[45]

Following the 2001 Argentine credit default, prices in the Uruguayan economy shifted in ways that made a variety of services, including information technology and architectural expertise, exportable to foreign markets in which they had previously been too expensive.[73] The Frente Amplio government, while continuing payments on Uruguay's external debt,[74] also undertook an emergency plan to attack the widespread problems of poverty and unemployment.[75] The economy grew at an annual rate of 6.7% during the 2004–2008 period.[76] Uruguay's export markets have been diversified in order to reduce dependency on Argentina and Brazil.[76] Poverty was reduced from 33% in 2002 to 21.7% in July 2008, while extreme poverty dropped from 3.3% to 1.7%.[76]

Between 2007 and 2009, Uruguay was the only country in the Americas that did not technically experience a recession (two consecutive downward quarters).[77] Unemployment reached a record low of 5.4% in December 2010 before rising to 6.1% in January 2011.[78] While unemployment is still at a low level, the IMF observed a rise in inflationary pressures,[79] and Uruguay's GDP expanded by 10.4% for the first half of 2010.[80]

According to IMF estimates, Uruguay was likely to achieve growth in real GDP of between 8% and 8.5% in 2010, followed by 5% growth in 2011 and 4% in subsequent years.[79] Gross public sector debt contracted in the second quarter of 2010, after five consecutive periods of sustained increase, reaching $21.885 billion US dollars, equivalent to 59.5% of the GDP.[81]

The growth, use, and sale of cannabis was legalized on 11 December 2013,[82] making Uruguay the first country in the world to fully legalize marijuana. The law was approved by the Uruguayan Senate on the same date, with 16 votes in favour and 13 against.

In 2010, Uruguay's export-oriented agricultural sector contributed 9.3% of the GDP and employed 13% of the workforce.[1] Official statistics from Uruguay's Agriculture and Livestock Ministry indicate that meat and sheep farming in Uruguay occupies 59.6% of the land. The percentage further increases to 82.4% when cattle breeding is linked to other farm activities such as dairy, forage, and rotation with crops such as rice.[83]

According to FAOSTAT, Uruguay is one of the world's largest producers of soybeans (9th), greasy wool (12th), horse meat (14th), beeswax (14th), and quinces (17th). Most farms (25,500 out of 39,120) are family-managed; beef and wool represent the main activities and main source of income for 65% of them, followed by vegetable farming at 12%, dairy farming at 11%, hogs at 2%, and poultry also at 2%.[83] Beef is the main export commodity of the country, totaling over $1 billion US dollars in 2006.[83]

In 2007, Uruguay had cattle herds totalling 12 million head, making it the country with the highest number of cattle per capita, at 3.8.[83] However, 54% of the herd is in the hands of the 11% of farmers who have a minimum of 500 head. At the other extreme, 38% of farmers exploit small lots and have herds averaging below one hundred head.[83]
150
+
151
+ The tourism industry in Uruguay is an important part of its economy. In 2012 the sector was estimated to account for 97,000 jobs and (directly and indirectly) 9% of GDP.[84]
152
+
153
+ In 2013, 2.8 million tourists entered Uruguay, of whom 59% came from Argentina and 14% from Brazil, with Chileans, Paraguayans, North Americans and Europeans accounting for most of the remainder.[84]
154
+
155
+ Cultural experiences in Uruguay include exploring the country's colonial heritage, as found in Colonia del Sacramento. Montevideo, the country's capital, houses the most diverse selection of cultural activities. Historical monuments such as Torres Garcia Museum as well as Estadio Centenario, which housed the first world cup in history, are examples. However simply walking the streets allows tourists to experience the city's colorful culture.
+
+ One of the main natural attractions in Uruguay is Punta del Este, situated on a small peninsula off the southeast coast. Its beaches are divided into the Mansa, or tame (river), side and the Brava, or rugged (ocean), side. The Mansa is more suited to sunbathing, snorkeling, and other low-key recreation, while the Brava is more suited to adventurous sports such as surfing. Punta del Este adjoins the city of Maldonado, while to its northeast along the coast are found the smaller resorts of La Barra and José Ignacio.[85]
+
+ The Port of Montevideo, handling over 1.1 million containers annually, is the most advanced container terminal in South America.[86] Its quay can handle 14-metre draught (46 ft) vessels. Nine straddle cranes allow for 80 to 100 movements per hour.[86] The port of Nueva Palmira is a major regional merchandise transfer point and houses both private and government-run terminals.[87]
+
+ Carrasco International Airport, serving Montevideo, was inaugurated in 1947. In 2009, Puerta del Sur, the airport's owner and operator, commissioned Rafael Viñoly Architects, with an investment of $165 million, to expand and modernize the existing facilities with a spacious new passenger terminal, in order to increase capacity and spur commercial growth and tourism in the region.[88][89] In its 27th edition, the London-based magazine Frontier chose Carrasco International Airport as one of the four best airports in the world. The airport can handle up to 4.5 million passengers per year.[88] PLUNA was the flag carrier of Uruguay, and was headquartered in Carrasco.[90][91]
+
+ The Punta del Este International Airport, located 15 kilometres (9.3 mi) from Punta del Este in the Maldonado Department, is the second busiest air terminal in Uruguay. Designed by the Uruguayan architect Carlos Ott, it was inaugurated in 1997.[87]
+
+ The Administración de Ferrocarriles del Estado is the autonomous agency in charge of rail transport and the maintenance of the railroad network. Uruguay has about 1,200 km (750 mi) of operational railroad track.[1] Until 1947, about 90% of the railroad system was British-owned.[92] In 1949, the government nationalized the railways, along with the electric trams and the Montevideo Waterworks Company.[92] However, in 1985 the "National Transport Plan" suggested passenger trains were too costly to repair and maintain.[92] Cargo trains would continue for loads more than 120 tons, but bus transportation became the "economic" alternative for travellers.[92] Passenger service was then discontinued in 1988.[92] However, rail passenger commuter service into Montevideo was restarted in 1993, and now comprises three suburban lines.
+
+ Surfaced roads connect Montevideo to the other urban centers in the country, with the main highways leading to the border and neighboring cities. Numerous unpaved roads connect farms and small towns. Overland trade has increased markedly since Mercosur (Southern Common Market) was formed in the 1990s, and again in the late 2000s.[93] Most of the country's domestic freight and passenger service moves by road rather than rail.
+
+ The country has several international bus services[94] connecting the capital and frontier localities to neighboring countries:[95] 17 destinations in Argentina,[note 1] 12 destinations in Brazil,[note 3] and the capital cities of Chile and Paraguay.[96]
+
+ The telecommunications industry is more developed than in most other Latin American countries; Uruguay was the first country in the Americas to achieve complete digital telephony coverage, in 1997. The telephone system is fully digitized and has very good coverage throughout the country. It is government-owned, and there have been controversial proposals to partially privatize it since the 1990s.[97]
+
+ The mobile phone market is shared by the state-owned ANTEL and two private companies, Movistar and Claro.
+
+ More than 97%[98] of Uruguay's electricity comes from renewable energy. The dramatic shift, achieved in less than ten years and without government funding, lowered electricity costs and slashed the country's carbon footprint.[99][100] Most of the electricity comes from hydroelectric facilities and wind parks. Uruguay no longer imports electricity.[14] Uruguay stands to be among the main winners once the global transition to renewable energy is completed, ranking 6th out of 156 countries in the index of geopolitical gains and losses after energy transition (GeGaLo Index).[101]
+
+ Uruguayans are of predominantly European origin, with over 87.7% of the population claiming European descent in the 2011 census.[102]
+ Most Uruguayans of European ancestry are descendants of 19th and 20th century immigrants from Spain and Italy (about one-quarter of the population is of Italian origin),[23] and to a lesser degree Britain, France and Germany.[21] Earlier settlers had migrated from Argentina.[21] People of African descent make up a much smaller proportion of the total.[21] Overall, the ethnic composition is similar to that of the neighbouring Argentine provinces as well as Southern Brazil.[103]
+
+ From 1963 to 1985, an estimated 320,000 Uruguayans emigrated.[104] The most popular destinations for Uruguayan emigrants are Argentina, followed by the United States, Australia, Canada, Spain, Italy and France.[104] In 2009, for the first time in 44 years, the country saw net positive migration: 3,825 residence permits were awarded in 2009, compared with 1,216 in 2005.[105] 50% of new legal residents come from Argentina and Brazil. A migration law passed in 2008 gives immigrants the same rights and opportunities as nationals, with the requirement of proving a monthly income of $650.[105]
+
+ Uruguay's rate of population growth is much lower than in other Latin American countries.[21] Its median age of 35.3 years is higher than the global average[23] due to its low birth rate, high life expectancy, and relatively high rate of emigration among younger people. A quarter of the population is less than 15 years old and about a sixth is aged 60 and older.[21] In 2017, the average total fertility rate (TFR) across Uruguay was 1.70 children born per woman, below the replacement rate of 2.1 and considerably below the high of 5.76 children born per woman in 1882.[106]
+
+ Metropolitan Montevideo is the only large city, with around 1.9 million inhabitants, or more than half the country's total population. The rest of the urban population lives in about 30 towns.[23]
+
+ A 2017 IADB report on labor conditions for Latin American nations ranked Uruguay as the region's leader overall and in all but one of its subindexes, which cover gender, age, income, formality, and labor participation.[107]
+
+ Uruguay has no official religion; church and state are officially separated,[23] and religious freedom is guaranteed. A 2008 survey by the INE of Uruguay showed Catholicism as the main religion, with 45.7% of the population; 9.0% are non-Catholic Christians, 0.6% are Animists or Umbandists (an Afro-Brazilian religion), and 0.4% Jewish. 30.1% reported believing in a god, but not belonging to any religion, while 14% were atheist or agnostic.[110] Among the sizeable Armenian community in Montevideo, the dominant religion is Christianity, specifically Armenian Apostolic.[111]
+
+ Political observers consider Uruguay the most secular country in the Americas.[112] Uruguay's secularization began with the relatively minor role of the church in the colonial era, compared with other parts of the Spanish Empire. The small numbers of Uruguay's indigenous peoples and their fierce resistance to proselytism reduced the influence of the ecclesiastical authorities.[113]
+
+ After independence, anti-clerical ideas spread to Uruguay, particularly from France, further eroding the influence of the church.[114] In 1837 civil marriage was recognized, and in 1861 the state took over the running of public cemeteries. In 1907 divorce was legalized, and in 1909 all religious instruction was banned from state schools.[113] Under the influence of the innovative Colorado reformer José Batlle y Ordóñez (1903–1911), complete separation of church and state was introduced with the new constitution of 1917.[113]
+
+ Uruguay's capital has 12 synagogues and a Jewish community that numbered 20,000 in 2011, down from a peak of 50,000 in the mid-1960s. Uruguay has the world's highest rate of aliyah as a percentage of its Jewish population.[115]
+
+ Uruguayan Spanish, like that of neighboring Argentina, employs both voseo and yeísmo (with [ʃ] or [ʒ]). English is common in the business world and its study has risen significantly in recent years, especially among the young. Uruguayan Portuguese is spoken as a native language by 15% of the Uruguayan population, in northern regions near the Brazilian border,[117] making it the second most spoken language of the country. As few native people remain in the population, no indigenous languages are thought to survive in Uruguay.[118]
+ Another dialect once spoken in Uruguay was Patois, an Occitan dialect. It was spoken mainly in the Colonia Department, where the first settlers established themselves in the town of La Paz. Today it is considered a dead language, although some elders in that area still practice it. Written texts in the language are preserved in the Waldensian Library (Biblioteca Valdense) in the town of Colonia Valdense, Colonia Department.
+ Patois speakers arrived in Uruguay from Piedmont. Originally Vaudois, they became known as Waldensians, giving their name to the town of Colonia Valdense, which translates from Spanish as "Waldensian Colony."[119]
+
+ Education in Uruguay is secular, free,[120] and compulsory for 14 years, starting at the age of 4.[121] The system is divided into six levels of education: early childhood (3–5 years); primary (6–11 years); basic secondary (12–14 years); upper secondary (15–17 years); higher education (18 and up); and post-graduate education.[121]
+
+ Public education is the primary responsibility of three institutions: the Ministry of Education and Culture, which coordinates education policies, the National Public Education Administration, which formulates and implements policies on early to secondary education, and the University of the Republic, responsible for higher education.[121] In 2009, the government planned to invest 4.5% of GDP in education.[120]
+
+ Uruguay ranks high on standardised tests such as PISA at a regional level, but compares unfavourably to the OECD average, and is also below some countries with similar levels of income.[120] In the 2006 PISA test, Uruguay had one of the greatest standard deviations among schools, suggesting significant variability by socio-economic level.[120]
+
+ Uruguay is part of the One Laptop per Child project, and in 2009 became the first country in the world to provide a laptop for every primary school student,[122] as part of the Plan Ceibal.[123] Over the 2007–2009 period, 362,000 pupils and 18,000 teachers were involved in the scheme; around 70% of the laptops were given to children who did not have computers at home.[123] The OLPC programme represents less than 5% of the country's education budget.[123]
+
+ Uruguayan culture is strongly European and its influences from southern Europe are particularly important.[21] The tradition of the gaucho has been an important element in the art and folklore of both Uruguay and Argentina.[21]
+
+ Abstract painter and sculptor Carlos Páez Vilaró was a prominent Uruguayan artist. He drew from both Timbuktu and Mykonos to create his best-known work: his home, hotel and atelier Casapueblo near Punta del Este. Casapueblo is a "livable sculpture" and draws thousands of visitors from around the world. The 19th-century painter Juan Manuel Blanes, whose works depict historical events, was the first Uruguayan artist to gain widespread recognition.[21] The Post-Impressionist painter Pedro Figari achieved international renown for his pastel studies of subjects in Montevideo and the countryside. Blending elements of art and nature, the work of the landscape architect Leandro Silva Delgado [es] has also earned international prominence.[21]
+
+ Uruguay has a small but growing film industry, and movies such as Whisky by Juan Pablo Rebella and Pablo Stoll (2004), Marcelo Bertalmío's Los días con Ana (2000; "Days with Ana") and Ana Díez's Paisito (2008), about the 1973 military coup, have earned international honours.[21]
+
+ The folk and popular music of Uruguay shares not only its gaucho roots with Argentina, but also those of the tango.[21] One of the most famous tangos, "La cumparsita" (1917), was written by the Uruguayan composer Gerardo Matos Rodríguez.[21] The candombe is a folk dance performed at Carnival, especially Uruguayan Carnival, mainly by Uruguayans of African ancestry.[21] The guitar is the preferred musical instrument, and in a popular traditional contest called the payada two singers, each with a guitar, take turns improvising verses to the same tune.[21]
+
+ Folk music is called canto popular and includes guitar players and singers such as Alfredo Zitarrosa, José Carbajal "El Sabalero", Daniel Viglietti, Los Olimareños, and Numa Moraes.
+
+ Numerous radio stations and musical events reflect the popularity of rock music and the Caribbean genres, known as música tropical ("tropical music").[21] Early classical music in Uruguay showed heavy Spanish and Italian influence, but since the 20th century a number of composers of classical music, including Eduardo Fabini, Vicente Ascone [es], and Héctor Tosar, have made use of Latin American musical idioms.[21]
+
+ Tango has also affected Uruguayan culture, especially during the 20th century, particularly the 1930s and '40s, with Uruguayan singers such as Julio Sosa from Las Piedras.[124] At the age of 29, the famous tango singer Carlos Gardel changed his nationality to Uruguayan, claiming he was born in Tacuarembó; this subterfuge was probably intended to keep French authorities from arresting him for failing to register with the French army for World War I. Gardel was born in France and was raised in Buenos Aires. He never lived in Uruguay.[125] Nevertheless, a Carlos Gardel museum was established in 1999 in Valle Edén, near Tacuarembó.[126]
+
+ Rock and roll first broke into Uruguayan audiences with the arrival of the Beatles and other British bands in the early 1960s. A wave of bands appeared in Montevideo, including Los Shakers, Los Mockers, Los Iracundos, Los Moonlights, and Los Malditos, who became major figures in the so-called Uruguayan Invasion of Argentina.[127] Popular bands of the Uruguayan Invasion sang in English.
+
+ Popular Uruguayan rock bands include La Vela Puerca, No Te Va Gustar, El Cuarteto de Nos, Once Tiros, La Trampa, Chalamadre, Snake, Buitres, and Cursi. In 2004, the Uruguayan musician and actor Jorge Drexler won an Academy Award for composing the song "Al otro lado del río" from the movie The Motorcycle Diaries, which narrated the life of Che Guevara. Other famous Uruguayan songwriters include Jaime Roos, Eduardo Mateo, Rubén Rada, Pablo Sciuto, and Daniel Viglietti.
+
+ José Enrique Rodó (1871–1917), a modernist, is considered Uruguay's most significant literary figure.[21] His book Ariel (1900) deals with the need to maintain spiritual values while pursuing material and technical progress.[21] Besides stressing the importance of upholding spiritual over materialistic values, it also stresses resisting cultural dominance by Europe and the United States.[21] The book continues to influence young writers.[21] Notable amongst Latin American playwrights is Florencio Sánchez (1875–1910), who wrote plays about contemporary social problems that are still performed today.[21]
+
+ From about the same period came the romantic poetry of Juan Zorrilla de San Martín (1855–1931), who wrote epic poems about Uruguayan history. Also notable are Juana de Ibarbourou (1895–1979), Delmira Agustini (1886–1914), Idea Vilariño (1920–2009), and the short stories of Horacio Quiroga and Juan José Morosoli (1899–1959).[21] The psychological stories of Juan Carlos Onetti (such as "No Man's Land" and "The Shipyard") have earned widespread critical praise, as have the writings of Mario Benedetti.[21]
+
+ Uruguay's best-known contemporary writer is Eduardo Galeano, author of Las venas abiertas de América Latina (1971; "Open Veins of Latin America") and the trilogy Memoria del fuego (1982–87; "Memory of Fire").[21] Other modern Uruguayan writers include Mario Levrero, Sylvia Lago, Jorge Majfud, and Jesús Moraes.[21] Uruguayans of many classes and backgrounds enjoy reading historietas, comic books that often blend humour and fantasy with thinly veiled social criticism.[21]
+
+ The Reporters Without Borders worldwide press freedom index ranked Uruguay 19th of 180 countries in 2019.[128] Freedom of speech and media are guaranteed by the constitution, with qualifications for inciting violence or "insulting the nation".[75] Uruguayans have access to more than 100 private daily and weekly newspapers, more than 100 radio stations, and some 20 terrestrial television channels, and cable TV is widely available.[75]
+
+ Uruguay's long tradition of freedom of the press was severely curtailed during the years of military dictatorship. On his first day in office in March 1985, Sanguinetti re-established complete freedom of the press.[129] Consequently, Montevideo's newspapers, which account for all of Uruguay's principal daily newspapers, greatly expanded their circulations.[129]
+
+ State-run radio and TV are operated by the official broadcasting service SODRE.[75] Some newspapers are owned by, or linked to, the main political parties.[75] El Día, founded in 1886 by the Colorado Party leader and later president José Batlle y Ordóñez, was the nation's most prestigious paper until its demise in the early 1990s. El País, the paper of the rival Blanco Party, has the largest circulation.[21] Búsqueda is Uruguay's most important weekly news magazine and serves as an important forum for political and economic analysis.[129] Although it sells only about 16,000 copies a week, its estimated readership exceeds 50,000.[129] MercoPress is an independent news agency focusing on news related to Mercosur and is based in Montevideo.[130]
+
+ Football is the most popular sport in Uruguay. The first international match outside the British Isles was played between Uruguay and Argentina in Montevideo in July 1902.[131] Uruguay won gold at the 1924 Paris Olympic Games[132] and again in 1928 in Amsterdam.[133]
+
+ The Uruguay national football team has won the FIFA World Cup on two occasions. Uruguay won the inaugural tournament on home soil in 1930 and again in 1950, famously defeating home favourites Brazil in the final match.[134] Uruguay has won the Copa América (an international tournament for South American nations and guests) more than any other country; its victory in 2011 brought its total to 15. Uruguay has by far the smallest population of any country that has won a World Cup.[134] Despite their early success, they missed three World Cups in four attempts from 1994 to 2006.[134] Uruguay performed very creditably at the 2010 FIFA World Cup, reaching the semi-finals for the first time in 40 years. Diego Forlán was presented with the Golden Ball award as the best player of the 2010 tournament.[135] In the FIFA world rankings for June 2012, Uruguay was ranked the second-best team in the world, its highest position ever, behind only Spain.[136]
+
+ Uruguay exported 1,414 football players during the 2000s, almost as many players as Brazil and Argentina.[137] In 2010, the Uruguayan government enacted measures intended to retain players in the country.[137]
+
+ Football was taken to Uruguay by English sailors and labourers in the late 19th century. Less successfully, they introduced rugby and cricket. There are two Montevideo-based football clubs, Nacional and Peñarol, who are successful in domestic and South American tournaments and have won three Intercontinental Cups each.
+
+ Besides football, the most popular sport in Uruguay is basketball.[138] Its national team has qualified for the Basketball World Cup 7 times, more often than any other South American country except Brazil and Argentina. Uruguay hosted the 1967 FIBA World Championship and the official Americas Basketball Championship in 1988 and 1997, and was a host of the 2017 FIBA AmeriCup.
en/5883.html.txt ADDED
@@ -0,0 +1,332 @@
+
+
+ Coordinates: 40°N 100°W
+
+ The United States of America (USA), commonly known as the United States (U.S. or US) or America, is a country mostly located in central North America, between Canada and Mexico. It consists of 50 states, a federal district, five major self-governing territories, and various possessions.[i] At 3.8 million square miles (9.8 million km2), it is the world's third- or fourth-largest country by total area.[e] With a 2019 estimated population of over 328 million,[7] the U.S. is the third most populous country in the world. The Americans are a racially and ethnically diverse population that has been shaped through centuries of immigration. The capital is Washington, D.C., and the most populous city is New York City.
+
+ Paleo-Indians migrated from Siberia to the North American mainland at least 12,000 years ago,[19] and European colonization began in the 16th century. The United States emerged from the thirteen British colonies established along the East Coast. Numerous disputes between Great Britain and the colonies led to the American Revolutionary War (1775–1783), which resulted in independence.[20] Beginning in the late 18th century, the United States vigorously expanded across North America, gradually acquiring new territories,[21] killing and displacing Native Americans, and admitting new states. By 1848, the United States spanned the continent.[21]
+ Slavery was legal in much of the United States until the second half of the 19th century, when the American Civil War led to its abolition.[22][23]
+
+ The Spanish–American War and World War I entrenched the U.S. as a world power, a status confirmed by the outcome of World War II. It was the first country to develop nuclear weapons and is the only country to have used them in warfare. During the Cold War, the United States and the Soviet Union competed in the Space Race, culminating with the 1969 Apollo 11 mission, the spaceflight that first landed humans on the Moon. The end of the Cold War and collapse of the Soviet Union in 1991 left the United States as the world's sole superpower.[24]
+
+ The United States is a federal republic and a representative democracy. It is a founding member of the United Nations, World Bank, International Monetary Fund, Organization of American States (OAS), NATO, and other international organizations. It is a permanent member of the United Nations Security Council.
+
+ A highly developed country, the United States is the world's largest economy and accounts for approximately a quarter of global gross domestic product (GDP).[25] The United States is the world's largest importer and the second-largest exporter of goods, by value.[26][27] Although its population is only 4.3% of the world total,[28] it holds 29.4% of the total wealth in the world, the largest share held by any country.[29] Despite income and wealth disparities, the United States continues to rank high in measures of socioeconomic performance, including average wage, median income, median wealth, human development, per capita GDP, and worker productivity.[30][31] It is the foremost military power in the world, making up more than a third of global military spending,[32] and is a leading political, cultural, and scientific force internationally.[33]
+
+ The first known use of the name "America" dates back to 1507, when it appeared on a world map created by the German cartographer Martin Waldseemüller. On this map, the name applied to South America in honor of the Italian explorer Amerigo Vespucci.[34] After returning from his expeditions, Vespucci first postulated that the West Indies did not represent Asia's eastern limit, as initially thought by Christopher Columbus, but instead were part of an entirely separate landmass thus far unknown to the Europeans.[35] In 1538, the Flemish cartographer Gerardus Mercator used the name "America" on his own world map, applying it to the entire Western Hemisphere.[36]
+
+ The first documentary evidence of the phrase "United States of America" dates from a January 2, 1776 letter written by Stephen Moylan, Esq., to Lt. Col. Joseph Reed, George Washington's aide-de-camp and Muster-Master General of the Continental Army. Moylan expressed his wish to go "with full and ample powers from the United States of America to Spain" to seek assistance in the revolutionary war effort.[37][38][39] The first known publication of the phrase "United States of America" was in an anonymous essay in The Virginia Gazette newspaper in Williamsburg, Virginia, on April 6, 1776.[40]
+
+ The second draft of the Articles of Confederation, prepared by John Dickinson and completed no later than June 17, 1776, declared "The name of this Confederation shall be the 'United States of America'".[41] The final version of the Articles sent to the states for ratification in late 1777 contains the sentence "The Stile of this Confederacy shall be 'The United States of America'".[42] In June 1776, Thomas Jefferson wrote the phrase "UNITED STATES OF AMERICA" in all capitalized letters in the headline of his "original Rough draught" of the Declaration of Independence.[41] This draft of the document did not surface until June 21, 1776, and it is unclear whether it was written before or after Dickinson used the term in his June 17 draft of the Articles of Confederation.[41]
+
+ The short form "United States" is also standard. Other common forms are the "U.S.," the "USA," and "America." Colloquial names are the "U.S. of A." and, internationally, the "States." "Columbia," a name popular in poetry and songs of the late 18th century, derives its origin from Christopher Columbus; it appears in the name "District of Columbia." Many landmarks and institutions in the Western Hemisphere bear his name, including the country of Colombia.[43]
+
+ The phrase "United States" was originally plural, a description of a collection of independent states—e.g., "the United States are"—including in the Thirteenth Amendment to the United States Constitution, ratified in 1865.[44] The singular form—e.g., "the United States is"—became popular after the end of the Civil War. The singular form is now standard; the plural form is retained in the idiom "these United States." The difference is more significant than usage; it is a difference between a collection of states and a unit.[45]
+
+ A citizen of the United States is an "American." "United States," "American" and "U.S." refer to the country adjectivally ("American values," "U.S. forces"). In English, the word "American" rarely refers to topics or subjects not directly connected with the United States.[46]
+
+ It has been generally accepted that the first inhabitants of North America migrated from Siberia by way of the Bering land bridge and arrived at least 12,000 years ago; however, increasing evidence suggests an even earlier arrival.[19][47][48] After crossing the land bridge, the Paleo-Indians moved southward along the Pacific coast[49] and through an interior ice-free corridor.[50] The Clovis culture, which appeared around 11,000 BC, was initially believed to represent the first wave of human settlement of the Americas.[51][52] It is likely these represent the first of three major waves of migration into North America.[53]
+
+ Over time, indigenous cultures in North America grew increasingly complex, and some, such as the pre-Columbian Mississippian culture in the southeast, developed advanced agriculture, grand architecture, and state-level societies.[54] The Mississippian culture flourished in the south from 800 to 1600 AD, extending from the Mexican border down through Florida.[55] Its city-state Cahokia is the largest, most complex pre-Columbian archaeological site in the modern-day United States.[56] In the Four Corners region, Ancestral Puebloan culture developed from centuries of agricultural experimentation.[57]
+
+ Three UNESCO World Heritage Sites in the United States are credited to the Pueblos: Mesa Verde National Park, Chaco Culture National Historical Park, and Taos Pueblo.[58][59] The earthworks constructed by Native Americans of the Poverty Point culture have also been designated a UNESCO World Heritage site. In the southern Great Lakes region, the Iroquois Confederacy was established at some point between the twelfth and fifteenth centuries.[60] Most prominent along the Atlantic coast were the Algonquian tribes, who practiced hunting and trapping, along with limited cultivation.
+
+ With the progress of European colonization in the territories of the contemporary United States, the Native Americans were often conquered and displaced.[61] The native population of America declined after European arrival for various reasons,[62][63] primarily diseases such as smallpox and measles.[64][65]
+
+ Estimating the native population of North America at the time of European contact is difficult.[66][67] Douglas H. Ubelaker of the Smithsonian Institution estimated that there was a population of 92,916 in the south Atlantic states and a population of 473,616 in the Gulf states,[68] but most academics regard this figure as too low.[66] Anthropologist Henry F. Dobyns believed the populations were much higher, suggesting 1,100,000 along the shores of the Gulf of Mexico, 2,211,000 people living between Florida and Massachusetts, 5,250,000 in the Mississippi Valley and tributaries, and 697,000 people in the Florida peninsula.[66][67]
+
+ In the early days of colonization, many European settlers were subject to food shortages, disease, and attacks from Native Americans. Native Americans were also often at war with neighboring tribes and allied with Europeans in their colonial wars. In many cases, however, natives and settlers came to depend on each other. Settlers traded for food and animal pelts; natives for guns, ammunition and other European goods.[69] Natives taught many settlers to cultivate corn, beans, and squash. European missionaries and others felt it was important to "civilize" the Native Americans and urged them to adopt European agricultural techniques and lifestyles.[70][71]
+
+ With the advancement of European colonization in North America, the Native Americans were often conquered and displaced.[72] The first Europeans to arrive in the contiguous United States were Spanish conquistadors such as Juan Ponce de León, who made his first visit to Florida in 1513. Even earlier, Christopher Columbus landed in Puerto Rico on his 1493 voyage. The Spanish set up the first settlements in Florida and New Mexico such as Saint Augustine[73] and Santa Fe. The French established their own as well along the Mississippi River. Successful English settlement on the eastern coast of North America began with the Virginia Colony in 1607 at Jamestown and with the Pilgrims' Plymouth Colony in 1620. Many settlers were dissenting Christian groups who came seeking religious freedom. The continent's first elected legislative assembly, Virginia's House of Burgesses, was created in 1619. The Mayflower Compact, signed by the Pilgrims before disembarking, and the Fundamental Orders of Connecticut, established precedents for the pattern of representative self-government and constitutionalism that would develop throughout the American colonies.[74][75]
+
+ Most settlers in every colony were small farmers, though other industries were formed. Cash crops included tobacco, rice, and wheat. Extraction industries grew up in furs, fishing and lumber. Manufacturers produced rum and ships, and by the late colonial period, Americans were producing one-seventh of the world's iron supply.[76] Cities eventually dotted the coast to support local economies and serve as trade hubs. English colonists were supplemented by waves of Scotch-Irish immigrants and other groups. As coastal land grew more expensive, freed indentured servants claimed lands further west.[77]
+
+ A large-scale slave trade with English privateers began.[78] Because of less disease and better food and treatment, the life expectancy of slaves was much higher in North America than further south, leading to a rapid increase in the numbers of slaves.[79][80] Colonial society was largely divided over the religious and moral implications of slavery, and colonies passed acts for and against the practice.[81][82] But by the turn of the 18th century, African slaves were replacing indentured servants for cash crop labor, especially in the South.[83]
+
+ With the establishment of the Province of Georgia in 1732, the 13 colonies that would become the United States of America were administered by the British as overseas dependencies.[84] All nonetheless had local governments with elections open to most free men.[85] With extremely high birth rates, low death rates, and steady settlement, the colonial population grew rapidly. Relatively small Native American populations were eclipsed.[86] The Christian revivalist movement of the 1730s and 1740s known as the Great Awakening fueled interest both in religion and in religious liberty.[87]
+
+ During the Seven Years' War (known in the United States as the French and Indian War), British forces seized Canada from the French, but the francophone population remained politically isolated from the southern colonies. Excluding the Native Americans, who were being conquered and displaced, the 13 British colonies had a population of over 2.1 million in 1770, about a third that of Britain. Despite continuing, new arrivals, the rate of natural increase was such that by the 1770s only a small minority of Americans had been born overseas.[88] The colonies' distance from Britain had allowed the development of self-government, but their unprecedented success motivated monarchs to periodically seek to reassert royal authority.[89]
+
+ In 1774, the Spanish Navy ship Santiago, under Juan Pérez, entered and anchored in an inlet of Nootka Sound, Vancouver Island, in present-day British Columbia. Although the Spanish did not land, natives paddled to the ship to trade furs for abalone shells from California.[90] At the time, the Spanish were able to monopolize the trade between Asia and North America, granting limited licenses to the Portuguese. When the Russians began establishing a growing fur trading system in Alaska, the Spanish began to challenge the Russians, with Pérez's voyage being the first of many to the Pacific Northwest.[91][j]
+
+ During his third and final voyage, Captain James Cook became the first European to begin formal contact with Hawaii.[93] Captain Cook's last voyage included sailing along the coast of North America and Alaska searching for a Northwest Passage for approximately nine months.[94]
+
+ The American Revolutionary War was the first successful colonial war of independence against a European power. Americans had developed an ideology of "republicanism" asserting that government rested on the will of the people as expressed in their local legislatures. They demanded their rights as Englishmen and "no taxation without representation". The British insisted on administering the empire through Parliament, and the conflict escalated into war.[95]
+
+ The Second Continental Congress unanimously adopted the Declaration of Independence, which asserted that Great Britain was not protecting Americans' unalienable rights. July 4 is celebrated annually as Independence Day.[96] In 1777, the Articles of Confederation established a decentralized government that operated until 1789.[96]
+
+ Following the decisive Franco-American victory at Yorktown in 1781,[97] Britain signed the peace treaty of 1783, and American sovereignty was internationally recognized and the country was granted all lands east of the Mississippi River. Nationalists led the Philadelphia Convention of 1787 in writing the United States Constitution, ratified in state conventions in 1788. The federal government was reorganized into three branches, on the principle of creating salutary checks and balances, in 1789. George Washington, who had led the Continental Army to victory, was the first president elected under the new constitution. The Bill of Rights, forbidding federal restriction of personal freedoms and guaranteeing a range of legal protections, was adopted in 1791.[98]
+
+ Although the federal government criminalized the international slave trade in 1808, after 1820, cultivation of the highly profitable cotton crop exploded in the Deep South, and along with it, the slave population.[99][100][101] The Second Great Awakening, especially 1800–1840, converted millions to evangelical Protestantism. In the North, it energized multiple social reform movements, including abolitionism;[102] in the South, Methodists and Baptists proselytized among slave populations.[103]
+
+ Americans' eagerness to expand westward prompted a long series of American Indian Wars.[104] The Louisiana Purchase of French-claimed territory in 1803 almost doubled the nation's area.[105] The War of 1812, declared against Britain over various grievances and fought to a draw, strengthened U.S. nationalism.[106] A series of military incursions into Florida led Spain to cede it and other Gulf Coast territory in 1819.[107] The expansion was aided by steam power, when steamboats began traveling along America's large water systems, many of which were connected by new canals, such as the Erie and the I&M; then, even faster railroads began their stretch across the nation's land.[108]
+
+ From 1820 to 1850, Jacksonian democracy began a set of reforms which included wider white male suffrage; it led to the rise of the Second Party System of Democrats and Whigs as the dominant parties from 1828 to 1854. The Trail of Tears in the 1830s exemplified the Indian removal policy that forcibly resettled Indians into the west on Indian reservations. The U.S. annexed the Republic of Texas in 1845 during a period of expansionist Manifest destiny.[109] The 1846 Oregon Treaty with Britain led to U.S. control of the present-day American Northwest.[110] Victory in the Mexican–American War resulted in the 1848 Mexican Cession of California and much of the present-day American Southwest.[111]
+ The California Gold Rush of 1848–49 spurred migration to the Pacific coast, which led to the California Genocide[112][113][114][115] and the creation of additional western states.[116] After the Civil War, new transcontinental railways made relocation easier for settlers, expanded internal trade and increased conflicts with Native Americans.[117] In 1869, a new Peace Policy nominally promised to protect Native Americans from abuses, avoid further war, and secure their eventual U.S. citizenship. Nonetheless, large-scale conflicts continued throughout the West into the 1900s.
+
+ Irreconcilable sectional conflict regarding the slavery of Africans and African Americans ultimately led to the American Civil War.[118] Initially, states entering the Union had alternated between slave and free states, keeping a sectional balance in the Senate, while free states outstripped slave states in population and in the House of Representatives. But with additional western territory and more free-soil states, tensions between slave and free states mounted with arguments over federalism and disposition of the territories, as well as whether to expand or restrict slavery.[119]
+
+ With the 1860 election of Republican Abraham Lincoln, conventions in thirteen slave states ultimately declared secession and formed the Confederate States of America (the "South" or the "Confederacy"), while the federal government (the "Union") maintained that secession was illegal.[119] In order to bring about this secession, military action was initiated by the secessionists, and the Union responded in kind. The ensuing war would become the deadliest military conflict in American history, resulting in the deaths of approximately 618,000 soldiers as well as many civilians.[120] The Union initially simply fought to keep the country united. Nevertheless, as casualties mounted after 1863 and Lincoln delivered his Emancipation Proclamation, the main purpose of the war from the Union's viewpoint became the abolition of slavery. Indeed, when the Union ultimately won the war in April 1865, each of the states in the defeated South was required to ratify the Thirteenth Amendment, which prohibited slavery.
+
+ The government enacted three constitutional amendments in the years after the war: the aforementioned Thirteenth as well as the Fourteenth Amendment providing citizenship to the nearly four million African Americans who had been slaves,[121] and the Fifteenth Amendment ensuring in theory that African Americans had the right to vote. The war and its resolution led to a substantial increase in federal power[122] aimed at reintegrating and rebuilding the South while guaranteeing the rights of the newly freed slaves.
+
+ Reconstruction began in earnest following the war. While President Lincoln attempted to foster friendship and forgiveness between the Union and the former Confederacy, his assassination on April 14, 1865, drove a wedge between North and South again. Republicans in the federal government made it their goal to oversee the rebuilding of the South and to ensure the rights of African Americans. They persisted until the Compromise of 1877 when the Republicans agreed to cease protecting the rights of African Americans in the South in order for Democrats to concede the presidential election of 1876.
+
+ Southern white Democrats, calling themselves "Redeemers," took control of the South after the end of Reconstruction. From 1890 to 1910 the Redeemers established so-called Jim Crow laws, disenfranchising most blacks and some poor whites throughout the region. Blacks faced racial segregation, especially in the South.[123] They also occasionally experienced vigilante violence, including lynching.[124]
+
+ In the North, urbanization and an unprecedented influx of immigrants from Southern and Eastern Europe supplied a surplus of labor for the country's industrialization and transformed its culture.[126] National infrastructure including telegraph and transcontinental railroads spurred economic growth and greater settlement and development of the American Old West. The later invention of electric light and the telephone would also affect communication and urban life.[127]
+
+ The United States fought Indian Wars west of the Mississippi River from 1810 to at least 1890.[128] Most of these conflicts ended with the cession of Native American territory and their confinement to Indian reservations. This further expanded acreage under mechanical cultivation, increasing surpluses for international markets.[129] Mainland expansion also included the purchase of Alaska from Russia in 1867.[130] In 1893, pro-American elements in Hawaii overthrew the monarchy and formed the Republic of Hawaii, which the U.S. annexed in 1898. Puerto Rico, Guam, and the Philippines were ceded by Spain in the same year, following the Spanish–American War.[131] American Samoa was acquired by the United States in 1900 after the end of the Second Samoan Civil War.[132] The U.S. Virgin Islands were purchased from Denmark in 1917.[133]
+
+ Rapid economic development during the late 19th and early 20th centuries fostered the rise of many prominent industrialists. Tycoons like Cornelius Vanderbilt, John D. Rockefeller, and Andrew Carnegie led the nation's progress in railroad, petroleum, and steel industries. Banking became a major part of the economy, with J. P. Morgan playing a notable role. The American economy boomed, becoming the world's largest, and the United States achieved great power status.[134] These dramatic changes were accompanied by social unrest and the rise of populist, socialist, and anarchist movements.[135] This period eventually ended with the advent of the Progressive Era, which saw significant reforms including women's suffrage, alcohol prohibition, regulation of consumer goods, greater antitrust measures to ensure competition and attention to worker conditions.[136][137][138]
+
+ The United States remained neutral from the outbreak of World War I in 1914 until 1917, when it joined the war as an "associated power," alongside the formal Allies of World War I, helping to turn the tide against the Central Powers. In 1919, President Woodrow Wilson took a leading diplomatic role at the Paris Peace Conference and advocated strongly for the U.S. to join the League of Nations. However, the Senate refused to approve this and did not ratify the Treaty of Versailles that established the League of Nations.[139]
+
+ In 1920, the women's rights movement won passage of a constitutional amendment granting women's suffrage.[140] The 1920s and 1930s saw the rise of radio for mass communication and the invention of early television.[141] The prosperity of the Roaring Twenties ended with the Wall Street Crash of 1929 and the onset of the Great Depression. After his election as president in 1932, Franklin D. Roosevelt responded with the New Deal.[142] The Great Migration of millions of African Americans out of the American South began before World War I and extended through the 1960s;[143] whereas the Dust Bowl of the mid-1930s impoverished many farming communities and spurred a new wave of western migration.[144]
+
+ At first effectively neutral during World War II, the United States began supplying materiel to the Allies in March 1941 through the Lend-Lease program. On December 7, 1941, the Empire of Japan launched a surprise attack on Pearl Harbor, prompting the United States to join the Allies against the Axis powers.[145] Although Japan attacked the United States first, the U.S. nonetheless pursued a "Europe first" defense policy.[146] The United States thus left its vast Asian colony, the Philippines, isolated and fighting a losing struggle against Japanese invasion and occupation, as military resources were devoted to the European theater. During the war, the United States was referred to as one of the "Four Policemen"[147] of Allies power who met to plan the postwar world, along with Britain, the Soviet Union and China.[148][149] Although the nation lost around 400,000 military personnel,[150] it emerged relatively undamaged from the war with even greater economic and military influence.[151]
+
+ The United States played a leading role in the Bretton Woods and Yalta conferences with the United Kingdom, the Soviet Union, and other Allies, which signed agreements on new international financial institutions and Europe's postwar reorganization. As an Allied victory was won in Europe, a 1945 international conference held in San Francisco produced the United Nations Charter, which became active after the war.[152] The United States and Japan then fought each other in the largest naval battle in history, the Battle of Leyte Gulf.[153][154] The United States eventually developed the first nuclear weapons and used them on Japan in the cities of Hiroshima and Nagasaki; the Japanese surrendered on September 2, ending World War II.[155][156]
+
+ After World War II, the United States and the Soviet Union competed for power, influence, and prestige during what became known as the Cold War, driven by an ideological divide between capitalism and communism.[157] They dominated the military affairs of Europe, with the U.S. and its NATO allies on one side and the USSR and its Warsaw Pact allies on the other. The U.S. developed a policy of containment towards the expansion of communist influence. While the U.S. and Soviet Union engaged in proxy wars and developed powerful nuclear arsenals, the two countries avoided direct military conflict.[citation needed]
+
+ The United States often opposed Third World movements that it viewed as Soviet-sponsored, and occasionally pursued direct action for regime change against left-wing governments, even supporting right-wing authoritarian governments at times.[158] American troops fought communist Chinese and North Korean forces in the Korean War of 1950–53.[159] The Soviet Union's 1957 launch of the first artificial satellite and its 1961 launch of the first manned spaceflight initiated a "Space Race" in which the United States became the first nation to land a man on the moon in 1969.[159] A proxy war in Southeast Asia eventually evolved into full American participation, as the Vietnam War.[citation needed]
+
+ At home, the U.S. experienced sustained economic expansion and a rapid growth of its population and middle class. Construction of an Interstate Highway System transformed the nation's infrastructure over the following decades. Millions moved from farms and inner cities to large suburban housing developments.[160][161] In 1959 Hawaii became the 50th and last U.S. state added to the country.[162] The growing Civil Rights Movement used nonviolence to confront segregation and discrimination, with Martin Luther King Jr. becoming a prominent leader and figurehead. A combination of court decisions and legislation, culminating in the Civil Rights Act of 1968, sought to end racial discrimination.[163][164][165] Meanwhile, a counterculture movement grew which was fueled by opposition to the Vietnam war, black nationalism, and the sexual revolution.
+
+ The launch of a "War on Poverty" expanded entitlements and welfare spending, including the creation of Medicare and Medicaid, two programs that provide health coverage to the elderly and poor, respectively, and the means-tested Food Stamp Program and Aid to Families with Dependent Children.[166]
+
+ The 1970s and early 1980s saw the onset of stagflation. After his election in 1980, President Ronald Reagan responded to economic stagnation with free-market oriented reforms. Following the collapse of détente, he abandoned "containment" and initiated the more aggressive "rollback" strategy towards the USSR.[167][168][169][170][171] After a surge in female labor participation over the previous decade, by 1985 the majority of women aged 16 and over were employed.[172]
+
+ The late 1980s brought a "thaw" in relations with the USSR, and its collapse in 1991 finally ended the Cold War.[173][174][175][176] This brought about unipolarity[177] with the U.S. unchallenged as the world's dominant superpower. The concept of Pax Americana, which had appeared in the post-World War II period, gained wide popularity as a term for the post-Cold War new world order.[citation needed]
+
+ After the Cold War, conflict in the Middle East triggered a crisis in 1990, when Iraq under Saddam Hussein invaded and attempted to annex Kuwait, an ally of the United States. Fearing that the instability would spread to other regions, President George H. W. Bush launched Operation Desert Shield, a defensive force buildup in Saudi Arabia, and Operation Desert Storm, the combat phase of what became known as the Gulf War. Waged by coalition forces from 34 nations led by the United States, the war ended in the expulsion of Iraqi forces from Kuwait and the restoration of the monarchy.[178]
+
+ Originating within U.S. military defense networks, the Internet spread to international academic platforms and then to the public in the 1990s, greatly affecting the global economy, society, and culture.[179] Due to the dot-com boom, stable monetary policy, and reduced social welfare spending, the 1990s saw the longest economic expansion in modern U.S. history.[180] Beginning in 1994, the U.S. entered into the North American Free Trade Agreement (NAFTA), prompting trade among the U.S., Canada, and Mexico to soar.[181]
+
+ On September 11, 2001, Al-Qaeda terrorists struck the World Trade Center in New York City and the Pentagon near Washington, D.C., killing nearly 3,000 people.[182] In response, the United States launched the War on Terror, which included a war in Afghanistan and the 2003–11 Iraq War.[183][184]
+
+ Government policy designed to promote affordable housing,[185] widespread failures in corporate and regulatory governance,[186] and historically low interest rates set by the Federal Reserve[187] led to the mid-2000s housing bubble, which culminated with the 2008 financial crisis, the nation's largest economic contraction since the Great Depression.[188] Barack Obama, the first African-American[189] and multiracial[190] president, was elected in 2008 amid the crisis,[191] and subsequently passed stimulus measures and the Dodd–Frank Act in an attempt to mitigate its negative effects and ensure there would not be a repeat of the crisis. In 2010, the Obama administration passed the Affordable Care Act, which made the most sweeping reforms to the nation's healthcare system in nearly five decades, including mandates, subsidies and insurance exchanges.[citation needed]
+
+ American forces in Iraq were withdrawn in large numbers in 2009 and 2010, and the war in the region was declared formally over in December 2011.[192] Months earlier, Operation Neptune Spear had led to the death of Al-Qaeda's leader, Osama bin Laden, in Pakistan.[193] In the presidential election of 2016, Republican Donald Trump was elected as the 45th president of the United States. On January 20, 2020, the first case of COVID-19 in the United States was confirmed.[194] As of July 2020, the United States has over 4 million COVID-19 cases and over 145,000 deaths.[195] The United States is, by far, the country with the most cases of COVID-19 since April 11, 2020.[196]
+
+ The 48 contiguous states and the District of Columbia occupy a combined area of 3,119,884.69 square miles (8,080,464.3 km2). Of this area, 2,959,064.44 square miles (7,663,941.7 km2) is contiguous land, composing 83.65% of total U.S. land area.[197][198] Hawaii, occupying an archipelago in the central Pacific, southwest of North America, is 10,931 square miles (28,311 km2) in area. The populated territories of Puerto Rico, American Samoa, Guam, Northern Mariana Islands, and U.S. Virgin Islands together cover 9,185 square miles (23,789 km2).[199] Measured by only land area, the United States is third in size behind Russia and China, just ahead of Canada.[200]
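+
+ (As a rough consistency check, not a figure from the cited sources: contiguous land of 2,959,064.44 square miles at 83.65% of total land area implies a total U.S. land area of about 2,959,064.44 / 0.8365 ≈ 3.54 million square miles.)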
+
+ The United States is the world's third- or fourth-largest nation by total area (land and water), ranking behind Russia and Canada and nearly equal to China. The ranking varies depending on how two territories disputed by China and India are counted, and how the total size of the United States is measured.[e][201][202]
+
+ The coastal plain of the Atlantic seaboard gives way further inland to deciduous forests and the rolling hills of the Piedmont.[203] The Appalachian Mountains divide the eastern seaboard from the Great Lakes and the grasslands of the Midwest.[204] The Mississippi–Missouri River, the world's fourth longest river system, runs mainly north–south through the heart of the country. The flat, fertile prairie of the Great Plains stretches to the west, interrupted by a highland region in the southeast.[204]
+
+ The Rocky Mountains, west of the Great Plains, extend north to south across the country, peaking around 14,000 feet (4,300 m) in Colorado.[205] Farther west are the rocky Great Basin and deserts such as the Chihuahua and Mojave.[206] The Sierra Nevada and Cascade mountain ranges run close to the Pacific coast, both ranges reaching altitudes higher than 14,000 feet (4,300 m). The lowest and highest points in the contiguous United States are in the state of California,[207] and only about 84 miles (135 km) apart.[208] At an elevation of 20,310 feet (6,190.5 m), Alaska's Denali is the highest peak in the country and in North America.[209] Active volcanoes are common throughout Alaska's Alexander and Aleutian Islands, and Hawaii consists of volcanic islands. The supervolcano underlying Yellowstone National Park in the Rockies is the continent's largest volcanic feature.[210]
+
+ The United States, with its large size and geographic variety, includes most climate types. To the east of the 100th meridian, the climate ranges from humid continental in the north to humid subtropical in the south.[211] The Great Plains west of the 100th meridian are semi-arid. Much of the Western mountains have an alpine climate. The climate is arid in the Great Basin, desert in the Southwest, Mediterranean in coastal California, and oceanic in coastal Oregon and Washington and southern Alaska. Most of Alaska is subarctic or polar. Hawaii and the southern tip of Florida are tropical, as are the country's territories in the Caribbean and the Pacific.[212] States bordering the Gulf of Mexico are prone to hurricanes, and most of the world's tornadoes occur in the country, mainly in Tornado Alley areas in the Midwest and South.[213] Overall, the United States has the world's most violent weather, experiencing more high-impact extreme weather incidents than any other country in the world.[214]
+
+ The U.S. ecology is megadiverse: about 17,000 species of vascular plants occur in the contiguous United States and Alaska, and more than 1,800 species of flowering plants are found in Hawaii, few of which occur on the mainland.[216] The United States is home to 428 mammal species, 784 bird species, 311 reptile species, and 295 amphibian species,[217] as well as about 91,000 insect species.[218]
+
+ There are 62 national parks and hundreds of other federally managed parks, forests, and wilderness areas.[219] Altogether, the government owns about 28% of the country's land area,[220] mostly in the western states.[221] Most of this land is protected, though some is leased for oil and gas drilling, mining, logging, or cattle ranching, and about 0.86% is used for military purposes.[222][223]
+
+ Environmental issues include debates on oil and nuclear energy, dealing with air and water pollution, the economic costs of protecting wildlife, logging and deforestation,[224][225] and international responses to global warming.[226][227] The most prominent environmental agency is the Environmental Protection Agency (EPA), created by presidential order in 1970.[228] The idea of wilderness has shaped the management of public lands since 1964, with the Wilderness Act.[229] The Endangered Species Act of 1973 is intended to protect threatened and endangered species and their habitats, which are monitored by the United States Fish and Wildlife Service.[230]
+
+ The U.S. Census Bureau officially estimated the country's population to be 328,239,523 as of July 1, 2019.[231] In addition, the Census Bureau provides a continuously updated U.S. Population Clock that approximates the latest population of the 50 states and District of Columbia based on the Bureau's most recent demographic trends.[234] According to the clock, on May 23, 2020, the U.S. population exceeded 329 million residents, with a net gain of one person every 19 seconds, or about 4,547 people per day. The United States is the third most populous nation in the world, after China and India. In 2018 the median age of the United States population was 38.1 years.[235]
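+
+ (A quick plausibility check of the clock's arithmetic, sketched below in Python; the snippet is illustrative and not part of the source. Dividing the 86,400 seconds in a day by the 19-second gain interval reproduces the cited daily figure.)
+
+     # Hypothetical back-of-the-envelope check of the population clock figures above.
+     SECONDS_PER_DAY = 24 * 60 * 60   # 86,400 seconds in a day
+     GAIN_INTERVAL_S = 19             # one net new resident every 19 seconds
+     people_per_day = SECONDS_PER_DAY / GAIN_INTERVAL_S
+     print(round(people_per_day))     # 4547, matching the cited "about 4,547 people per day"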
+
+ In 2018, there were almost 90 million immigrants and U.S.-born children of immigrants (second-generation Americans) in the United States, accounting for 28% of the overall U.S. population.[236] The United States has a very diverse population; 37 ancestry groups have more than one million members.[237] German Americans are the largest ethnic group (more than 50 million)—followed by Irish Americans (circa 37 million), Mexican Americans (circa 31 million) and English Americans (circa 28 million).[238][239]
+
+ White Americans (mostly European ancestry) are the largest racial group at 73.1% of the population; African Americans are the nation's largest racial minority and third-largest ancestry group.[237] Asian Americans are the country's second-largest racial minority; the three largest Asian American ethnic groups are Chinese Americans, Filipino Americans, and Indian Americans.[237] The largest American community with European ancestry is German Americans, who make up more than 14% of the total population.[240] In 2010, the U.S. population included an estimated 5.2 million people with some American Indian or Alaska Native ancestry (2.9 million exclusively of such ancestry) and 1.2 million with some native Hawaiian or Pacific island ancestry (0.5 million exclusively).[241] The census counted more than 19 million people of "Some Other Race" who were "unable to identify with any" of its five official race categories in 2010, more than 18.5 million (97%) of whom were of Hispanic ethnicity.[241]
+
+ In 2017, out of the U.S. foreign-born population, some 45% (20.7 million) were naturalized citizens, 27% (12.3 million) were lawful permanent residents (including many eligible to become citizens), 6% (2.2 million) were temporary lawful residents, and 23% (10.5 million) were unauthorized immigrants.[242] Among current living immigrants to the U.S., the top five countries of birth are Mexico, China, India, the Philippines and El Salvador. Until 2017 and 2018, the United States had for decades led the world in refugee resettlement, admitting more refugees than the rest of the world combined.[243] From fiscal year 1980 until 2017, 55% of refugees came from Asia, 27% from Europe, 13% from Africa, and 4% from Latin America.[243]
+
+ A 2017 United Nations report projected that the U.S. would be one of nine countries in which world population growth through 2050 would be concentrated.[244] A 2020 U.S. Census Bureau report projected the population of the country could be anywhere between 320 million and 447 million by 2060, depending on the rate of in-migration; in all projected scenarios, a lower fertility rate and increases in life expectancy would result in an aging population.[245] The United States has an annual birth rate of 13 per 1,000, which is five births per 1,000 below the world average.[246] Its population growth rate is positive at 0.7%, higher than that of many developed nations.[247]
+
+ About 82% of Americans live in urban areas (including suburbs);[202] about half of those reside in cities with populations over 50,000.[248] In 2008, 273 incorporated municipalities had populations over 100,000, nine cities had more than one million residents, and four cities had over two million (namely New York, Los Angeles, Chicago, and Houston).[249] Estimates for the year 2018 show that 53 metropolitan areas have populations greater than one million. Many metros in the South, Southwest and West grew significantly between 2010 and 2018. The Dallas and Houston metros increased by more than a million people, while the Washington, D.C., Miami, Atlanta, and Phoenix metros all grew by more than 500,000 people.
+
+ English (specifically, American English) is the de facto national language of the United States. Although there is no official language at the federal level, some laws, such as U.S. naturalization requirements, standardize English. In 2010, about 230 million people, or 80% of the population aged five years and older, spoke only English at home. Spanish, spoken at home by 12% of the population, is the second most common language and the most widely taught second language.[250][251]
+
+ Both Hawaiian and English are official languages in Hawaii.[252] In addition to English, Alaska recognizes twenty official Native languages,[253][k] and South Dakota recognizes Sioux.[254] While neither has an official language, New Mexico has laws providing for the use of both English and Spanish, as Louisiana does for English and French.[255] Other states, such as California, mandate the publication of Spanish versions of certain government documents including court forms.[256]
+
+ Several insular territories grant official recognition to their native languages, along with English: Samoan[257] is officially recognized by American Samoa and Chamorro[258] is an official language of Guam. Both Carolinian and Chamorro have official recognition in the Northern Mariana Islands.[259]
+ Spanish is an official language of Puerto Rico and is more widely spoken than English there.[260]
+
+ The most widely taught foreign languages in the United States, in terms of enrollment numbers from kindergarten through university undergraduate education, are Spanish (around 7.2 million students), French (1.5 million), and German (500,000). Other commonly taught languages include Latin, Japanese, ASL, Italian, and Chinese.[261][262] 18% of all Americans claim to speak both English and another language.[263]
+
+ Religion in the United States (2017)[266]
+
+ The First Amendment of the U.S. Constitution guarantees the free exercise of religion and forbids Congress from passing laws respecting its establishment.
+
+ In a 2013 survey, 56% of Americans said religion played a "very important role in their lives," a far higher figure than that of any other Western nation.[267] In a 2009 Gallup poll, 42% of Americans said they attended church weekly or almost weekly; the figures ranged from a low of 23% in Vermont to a high of 63% in Mississippi.[268]
+
+ In a 2014 survey, 70.6% of adults in the United States identified themselves as Christians;[269] Protestants accounted for 46.5%, while Roman Catholics, at 20.8%, formed the largest single Christian group.[270] In 2014, 5.9% of the U.S. adult population claimed a non-Christian religion.[271] These include Judaism (1.9%), Islam (0.9%), Hinduism (0.7%), and Buddhism (0.7%).[271] The survey also reported that 22.8% of Americans described themselves as agnostic, atheist or simply having no religion—up from 8.2% in 1990.[270][272][273] There are also Unitarian Universalist, Scientologist, Baha'i, Sikh, Jain, Shinto, Zoroastrian, Confucian, Satanist, Taoist, Druid, Native American, Afro-American, traditional African, Wiccan, Gnostic, humanist and deist communities.[274][275]
+
+ Protestantism is the largest Christian religious grouping in the United States, accounting for almost half of all Americans. Baptists collectively form the largest branch of Protestantism at 15.4%,[276] and the Southern Baptist Convention is the largest individual Protestant denomination at 5.3% of the U.S. population.[276] Apart from Baptists, other Protestant categories include nondenominational Protestants, Methodists, Pentecostals, unspecified Protestants, Lutherans, Presbyterians, Congregationalists, other Reformed, Episcopalians/Anglicans, Quakers, Adventists, Holiness, Christian fundamentalists, Anabaptists, Pietists, and multiple others.[276]
+
+ As with other Western countries, the U.S. is becoming less religious. Irreligion is growing rapidly among Americans under 30.[277] Polls show that overall American confidence in organized religion has been declining since the mid to late 1980s,[278] and that younger Americans, in particular, are becoming increasingly irreligious.[271][279] In a 2012 study, the Protestant share of the U.S. population had dropped to 48%, ending its majority status for the first time.[280][281] Americans with no religion have 1.7 children on average, compared to 2.2 among Christians, and the unaffiliated are less likely to marry: 37% do, compared to 52% of Christians.[282]
+
+ The Bible Belt is an informal term for a region in the Southern United States in which socially conservative evangelical Protestantism is a significant part of the culture and Christian church attendance across the denominations is generally higher than the nation's average. By contrast, religion plays the least important role in New England and in the Western United States.[268]
+
+ As of 2018[update], 52% of Americans age 15 and over were married, 6% were widowed, 10% were divorced, and 32% had never been married.[283] Women now work mostly outside the home and receive the majority of bachelor's degrees.[284]
+
+ The U.S. teenage pregnancy rate is 26.5 per 1,000 women. The rate has declined by 57% since 1991.[285] Abortion is legal throughout the country. Abortion rates, currently 241 per 1,000 live births and 15 per 1,000 women aged 15–44, are falling but remain higher than those of most Western nations.[286] In 2013, the average age at first birth was 26 and 41% of births were to unmarried women.[287]
+
+ The total fertility rate in 2016 was 1,820.5 births per 1,000 women, or about 1.8 births per woman.[288] Adoption in the United States is common and relatively easy from a legal point of view (compared to other Western countries).[289] As of 2001[update], with more than 127,000 adoptions, the U.S. accounted for nearly half of the total number of adoptions worldwide.[needs update][290] Same-sex marriage is legal nationwide, and it is legal for same-sex couples to adopt. Polygamy is illegal throughout the U.S.[291]
+
+ In 2019, the U.S. had the world's highest rate of children living in single-parent households.[292]
+
+ The United States had a life expectancy of 78.6 years at birth in 2017, which was the third year of declines in life expectancy following decades of continuous increase. The recent decline, primarily among the age group 25 to 64, is largely due to sharp increases in the drug overdose and suicide rates; the country has one of the highest suicide rates among wealthy countries.[293][294] Life expectancy was highest among Asians and Hispanics and lowest among blacks.[295][296] According to CDC and Census Bureau data, deaths from suicide, alcohol and drug overdoses hit record highs in 2017.[297]
+
+ Increasing obesity in the United States and health improvements elsewhere contributed to lowering the country's rank in life expectancy from 11th in the world in 1987 to 42nd in 2007, and as of 2017 the country had a lower life expectancy than Japan, Canada, Australia, the UK, and seven countries of western Europe.[298][299] Obesity rates have more than doubled in the last 30 years and are the highest in the industrialized world.[300][301] Approximately one-third of the adult population is obese and an additional third is overweight.[302] Obesity-related type 2 diabetes is considered epidemic by health care professionals.[303]
+
+ In 2010, coronary artery disease, lung cancer, stroke, chronic obstructive pulmonary diseases, and traffic accidents caused the most years of life lost in the U.S. Low back pain, depression, musculoskeletal disorders, neck pain, and anxiety caused the most years lost to disability. The most harmful risk factors were poor diet, tobacco smoking, obesity, high blood pressure, high blood sugar, physical inactivity, and alcohol use. Alzheimer's disease, drug abuse, kidney disease, cancer, and falls caused the most additional years of life lost over their age-adjusted 1990 per-capita rates.[304] U.S. teenage pregnancy and abortion rates are substantially higher than in other Western nations, especially among blacks and Hispanics.[305]
+
+ Health-care coverage in the United States is a combination of public and private efforts and is not universal. In 2017, 12.2% of the population did not carry health insurance.[306] The subject of uninsured and underinsured Americans is a major political issue.[307][308] Federal legislation, passed in early 2010, roughly halved the uninsured share of the population, though the bill and its ultimate effect are issues of controversy.[309][310] The U.S. health-care system far outspends any other nation, measured both in per capita spending and as percentage of GDP.[311] At the same time, the U.S. is a global leader in medical innovation.[312]
+
+ American public education is operated by state and local governments, regulated by the United States Department of Education through restrictions on federal grants. In most states, children are required to attend school from the age of six or seven (generally, kindergarten or first grade) until they turn 18 (generally bringing them through twelfth grade, the end of high school); some states allow students to leave school at 16 or 17.[313]
+
+ About 12% of children are enrolled in parochial or nonsectarian private schools. Just over 2% of children are homeschooled.[314] The U.S. spends more on education per student than any nation in the world, spending more than $11,000 per elementary student in 2010 and more than $12,000 per high school student.[315][needs update] Some 80% of U.S. college students attend public universities.[316]
+
+ Of Americans 25 and older, 84.6% graduated from high school, 52.6% attended some college, 27.2% earned a bachelor's degree, and 9.6% earned graduate degrees.[317] The basic literacy rate is approximately 99%.[202][318] The United Nations assigns the United States an Education Index of 0.97, tying it for 12th in the world.[319]
+
+ The United States has many private and public institutions of higher education. The majority of the world's top universities, as listed by various ranking organizations, are in the U.S.[320][321][322] There are also local community colleges with generally more open admission policies, shorter academic programs, and lower tuition.
+
+ In 2018, U21, a network of research-intensive universities, ranked the United States first in the world for breadth and quality of higher education, and 15th when GDP was a factor.[323] As for public expenditures on higher education, the U.S. trails some other OECD nations but spends more per student than the OECD average, and more than all nations in combined public and private spending.[315][324] As of 2018[update], student loan debt exceeded 1.5 trillion dollars.[325][326]
+
+ The United States is a federal republic of 50 states, a federal district, five territories and several uninhabited island possessions.[327][328][329] It is the world's oldest surviving federation and a representative democracy, "in which majority rule is tempered by minority rights protected by law."[330] For 2018, the U.S. ranked 25th on the Democracy Index.[331] On Transparency International's Corruption Perceptions Index, its public-sector score deteriorated from 76 in 2015 to 69 in 2019.[332]
+
+ In the American federalist system, citizens are usually subject to three levels of government: federal, state, and local. The local government's duties are commonly split between county and municipal governments. In almost all cases, executive and legislative officials are elected by a plurality vote of citizens by district.
+
+ The government is regulated by a system of checks and balances defined by the U.S. Constitution, which serves as the country's supreme legal document.[333] The original text of the Constitution establishes the structure and responsibilities of the federal government and its relationship with the individual states. Article One protects the right to the "great writ" of habeas corpus. The Constitution has been amended 27 times;[334] the first ten amendments, which make up the Bill of Rights, and the Fourteenth Amendment form the central basis of Americans' individual rights. All laws and governmental procedures are subject to judicial review and any law ruled by the courts to be in violation of the Constitution is voided. The principle of judicial review, not explicitly mentioned in the Constitution, was established by the Supreme Court in Marbury v. Madison (1803)[335] in a decision handed down by Chief Justice John Marshall.[336]
+
+ The federal government comprises three branches: the legislative (the bicameral Congress, made up of the Senate and the House of Representatives), the executive (headed by the president), and the judicial (the federal courts, headed by the Supreme Court), described in the paragraphs that follow.
+
+ The House of Representatives has 435 voting members, each representing a congressional district for a two-year term. House seats are apportioned among the states by population. Each state then draws single-member districts to conform with the census apportionment. The District of Columbia and the five major U.S. territories each have one member of Congress—these members are not allowed to vote.[341]
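+
+ (The paragraph above does not spell out the apportionment algorithm; since 1941 the 435 seats have been divided among the states by the method of equal proportions, also called the Huntington–Hill method. The Python sketch below uses made-up state populations to illustrate the idea: every state gets one guaranteed seat, and each remaining seat goes to the state with the highest priority value, population divided by the square root of n(n+1), where n is its current seat count.)
+
+     # Sketch of Huntington-Hill apportionment; the populations are hypothetical.
+     import heapq, math
+
+     def apportion(populations, seats):
+         result = {state: 1 for state in populations}  # one guaranteed seat per state
+         heap = [(-pop / math.sqrt(2), state) for state, pop in populations.items()]
+         heapq.heapify(heap)
+         for _ in range(seats - len(populations)):     # assign the remaining seats
+             _, state = heapq.heappop(heap)
+             result[state] += 1
+             n = result[state]
+             heapq.heappush(heap, (-populations[state] / math.sqrt(n * (n + 1)), state))
+         return result
+
+     print(apportion({"A": 8_000_000, "B": 3_000_000, "C": 1_000_000}, 10))
+     # -> {'A': 7, 'B': 2, 'C': 1}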
+
+ The Senate has 100 members with each state having two senators, elected at-large to six-year terms; one-third of Senate seats are up for election every two years. The District of Columbia and the five major U.S. territories do not have senators.[341] The president serves a four-year term and may be elected to the office no more than twice. The president is not elected by direct vote, but by an indirect electoral college system in which the determining votes are apportioned to the states and the District of Columbia.[342] The Supreme Court, led by the chief justice of the United States, has nine members, who serve for life.[343]
+
+ The state governments are structured in a roughly similar fashion, though Nebraska has a unicameral legislature.[344] The governor (chief executive) of each state is directly elected. Some state judges and cabinet officers are appointed by the governors of the respective states, while others are elected by popular vote.
+
+ The 50 states are the principal administrative divisions in the country. These are subdivided into counties or county equivalents and further divided into municipalities. The District of Columbia is a federal district that contains the capital of the United States, Washington, D.C.[345] The states and the District of Columbia choose the president of the United States. Each state has presidential electors equal to the number of their representatives and senators in Congress; the District of Columbia has three (because of the 23rd Amendment).[346] Territories of the United States such as Puerto Rico do not have presidential electors, and so people in those territories cannot vote for the president.[341]
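+
+ (The elector formula in the paragraph above is simple enough to state in code. The sketch below is illustrative rather than part of the source; the House seat counts are the post-2010-census figures for three example states.)
+
+     # Electors per state = House seats + 2 senators; D.C. gets 3 under the 23rd Amendment.
+     house_seats = {"California": 53, "Texas": 36, "Wyoming": 1}  # post-2010 apportionment
+     for state, seats in house_seats.items():
+         print(state, seats + 2)       # California 55, Texas 38, Wyoming 3
+     print("District of Columbia", 3)
+     # Nationwide: 435 House seats + 100 senators + 3 for D.C. = 538 electors.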
+
+ The United States also observes tribal sovereignty of the American Indian nations to a limited degree, as it does with the states' sovereignty. American Indians are U.S. citizens and tribal lands are subject to the jurisdiction of the U.S. Congress and the federal courts. Like the states they have a great deal of autonomy, but also like the states, tribes are not allowed to make war, engage in their own foreign relations, or print and issue currency.[347]
+
+ Citizenship is granted at birth in all states, the District of Columbia, and all major U.S. territories except American Samoa.[348][349][m]
+
+ The United States has operated under a two-party system for most of its history.[352] For elective offices at most levels, state-administered primary elections choose the major party nominees for subsequent general elections. Since the general election of 1856, the major parties have been the Democratic Party, founded in 1824, and the Republican Party, founded in 1854. Since the Civil War, only one third-party presidential candidate—former president Theodore Roosevelt, running as a Progressive in 1912—has won as much as 20% of the popular vote. The president and vice president are elected by the Electoral College.[353]
+
+ In American political culture, the center-right Republican Party is considered "conservative" and the center-left Democratic Party is considered "liberal."[354][355] The states of the Northeast and West Coast and some of the Great Lakes states, known as "blue states," are relatively liberal. The "red states" of the South and parts of the Great Plains and Rocky Mountains are relatively conservative.
+
+ Republican Donald Trump, the winner of the 2016 presidential election, is serving as the 45th president of the United States.[356] Leadership in the Senate includes Republican vice president Mike Pence, Republican president pro tempore Chuck Grassley, Majority Leader Mitch McConnell, and Minority Leader Chuck Schumer.[357] Leadership in the House includes Speaker of the House Nancy Pelosi, Majority Leader Steny Hoyer, and Minority Leader Kevin McCarthy.[358]
+
+ In the 116th United States Congress, the House of Representatives is controlled by the Democratic Party and the Senate is controlled by the Republican Party, giving the U.S. a split Congress. The Senate consists of 53 Republicans and 45 Democrats with two Independents who caucus with the Democrats; the House consists of 233 Democrats, 196 Republicans, and 1 Libertarian.[359] Of state governors, there are 26 Republicans and 24 Democrats. Among the D.C. mayor and the five territorial governors, there are two Republicans, one Democrat, one New Progressive, and two Independents.[360]
+
+ The United States has an established structure of foreign relations. It is a permanent member of the United Nations Security Council. New York City is home to the United Nations Headquarters. Almost all countries have embassies in Washington, D.C., and many have consulates around the country. Likewise, nearly all nations host American diplomatic missions. However, Iran, North Korea, Bhutan, and the Republic of China (Taiwan) do not have formal diplomatic relations with the United States (although the U.S. still maintains unofficial relations with Bhutan and Taiwan).[361] It is a member of the G7,[362] G20, and OECD.
+
+ The United States has a "Special Relationship" with the United Kingdom[363] and strong ties with India, Canada,[364] Australia,[365] New Zealand,[366] the Philippines,[367] Japan,[368] South Korea,[369] Israel,[370] and several European Union countries, including France, Italy, Germany, Spain and Poland.[371] It works closely with fellow NATO members on military and security issues and with its neighbors through the Organization of American States and free trade agreements such as the trilateral North American Free Trade Agreement with Canada and Mexico. Colombia is traditionally considered by the United States as its most loyal ally in South America.[372][373]
+
+ The U.S. exercises full international defense authority and responsibility for Micronesia, the Marshall Islands and Palau through the Compact of Free Association.[374]
+
+ Taxation in the United States is levied at the federal, state, and local government levels. This includes taxes on income, payroll, property, sales, imports, estates, and gifts, as well as various fees. Taxation in the United States is based on citizenship, not residency.[375] Both non-resident citizens and Green Card holders living abroad are taxed on their income irrespective of where they live or where their income is earned. The United States is one of the few countries in the world to tax its citizens this way.[376]
+
+ In 2010 taxes collected by federal, state and municipal governments amounted to 24.8% of GDP.[377] Based on CBO estimates,[378] under 2013 tax law the top 1% will be paying the highest average tax rates since 1979, while other income groups will remain at historic lows.[379] For 2018, the effective tax rate for the wealthiest 400 households was 23%, compared to 24.2% for the bottom half of U.S. households.[380]
+
+ During fiscal year 2012, the federal government spent $3.54 trillion on a budget or cash basis, down $60 billion or 1.7% vs. fiscal year 2011 spending of $3.60 trillion. Major categories of fiscal year 2012 spending included: Medicare & Medicaid (23%), Social Security (22%), Defense Department (19%), non-defense discretionary (17%), other mandatory (13%) and interest (6%).[382]
+
+ The total national debt of the United States was $18.527 trillion (106% of GDP) in 2014.[383][n] The United States has the largest external debt in the world[387] and the 34th-largest government debt as a percentage of GDP in the world.[388]
+
+ The president is the commander-in-chief of the country's armed forces and appoints its leaders, the Secretary of Defense and the Joint Chiefs of Staff. The United States Department of Defense administers the armed forces, including the Army, Marine Corps, Navy, Air Force, and Space Force. The Coast Guard is run by the Department of Homeland Security in peacetime and by the Department of the Navy during times of war. In 2008, the armed forces had 1.4 million personnel on active duty. The Reserves and National Guard brought the total number of troops to 2.3 million. The Department of Defense also employed about 700,000 civilians, not including contractors.[389]
+
+ Military service is voluntary, though conscription may occur in wartime through the Selective Service System.[390] American forces can be rapidly deployed by the Air Force's large fleet of transport aircraft, the Navy's 11 active aircraft carriers, and Marine expeditionary units at sea with the Navy's Atlantic and Pacific fleets. The military operates 865 bases and facilities abroad,[391] and maintains deployments greater than 100 active duty personnel in 25 foreign countries.[392]
+
+ The military budget of the United States in 2011 was more than $700 billion, 41% of global military spending. At 4.7% of GDP, the rate was the second-highest among the top 15 military spenders, after Saudi Arabia.[393] Defense spending plays a major role in science and technology investment, with roughly half of U.S. federal research and development funded by the Department of Defense.[394] Defense's share of the overall U.S. economy has generally declined in recent decades, from Cold War peaks of 14.2% of GDP in 1953 and 69.5% of federal outlays in 1954 to 4.7% of GDP and 18.8% of federal outlays in 2011.[395]
+
+ The country is one of the five recognized nuclear weapons states and possesses the second largest stockpile of nuclear weapons in the world.[396] More than 90% of the world's 14,000 nuclear weapons are owned by Russia and the United States.[397]
+
+ Law enforcement in the United States is primarily the responsibility of local police departments and sheriff's offices, with state police providing broader services. Federal agencies such as the Federal Bureau of Investigation (FBI) and the U.S. Marshals Service have specialized duties, including protecting civil rights, national security and enforcing U.S. federal courts' rulings and federal laws.[398] State courts conduct most criminal trials while federal courts handle certain designated crimes as well as certain appeals from the state criminal courts.
+
+ A cross-sectional analysis of the World Health Organization Mortality Database from 2010 showed that United States "homicide rates were 7.0 times higher than in other high-income countries, driven by a gun homicide rate that was 25.2 times higher."[399] In 2016, the US murder rate was 5.4 per 100,000.[400] Gun ownership rights, guaranteed by the Second Amendment, continue to be the subject of contention.
+
+ The United States has the highest documented incarceration rate and largest prison population in the world.[401] As of 2020, the Prison Policy Initiative reported that there were some 2.3 million people incarcerated.[402] The imprisonment rate for all prisoners sentenced to more than a year in state or federal facilities was 478 per 100,000 in 2013.[403] According to the Federal Bureau of Prisons, the majority of inmates held in federal prisons are convicted of drug offenses.[404] About 9% of prisoners are held in privatized prisons.[402] The practice of privately operated prisons began in the 1980s and has been a subject of contention.[405]
+
+ Capital punishment is sanctioned in the United States for certain federal and military crimes, and at the state level in 30 states.[406][407] No executions took place from 1967 to 1977, owing in part to a U.S. Supreme Court ruling striking down arbitrary imposition of the death penalty. Since the decision there have been more than 1,300 executions, a majority of these taking place in three states: Texas, Virginia, and Oklahoma.[408] Meanwhile, several states have either abolished or struck down death penalty laws. In 2019, the country had the sixth-highest number of executions in the world, following China, Iran, Saudi Arabia, Iraq, and Egypt.[409]
+
+ According to the International Monetary Fund, the U.S. GDP of $16.8 trillion constitutes 24% of the gross world product at market exchange rates and over 19% of the gross world product at purchasing power parity (PPP).[417] The United States is the largest importer of goods and second-largest exporter, though exports per capita are relatively low. In 2010, the total U.S. trade deficit was $635 billion.[418] Canada, China, Mexico, Japan, and Germany are its top trading partners.[419]
+
+ From 1983 to 2008, U.S. real compounded annual GDP growth was 3.3%, compared to a 2.3% weighted average for the rest of the G7.[420] The country ranks ninth in the world in nominal GDP per capita[421] and sixth in GDP per capita at PPP.[417] The U.S. dollar is the world's primary reserve currency.[422]
+
+ In 2009, the private sector was estimated to constitute 86.4% of the economy.[425] While its economy has reached a postindustrial level of development, the United States remains an industrial power.[426] Consumer spending comprised 68% of the U.S. economy in 2015.[427] In August 2010, the American labor force consisted of 154.1 million people, about 50% of the total population. With 21.2 million people, government is the leading field of employment. The largest private employment sector is health care and social assistance, with 16.4 million people. The United States has a smaller welfare state and redistributes less income through government action than most European nations.[428]
+
+ The United States is the only advanced economy that does not guarantee its workers paid vacation[429] and is one of a few countries in the world without paid family leave as a legal right.[430] While federal law does not require sick leave, it is a common benefit for government workers and full-time employees at corporations.[431] 74% of full-time American workers get paid sick leave, according to the Bureau of Labor Statistics, although only 24% of part-time workers get the same benefits.[431] In 2009, the United States had the third-highest workforce productivity per person in the world, behind Luxembourg and Norway. It was fourth in productivity per hour, behind those two countries and the Netherlands.[432]
+
+ The United States has been a leader in technological innovation since the late 19th century and scientific research since the mid-20th century. Methods for producing interchangeable parts were developed by the U.S. War Department by the Federal Armories during the first half of the 19th century. This technology, along with the establishment of a machine tool industry, enabled the U.S. to have large-scale manufacturing of sewing machines, bicycles, and other items in the late 19th century and became known as the American system of manufacturing. Factory electrification in the early 20th century and introduction of the assembly line and other labor-saving techniques created the system of mass production.[433] In the 21st century, approximately two-thirds of research and development funding comes from the private sector.[434] The United States leads the world in scientific research papers and impact factor.[435][436]
+
+ In 1876, Alexander Graham Bell was awarded the first U.S. patent for the telephone. Thomas Edison's research laboratory, one of the first of its kind, developed the phonograph, the first long-lasting light bulb, and the first viable movie camera.[437] The latter led to the emergence of the worldwide entertainment industry. In the early 20th century, the automobile companies of Ransom E. Olds and Henry Ford popularized the assembly line. The Wright brothers, in 1903, made the first sustained and controlled heavier-than-air powered flight.[438]
+
+ The rise of fascism and Nazism in the 1920s and 30s led many European scientists, including Albert Einstein, Enrico Fermi, and John von Neumann, to immigrate to the United States.[439] During World War II, the Manhattan Project developed nuclear weapons, ushering in the Atomic Age, while the Space Race produced rapid advances in rocketry, materials science, and aeronautics.[440][441]
+
+ The invention of the transistor in the 1950s, a key active component in practically all modern electronics, led to many technological developments and a significant expansion of the U.S. technology industry.[442] This, in turn, led to the establishment of many new technology companies and regions around the country such as Silicon Valley in California. Advancements by American microprocessor companies such as Advanced Micro Devices (AMD) and Intel, along with computer software and hardware companies including Adobe Systems, Apple Inc., IBM, Microsoft, and Sun Microsystems, created and popularized the personal computer. The ARPANET was developed in the 1960s to meet Defense Department requirements, and became the first of a series of networks which evolved into the Internet.[443]
+
+ Accounting for 4.24% of the global population, Americans collectively possess 29.4% of the world's total wealth, and Americans make up roughly half of the world's population of millionaires.[444] The Global Food Security Index ranked the U.S. number one for food affordability and overall food security in March 2013.[445] Americans on average have more than twice as much living space per dwelling and per person as European Union residents, and more than every EU nation.[446] For 2017 the United Nations Development Programme ranked the United States 13th among 189 countries in its Human Development Index and 25th among 151 countries in its inequality-adjusted HDI (IHDI).[447]
+
+ Wealth, like income and taxes, is highly concentrated; the richest 10% of the adult population possess 72% of the country's household wealth, while the bottom half claim only 2%.[448] According to a September 2017 report by the Federal Reserve, the top 1% controlled 38.6% of the country's wealth in 2016.[449] According to a 2018 study by the OECD, the United States has a larger percentage of low-income workers than almost any other developed nation. This is largely because at-risk workers get almost no government support and are further set back by a very weak collective bargaining system.[450] The top one percent of income-earners accounted for 52 percent of the income gains from 2009 to 2015, where income is defined as market income excluding government transfers.[451] In 2018, U.S. income inequality reached the highest level ever recorded by the Census Bureau.[452]
+
+ After years of stagnation, median household income reached a record high in 2016 following two consecutive years of record growth. However, income inequality remains at record highs, with the top fifth of earners taking home more than half of all income.[454] The rise in the share of total annual income received by the top one percent, which has more than doubled from nine percent in 1976 to 20 percent in 2011, has significantly affected income inequality,[455] leaving the United States with one of the widest income distributions among OECD nations.[456] The extent and relevance of income inequality is a matter of debate.[457][458][459]
+
+ Between June 2007 and November 2008, the global recession led to falling asset prices around the world. Assets owned by Americans lost about a quarter of their value.[460] Since peaking in the second quarter of 2007, household wealth was down $14 trillion, but has since increased $14 trillion over 2006 levels.[461] At the end of 2014, household debt amounted to $11.8 trillion,[462] down from $13.8 trillion at the end of 2008.[463]
+
+ There were about 578,424 sheltered and unsheltered homeless persons in the US in January 2014, with almost two-thirds staying in an emergency shelter or transitional housing program.[464] In 2011, 16.7 million children lived in food-insecure households, about 35% more than 2007 levels, though only 1.1% of U.S. children, or 845,000, saw reduced food intake or disrupted eating patterns at some point during the year, and most cases were not chronic.[465] As of June 2018[update], 40 million people, roughly 12.7% of the U.S. population, were living in poverty, with 18.5 million of those living in deep poverty (a family income below one-half of the poverty threshold) and over five million living in "Third World" conditions. In 2016, 13.3 million children were living in poverty, which made up 32.6% of the impoverished population.[466] In 2017, the U.S. state or territory with the lowest poverty rate was New Hampshire (7.6%), and the one with the highest was American Samoa (65%).[467][468][469]
+
+ Personal transportation is dominated by automobiles, which operate on a network of 4 million miles (6.4 million kilometers) of public roads.[471] The United States has the world's second-largest automobile market,[472] and has the highest rate of per-capita vehicle ownership in the world, with 765 vehicles per 1,000 Americans (1996).[473][needs update] In 2017, there were 255,009,283 non-two wheel motor vehicles, or about 910 vehicles per 1,000 people.[474]
+
+ The civil airline industry is entirely privately owned and has been largely deregulated since 1978, while most major airports are publicly owned.[475] The three largest airlines in the world by passengers carried are US-based; American Airlines is number one after its 2013 acquisition by US Airways.[476] Of the world's 50 busiest passenger airports, 16 are in the United States, including the busiest, Hartsfield–Jackson Atlanta International Airport.[477]
+
+ The United States energy market is about 29,000 terawatt hours per year.[478] In 2005, 40% of this energy came from petroleum, 23% from coal, and 22% from natural gas. The remainder was supplied by nuclear and renewable energy sources.[479]
+
+ Since 2007, the total greenhouse gas emissions by the United States are the second highest by country, exceeded only by China.[480] The United States has historically been the world's largest producer of greenhouse gases, and greenhouse gas emissions per capita remain high.[481]
+
+ The United States is home to many cultures and a wide variety of ethnic groups, traditions, and values.[483][484] Aside from the Native American, Native Hawaiian, and Native Alaskan populations, nearly all Americans or their ancestors settled or immigrated within the past five centuries.[485] Mainstream American culture is a Western culture largely derived from the traditions of European immigrants with influences from many other sources, such as traditions brought by slaves from Africa.[483][486] More recent immigration from Asia and especially Latin America has added to a cultural mix that has been described as both a homogenizing melting pot, and a heterogeneous salad bowl in which immigrants and their descendants retain distinctive cultural characteristics.[483]
+
+ Americans have traditionally been characterized by a strong work ethic, competitiveness, and individualism,[487] as well as a unifying belief in an "American creed" emphasizing liberty, equality, private property, democracy, rule of law, and a preference for limited government.[488] Americans are extremely charitable by global standards. According to a 2006 British study, Americans gave 1.67% of GDP to charity, more than any other nation studied.[489][490][491]
+
+ The American Dream, or the perception that Americans enjoy high social mobility, plays a key role in attracting immigrants.[492] Whether this perception is accurate has been a topic of debate.[493][494][495][496][420][497] While mainstream culture holds that the United States is a classless society,[498] scholars identify significant differences between the country's social classes, affecting socialization, language, and values.[499] While Americans tend to greatly value socioeconomic achievement, being ordinary or average is also generally seen as a positive attribute.[500]
+
+ In the 18th and early 19th centuries, American art and literature took most of its cues from Europe. Writers such as Washington Irving, Nathaniel Hawthorne, Edgar Allan Poe, and Henry David Thoreau established a distinctive American literary voice by the middle of the 19th century. Mark Twain and poet Walt Whitman were major figures in the century's second half; Emily Dickinson, virtually unknown during her lifetime, is now recognized as an essential American poet.[501] A work seen as capturing fundamental aspects of the national experience and character—such as Herman Melville's Moby-Dick (1851), Twain's The Adventures of Huckleberry Finn (1885), F. Scott Fitzgerald's The Great Gatsby (1925) and Harper Lee's To Kill a Mockingbird (1960)—may be dubbed the "Great American Novel."[502]
+
+ Twelve U.S. citizens have won the Nobel Prize in Literature, most recently Bob Dylan in 2016. William Faulkner, Ernest Hemingway and John Steinbeck are often named among the most influential writers of the 20th century.[503] Popular literary genres such as the Western and hardboiled crime fiction developed in the United States. The Beat Generation writers opened up new literary approaches, as have postmodernist authors such as John Barth, Thomas Pynchon, and Don DeLillo.[504]
+
+ The transcendentalists, led by Thoreau and Ralph Waldo Emerson, established the first major American philosophical movement. After the Civil War, Charles Sanders Peirce and then William James and John Dewey were leaders in the development of pragmatism. In the 20th century, the work of W. V. O. Quine and Richard Rorty, and later Noam Chomsky, brought analytic philosophy to the fore of American philosophical academia. John Rawls and Robert Nozick also led a revival of political philosophy.
+
+ In the visual arts, the Hudson River School was a mid-19th-century movement in the tradition of European naturalism. The 1913 Armory Show in New York City, an exhibition of European modernist art, shocked the public and transformed the U.S. art scene.[505] Georgia O'Keeffe, Marsden Hartley, and others experimented with new, individualistic styles. Major artistic movements such as the abstract expressionism of Jackson Pollock and Willem de Kooning and the pop art of Andy Warhol and Roy Lichtenstein developed largely in the United States. The tide of modernism and then postmodernism has brought fame to American architects such as Frank Lloyd Wright, Philip Johnson, and Frank Gehry.[506] Americans have long been important in the modern artistic medium of photography, with major photographers including Alfred Stieglitz, Edward Steichen, Edward Weston, and Ansel Adams.[507]
+
+ Mainstream American cuisine is similar to that in other Western countries. Wheat is the primary cereal grain, with about three-quarters of grain products made of wheat flour.[508] Many dishes use indigenous ingredients, such as turkey, venison, potatoes, sweet potatoes, corn, squash, and maple syrup, which were consumed by Native Americans and early European settlers.[509] These homegrown foods are part of a shared national menu on one of America's most popular holidays, Thanksgiving, when some Americans make traditional foods to celebrate the occasion.[510]
+
+ The American fast food industry, the world's largest,[511] pioneered the drive-through format in the 1940s.[512] Characteristic dishes such as apple pie, fried chicken, pizza, hamburgers, and hot dogs derive from the recipes of various immigrants. French fries, Mexican dishes such as burritos and tacos, and pasta dishes freely adapted from Italian sources are widely consumed.[513] Americans drink three times as much coffee as tea.[514] Marketing by U.S. industries is largely responsible for making orange juice and milk ubiquitous breakfast beverages.[515][516]
+
+ Although little known at the time, Charles Ives's work of the 1910s established him as the first major U.S. composer in the classical tradition, while experimentalists such as Henry Cowell and John Cage created a distinctive American approach to classical composition. Aaron Copland and George Gershwin developed a new synthesis of popular and classical music.
+
+ The rhythmic and lyrical styles of African-American music have deeply influenced American music at large, distinguishing it from European and African traditions. Elements from folk idioms such as the blues and what is now known as old-time music were adopted and transformed into popular genres with global audiences. Jazz was developed by innovators such as Louis Armstrong and Duke Ellington early in the 20th century. Country music developed in the 1920s, and rhythm and blues in the 1940s.[517]
+
+ Elvis Presley and Chuck Berry were among the mid-1950s pioneers of rock and roll. Rock bands such as Metallica, the Eagles, and Aerosmith are among the highest grossing in worldwide sales.[518][519][520] In the 1960s, Bob Dylan emerged from the folk revival to become one of America's most celebrated songwriters and James Brown led the development of funk.
+
+ More recent American creations include hip hop and house music. American pop stars such as Elvis Presley, Michael Jackson, and Madonna have become global celebrities,[517] as have contemporary musical artists such as Taylor Swift, Britney Spears, Katy Perry, Beyoncé, Jay-Z, Eminem, Kanye West, and Ariana Grande.[521]
+
+ Hollywood, a northern district of Los Angeles, California, is one of the leaders in motion picture production.[522] The world's first commercial motion picture exhibition was given in New York City in 1894, using Thomas Edison's Kinetoscope.[523] Since the early 20th century, the U.S. film industry has largely been based in and around Hollywood, although in the 21st century an increasing number of films are not made there, and film companies have been subject to the forces of globalization.[524]
+
+ Director D. W. Griffith, the top American filmmaker during the silent film period, was central to the development of film grammar, and producer/entrepreneur Walt Disney was a leader in both animated film and movie merchandising.[525] Directors such as John Ford redefined the image of the American Old West, and, like others such as John Huston, broadened the possibilities of cinema with location shooting. The industry enjoyed its golden years, in what is commonly referred to as the "Golden Age of Hollywood," from the early sound period until the early 1960s,[526] with screen actors such as John Wayne and Marilyn Monroe becoming iconic figures.[527][528] In the 1970s, "New Hollywood" or the "Hollywood Renaissance"[529] was defined by grittier films influenced by French and Italian realist pictures of the post-war period.[530] In more recent times, directors such as Steven Spielberg, George Lucas and James Cameron have gained renown for their blockbuster films, often characterized by high production costs and earnings.
+
+ Notable films topping the American Film Institute's AFI 100 list include Orson Welles's Citizen Kane (1941), which is frequently cited as the greatest film of all time,[531][532] Casablanca (1942), The Godfather (1972), Gone with the Wind (1939), Lawrence of Arabia (1962), The Wizard of Oz (1939), The Graduate (1967), On the Waterfront (1954), Schindler's List (1993), Singin' in the Rain (1952), It's a Wonderful Life (1946) and Sunset Boulevard (1950).[533] The Academy Awards, popularly known as the Oscars, have been held annually by the Academy of Motion Picture Arts and Sciences since 1929,[534] and the Golden Globe Awards have been held annually since January 1944.[535]
+
+ American football is by several measures the most popular spectator sport;[537] the National Football League (NFL) has the highest average attendance of any sports league in the world, and the Super Bowl is watched by tens of millions globally. Baseball has been regarded as the U.S. national sport since the late 19th century, with Major League Baseball (MLB) being the top league. Basketball and ice hockey are the country's next two leading professional team sports, with the top leagues being the National Basketball Association (NBA) and the National Hockey League (NHL). College football and basketball attract large audiences.[538] In soccer, the country hosted the 1994 FIFA World Cup; the men's national team has qualified for ten World Cups, and the women's team has won the FIFA Women's World Cup four times. Major League Soccer is the sport's highest league in the United States (featuring 23 American and three Canadian teams). The market for professional sports in the United States is roughly $69 billion, roughly 50% larger than that of all of Europe, the Middle East, and Africa combined.[539]
+
+ Eight Olympic Games have taken place in the United States. The 1904 Summer Olympics in St. Louis, Missouri were the first ever Olympic Games held outside of Europe.[540] As of 2017[update], the United States has won 2,522 medals at the Summer Olympic Games, more than any other country, and 305 in the Winter Olympic Games, the second most behind Norway.[541]
+ While most major U.S. sports such as baseball and American football have evolved out of European practices, basketball, volleyball, skateboarding, and snowboarding are American inventions, some of which have become popular worldwide. Lacrosse and surfing arose from Native American and Native Hawaiian activities that predate Western contact.[542] The most watched individual sports are golf and auto racing, particularly NASCAR.[543][544]
+
+ The four major broadcasters in the U.S. are the National Broadcasting Company (NBC), Columbia Broadcasting System (CBS), American Broadcasting Company (ABC), and Fox Broadcasting Company (FOX). The four major broadcast television networks are all commercial entities. Cable television offers hundreds of channels catering to a variety of niches.[545] Americans listen to radio programming, also largely commercial, on average just over two-and-a-half hours a day.[546]
+
+ By 1998, the number of U.S. commercial radio stations had grown to 4,793 AM stations and 5,662 FM stations. In addition, there are 1,460 public radio stations. Most of these stations are run by universities and public authorities for educational purposes and are financed by public or private funds, subscriptions, and corporate underwriting. Much public-radio broadcasting is supplied by NPR, which was incorporated in February 1970 under the Public Broadcasting Act of 1967; its television counterpart, PBS, was created by the same legislation. As of September 30, 2014[update], there were 15,433 licensed full-power radio stations in the U.S. according to the U.S. Federal Communications Commission (FCC).[547]
+
+ Well-known newspapers include The Wall Street Journal, The New York Times, and USA Today.[548] Although the cost of publishing has increased over the years, the price of newspapers has generally remained low, forcing newspapers to rely more on advertising revenue and on articles provided by a major wire service, such as the Associated Press or Reuters, for their national and world coverage. With very few exceptions, all the newspapers in the U.S. are privately owned, either by large chains such as Gannett or McClatchy, which own dozens or even hundreds of newspapers; by small chains that own a handful of papers; or in a situation that is increasingly rare, by individuals or families. Major cities often have "alternative weeklies" to complement the mainstream daily papers, such as New York City's The Village Voice or Los Angeles' LA Weekly. Major cities may also support a local business journal, trade papers relating to local industries, and papers for local ethnic and social groups. Aside from web portals and search engines, the most popular websites are Facebook, YouTube, Wikipedia, Yahoo!, eBay, Amazon, and Twitter.[549]
328
+
329
+ More than 800 publications are produced in Spanish, the second most commonly used language in the United States behind English.[550][551]
330
+
331
332
+
en/5884.html.txt ADDED
@@ -0,0 +1,332 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+ Coordinates: 40°N 100°W
4
+
5
+ The United States of America (USA), commonly known as the United States (U.S. or US) or America, is a country mostly located in central North America, between Canada and Mexico. It consists of 50 states, a federal district, five major self-governing territories, and various possessions.[i] At 3.8 million square miles (9.8 million km2), it is the world's third- or fourth-largest country by total area.[e] With a 2019 estimated population of over 328 million,[7] the U.S. is the third most populous country in the world. Americans are a racially and ethnically diverse population shaped by centuries of immigration. The capital is Washington, D.C., and the most populous city is New York City.
6
+
7
+ Paleo-Indians migrated from Siberia to the North American mainland at least 12,000 years ago,[19] and European colonization began in the 16th century. The United States emerged from the thirteen British colonies established along the East Coast. Numerous disputes between Great Britain and the colonies led to the American Revolutionary War (1775–1783), which ended in independence.[20] Beginning in the late 18th century, the United States vigorously expanded across North America, gradually acquiring new territories,[21] killing and displacing Native Americans, and admitting new states. By 1848, the United States spanned the continent.[21]
8
+ Slavery was legal in much of the United States until the second half of the 19th century, when the American Civil War led to its abolition.[22][23]
9
+
10
+ The Spanish–American War and World War I entrenched the U.S. as a world power, a status confirmed by the outcome of World War II. It was the first country to develop nuclear weapons and is the only country to have used them in warfare. During the Cold War, the United States and the Soviet Union competed in the Space Race, culminating with the 1969 Apollo 11 mission, the spaceflight that first landed humans on the Moon. The end of the Cold War and collapse of the Soviet Union in 1991 left the United States as the world's sole superpower.[24]
11
+
12
+ The United States is a federal republic and a representative democracy. It is a founding member of the United Nations, World Bank, International Monetary Fund, Organization of American States (OAS), NATO, and other international organizations. It is a permanent member of the United Nations Security Council.
13
+
14
+ A highly developed country, the United States is the world's largest economy and accounts for approximately a quarter of global gross domestic product (GDP).[25] The United States is the world's largest importer and the second-largest exporter of goods, by value.[26][27] Although its population is only 4.3% of the world total,[28] it holds 29.4% of the total wealth in the world, the largest share held by any country.[29] Despite income and wealth disparities, the United States continues to rank high in measures of socioeconomic performance, including average wage, median income, median wealth, human development, per capita GDP, and worker productivity.[30][31] It is the foremost military power in the world, making up more than a third of global military spending,[32] and is a leading political, cultural, and scientific force internationally.[33]
15
+
16
+ The first known use of the name "America" dates back to 1507, when it appeared on a world map created by the German cartographer Martin Waldseemüller. On this map, the name applied to South America in honor of the Italian explorer Amerigo Vespucci.[34] After returning from his expeditions, Vespucci first postulated that the West Indies did not represent Asia's eastern limit, as initially thought by Christopher Columbus, but instead were part of an entirely separate landmass thus far unknown to the Europeans.[35] In 1538, the Flemish cartographer Gerardus Mercator used the name "America" on his own world map, applying it to the entire Western Hemisphere.[36]
17
+
18
+ The first documentary evidence of the phrase "United States of America" dates from a January 2, 1776 letter written by Stephen Moylan, Esq., to Lt. Col. Joseph Reed, George Washington's aide-de-camp and Muster-Master General of the Continental Army. Moylan expressed his wish to go "with full and ample powers from the United States of America to Spain" to seek assistance in the revolutionary war effort.[37][38][39] The first known publication of the phrase "United States of America" was in an anonymous essay in The Virginia Gazette newspaper in Williamsburg, Virginia, on April 6, 1776.[40]
19
+
20
+ The second draft of the Articles of Confederation, prepared by John Dickinson and completed no later than June 17, 1776, declared "The name of this Confederation shall be the 'United States of America'".[41] The final version of the Articles sent to the states for ratification in late 1777 contains the sentence "The Stile of this Confederacy shall be 'The United States of America'".[42] In June 1776, Thomas Jefferson wrote the phrase "UNITED STATES OF AMERICA" in all capitalized letters in the headline of his "original Rough draught" of the Declaration of Independence.[41] This draft of the document did not surface until June 21, 1776, and it is unclear whether it was written before or after Dickinson used the term in his June 17 draft of the Articles of Confederation.[41]
21
+
22
+ The short form "United States" is also standard. Other common forms are the "U.S.," the "USA," and "America." Colloquial names are the "U.S. of A." and, internationally, the "States." "Columbia," a name popular in poetry and songs of the late 18th century, derives its origin from Christopher Columbus; it appears in the name "District of Columbia." Many landmarks and institutions in the Western Hemisphere bear his name, including the country of Colombia.[43]
23
+
24
+ The phrase "United States" was originally plural, a description of a collection of independent states—e.g., "the United States are"—including in the Thirteenth Amendment to the United States Constitution, ratified in 1865.[44] The singular form—e.g., "the United States is"—became popular after the end of the Civil War. The singular form is now standard; the plural form is retained in the idiom "these United States." The difference is more than stylistic; it marks the shift from conceiving of the country as a collection of states to conceiving of it as a single unit.[45]
25
+
26
+ A citizen of the United States is an "American." "United States," "American" and "U.S." refer to the country adjectivally ("American values," "U.S. forces"). In English, the word "American" rarely refers to topics or subjects not directly connected with the United States.[46]
27
+
28
+ It has been generally accepted that the first inhabitants of North America migrated from Siberia by way of the Bering land bridge and arrived at least 12,000 years ago; however, increasing evidence suggests an even earlier arrival.[19][47][48] After crossing the land bridge, the Paleo-Indians moved southward along the Pacific coast[49] and through an interior ice-free corridor.[50] The Clovis culture, which appeared around 11,000 BC, was initially believed to represent the first wave of human settlement of the Americas.[51][52] It is likely these represent the first of three major waves of migration into North America.[53]
29
+
30
+ Over time, indigenous cultures in North America grew increasingly complex, and some, such as the pre-Columbian Mississippian culture in the southeast, developed advanced agriculture, grand architecture, and state-level societies.[54] The Mississippian culture flourished in the south from 800 to 1600 AD, extending from the Mexican border down through Florida.[55] Its city-state, Cahokia, is the largest and most complex pre-Columbian archaeological site in the modern-day United States.[56] In the Four Corners region, Ancestral Puebloan culture developed from centuries of agricultural experimentation.[57]
31
+
32
+ Three UNESCO World Heritage Sites in the United States are credited to the Pueblos: Mesa Verde National Park, Chaco Culture National Historical Park, and Taos Pueblo.[58][59] The earthworks constructed by Native Americans of the Poverty Point culture have also been designated a UNESCO World Heritage site. In the southern Great Lakes region, the Iroquois Confederacy was established at some point between the twelfth and fifteenth centuries.[60] Most prominent along the Atlantic coast were the Algonquian tribes, who practiced hunting and trapping, along with limited cultivation.
33
+
34
+ With the progress of European colonization in the territories of the contemporary United States, the Native Americans were often conquered and displaced.[61] The native population of America declined after European arrival for various reasons,[62][63] primarily diseases such as smallpox and measles.[64][65]
35
+
36
+ Estimating the native population of North America at the time of European contact is difficult.[66][67] Douglas H. Ubelaker of the Smithsonian Institution estimated that there was a population of 92,916 in the south Atlantic states and a population of 473,616 in the Gulf states,[68] but most academics regard this figure as too low.[66] Anthropologist Henry F. Dobyns believed the populations were much higher, suggesting 1,100,000 along the shores of the Gulf of Mexico, 2,211,000 people living between Florida and Massachusetts, 5,250,000 in the Mississippi Valley and tributaries, and 697,000 people in the Florida peninsula.[66][67]
37
+
38
+ In the early days of colonization, many European settlers were subject to food shortages, disease, and attacks from Native Americans. Native Americans were also often at war with neighboring tribes and allied with Europeans in their colonial wars. In many cases, however, natives and settlers came to depend on each other. Settlers traded for food and animal pelts; natives for guns, ammunition and other European goods.[69] Natives taught many settlers to cultivate corn, beans, and squash. European missionaries and others felt it was important to "civilize" the Native Americans and urged them to adopt European agricultural techniques and lifestyles.[70][71]
39
+
40
+ The first Europeans to arrive in the contiguous United States were Spanish conquistadors such as Juan Ponce de León, who made his first visit to Florida in 1513. Even earlier, Christopher Columbus had landed in Puerto Rico on his 1493 voyage. The Spanish set up the first settlements in Florida and New Mexico, such as Saint Augustine[73] and Santa Fe. The French established their own along the Mississippi River. Successful English settlement on the eastern coast of North America began with the Virginia Colony in 1607 at Jamestown and with the Pilgrims' Plymouth Colony in 1620. Many settlers were dissenting Christian groups who came seeking religious freedom. The continent's first elected legislative assembly, Virginia's House of Burgesses, was created in 1619. The Mayflower Compact, signed by the Pilgrims before disembarking, and the Fundamental Orders of Connecticut established precedents for the pattern of representative self-government and constitutionalism that would develop throughout the American colonies.[74][75]
41
+
42
+ Most settlers in every colony were small farmers, though other industries were formed. Cash crops included tobacco, rice, and wheat. Extraction industries grew up in furs, fishing and lumber. Manufacturers produced rum and ships, and by the late colonial period, Americans were producing one-seventh of the world's iron supply.[76] Cities eventually dotted the coast to support local economies and serve as trade hubs. English colonists were supplemented by waves of Scotch-Irish immigrants and other groups. As coastal land grew more expensive, freed indentured servants claimed lands further west.[77]
43
+
44
+ A large-scale slave trade with English privateers began.[78] Because of less disease and better food and treatment, the life expectancy of slaves was much higher in North America than further south, leading to a rapid increase in the numbers of slaves.[79][80] Colonial society was largely divided over the religious and moral implications of slavery, and colonies passed acts for and against the practice.[81][82] But by the turn of the 18th century, African slaves were replacing indentured servants for cash crop labor, especially in the South.[83]
45
+
46
+ With the establishment of the Province of Georgia in 1732, the 13 colonies that would become the United States of America were administered by the British as overseas dependencies.[84] All nonetheless had local governments with elections open to most free men.[85] With extremely high birth rates, low death rates, and steady settlement, the colonial population grew rapidly. Relatively small Native American populations were eclipsed.[86] The Christian revivalist movement of the 1730s and 1740s known as the Great Awakening fueled interest both in religion and in religious liberty.[87]
47
+
48
+ During the Seven Years' War (known in the United States as the French and Indian War), British forces seized Canada from the French, but the francophone population remained politically isolated from the southern colonies. Excluding the Native Americans, who were being conquered and displaced, the 13 British colonies had a population of over 2.1 million in 1770, about a third that of Britain. Despite continuing new arrivals, the rate of natural increase was such that by the 1770s only a small minority of Americans had been born overseas.[88] The colonies' distance from Britain had allowed the development of self-government, but their unprecedented success motivated monarchs to periodically seek to reassert royal authority.[89]
49
+
50
+ In 1774, the Spanish Navy ship Santiago, under Juan Pérez, entered and anchored in an inlet of Nootka Sound, Vancouver Island, in present-day British Columbia. Although the Spanish did not land, natives paddled to the ship to trade furs for abalone shells from California.[90] At the time, the Spanish were able to monopolize the trade between Asia and North America, granting limited licenses to the Portuguese. When the Russians began establishing a growing fur trading system in Alaska, the Spanish began to challenge the Russians, with Pérez's voyage being the first of many to the Pacific Northwest.[91][j]
51
+
52
+ During his third and final voyage, Captain James Cook became the first European to begin formal contact with Hawaii.[93] Captain Cook's last voyage included sailing along the coast of North America and Alaska searching for a Northwest Passage for approximately nine months.[94]
53
+
54
+ The American Revolutionary War was the first successful colonial war of independence against a European power. Americans had developed an ideology of "republicanism" asserting that government rested on the will of the people as expressed in their local legislatures. They demanded their rights as Englishmen and "no taxation without representation". The British insisted on administering the empire through Parliament, and the conflict escalated into war.[95]
55
+
56
+ The Second Continental Congress unanimously adopted the Declaration of Independence, which asserted that Great Britain was not protecting Americans' unalienable rights. July 4 is celebrated annually as Independence Day.[96] In 1777, the Articles of Confederation established a decentralized government that operated until 1789.[96]
57
+
58
+ Following the decisive Franco-American victory at Yorktown in 1781,[97] Britain signed the peace treaty of 1783, and American sovereignty was internationally recognized and the country was granted all lands east of the Mississippi River. Nationalists led the Philadelphia Convention of 1787 in writing the United States Constitution, ratified in state conventions in 1788. The federal government was reorganized into three branches, on the principle of creating salutary checks and balances, in 1789. George Washington, who had led the Continental Army to victory, was the first president elected under the new constitution. The Bill of Rights, forbidding federal restriction of personal freedoms and guaranteeing a range of legal protections, was adopted in 1791.[98]
59
+
60
+ Although the federal government criminalized the international slave trade in 1808, after 1820, cultivation of the highly profitable cotton crop exploded in the Deep South, and along with it, the slave population.[99][100][101] The Second Great Awakening, especially 1800–1840, converted millions to evangelical Protestantism. In the North, it energized multiple social reform movements, including abolitionism;[102] in the South, Methodists and Baptists proselytized among slave populations.[103]
61
+
62
+ Americans' eagerness to expand westward prompted a long series of American Indian Wars.[104] The Louisiana Purchase of French-claimed territory in 1803 almost doubled the nation's area.[105] The War of 1812, declared against Britain over various grievances and fought to a draw, strengthened U.S. nationalism.[106] A series of military incursions into Florida led Spain to cede it and other Gulf Coast territory in 1819.[107] The expansion was aided by steam power, when steamboats began traveling along America's large water systems, many of which were connected by new canals, such as the Erie and the I&M; then, even faster railroads began their stretch across the nation's land.[108]
63
+
64
+ From 1820 to 1850, Jacksonian democracy began a set of reforms which included wider white male suffrage; it led to the rise of the Second Party System of Democrats and Whigs as the dominant parties from 1828 to 1854. The Trail of Tears in the 1830s exemplified the Indian removal policy that forcibly resettled Indians into the west on Indian reservations. The U.S. annexed the Republic of Texas in 1845 during a period of expansionist Manifest destiny.[109] The 1846 Oregon Treaty with Britain led to U.S. control of the present-day American Northwest.[110] Victory in the Mexican–American War resulted in the 1848 Mexican Cession of California and much of the present-day American Southwest.[111]
65
+ The California Gold Rush of 1848–49 spurred migration to the Pacific coast, which led to the California Genocide[112][113][114][115] and the creation of additional western states.[116] After the Civil War, new transcontinental railways made relocation easier for settlers, expanded internal trade and increased conflicts with Native Americans.[117] In 1869, a new Peace Policy nominally promised to protect Native Americans from abuses, avoid further war, and secure their eventual U.S. citizenship. Nonetheless, large-scale conflicts continued throughout the West into the 1900s.
66
+
67
+ Irreconcilable sectional conflict regarding the slavery of Africans and African Americans ultimately led to the American Civil War.[118] Initially, states entering the Union had alternated between slave and free states, keeping a sectional balance in the Senate, while free states outstripped slave states in population and in the House of Representatives. But with additional western territory and more free-soil states, tensions between slave and free states mounted with arguments over federalism and disposition of the territories, as well as whether to expand or restrict slavery.[119]
68
+
69
+ With the 1860 election of Republican Abraham Lincoln, conventions in thirteen slave states ultimately declared secession and formed the Confederate States of America (the "South" or the "Confederacy"), while the federal government (the "Union") maintained that secession was illegal.[119] The secessionists initiated military action, and the Union responded in kind. The ensuing war would become the deadliest military conflict in American history, resulting in the deaths of approximately 618,000 soldiers as well as many civilians.[120] The Union initially fought simply to keep the country united. Nevertheless, as casualties mounted after 1863 and Lincoln delivered his Emancipation Proclamation, the main purpose of the war from the Union's viewpoint became the abolition of slavery. Indeed, when the Union ultimately won the war in April 1865, each of the states in the defeated South was required to ratify the Thirteenth Amendment, which prohibited slavery.
70
+
71
+ The government enacted three constitutional amendments in the years after the war: the aforementioned Thirteenth as well as the Fourteenth Amendment providing citizenship to the nearly four million African Americans who had been slaves,[121] and the Fifteenth Amendment ensuring in theory that African Americans had the right to vote. The war and its resolution led to a substantial increase in federal power[122] aimed at reintegrating and rebuilding the South while guaranteeing the rights of the newly freed slaves.
72
+
73
+ Reconstruction began in earnest following the war. While President Lincoln attempted to foster friendship and forgiveness between the Union and the former Confederacy, his assassination on April 14, 1865, drove a wedge between North and South again. Republicans in the federal government made it their goal to oversee the rebuilding of the South and to ensure the rights of African Americans. They persisted until the Compromise of 1877, when Republicans agreed to stop protecting the rights of African Americans in the South in exchange for Democrats conceding the disputed presidential election of 1876.
74
+
75
+ Southern white Democrats, calling themselves "Redeemers," took control of the South after the end of Reconstruction. From 1890 to 1910 the Redeemers established so-called Jim Crow laws, disenfranchising most blacks and some poor whites throughout the region. Blacks faced racial segregation, especially in the South.[123] They also occasionally experienced vigilante violence, including lynching.[124]
76
+
77
+ In the North, urbanization and an unprecedented influx of immigrants from Southern and Eastern Europe supplied a surplus of labor for the country's industrialization and transformed its culture.[126] National infrastructure including telegraph and transcontinental railroads spurred economic growth and greater settlement and development of the American Old West. The later invention of electric light and the telephone would also affect communication and urban life.[127]
78
+
79
+ The United States fought Indian Wars west of the Mississippi River from 1810 to at least 1890.[128] Most of these conflicts ended with the cession of Native American territory and their confinement to Indian reservations. This further expanded acreage under mechanical cultivation, increasing surpluses for international markets.[129] Mainland expansion also included the purchase of Alaska from Russia in 1867.[130] In 1893, pro-American elements in Hawaii overthrew the monarchy and formed the Republic of Hawaii, which the U.S. annexed in 1898. Puerto Rico, Guam, and the Philippines were ceded by Spain in the same year, following the Spanish–American War.[131] American Samoa was acquired by the United States in 1900 after the end of the Second Samoan Civil War.[132] The U.S. Virgin Islands were purchased from Denmark in 1917.[133]
80
+
81
+ Rapid economic development during the late 19th and early 20th centuries fostered the rise of many prominent industrialists. Tycoons like Cornelius Vanderbilt, John D. Rockefeller, and Andrew Carnegie led the nation's progress in railroad, petroleum, and steel industries. Banking became a major part of the economy, with J. P. Morgan playing a notable role. The American economy boomed, becoming the world's largest, and the United States achieved great power status.[134] These dramatic changes were accompanied by social unrest and the rise of populist, socialist, and anarchist movements.[135] This period eventually ended with the advent of the Progressive Era, which saw significant reforms including women's suffrage, alcohol prohibition, regulation of consumer goods, greater antitrust measures to ensure competition and attention to worker conditions.[136][137][138]
82
+
83
+ The United States remained neutral from the outbreak of World War I in 1914 until 1917, when it joined the war as an "associated power," alongside the formal Allies of World War I, helping to turn the tide against the Central Powers. In 1919, President Woodrow Wilson took a leading diplomatic role at the Paris Peace Conference and advocated strongly for the U.S. to join the League of Nations. However, the Senate refused to approve this and did not ratify the Treaty of Versailles that established the League of Nations.[139]
84
+
85
+ In 1920, the women's rights movement won passage of a constitutional amendment granting women's suffrage.[140] The 1920s and 1930s saw the rise of radio for mass communication and the invention of early television.[141] The prosperity of the Roaring Twenties ended with the Wall Street Crash of 1929 and the onset of the Great Depression. After his election as president in 1932, Franklin D. Roosevelt responded with the New Deal.[142] The Great Migration of millions of African Americans out of the American South began before World War I and extended through the 1960s,[143] while the Dust Bowl of the mid-1930s impoverished many farming communities and spurred a new wave of western migration.[144]
86
+
87
+ At first effectively neutral during World War II, the United States began supplying materiel to the Allies in March 1941 through the Lend-Lease program. On December 7, 1941, the Empire of Japan launched a surprise attack on Pearl Harbor, prompting the United States to join the Allies against the Axis powers.[145] Although Japan attacked the United States first, the U.S. nonetheless pursued a "Europe first" defense policy.[146] The United States thus left its vast Asian colony, the Philippines, isolated and fighting a losing struggle against Japanese invasion and occupation, as military resources were devoted to the European theater. During the war, the United States was referred to as one of the "Four Policemen"[147] of the Allies, meeting to plan the postwar world alongside Britain, the Soviet Union, and China.[148][149] Although the nation lost around 400,000 military personnel,[150] it emerged relatively undamaged from the war with even greater economic and military influence.[151]
88
+
89
+ The United States played a leading role in the Bretton Woods and Yalta conferences with the United Kingdom, the Soviet Union, and other Allies, which signed agreements on new international financial institutions and Europe's postwar reorganization. As an Allied victory was won in Europe, a 1945 international conference held in San Francisco produced the United Nations Charter, which became active after the war.[152] In the Pacific, the United States and Japan fought each other in the largest naval battle in history, the Battle of Leyte Gulf.[153][154] The United States eventually developed the first nuclear weapons and used them on the Japanese cities of Hiroshima and Nagasaki in August 1945; Japan surrendered on September 2, ending World War II.[155][156]
90
+
91
+ After World War II, the United States and the Soviet Union competed for power, influence, and prestige during what became known as the Cold War, driven by an ideological divide between capitalism and communism.[157] They dominated the military affairs of Europe, with the U.S. and its NATO allies on one side and the USSR and its Warsaw Pact allies on the other. The U.S. developed a policy of containment towards the expansion of communist influence. While the U.S. and Soviet Union engaged in proxy wars and developed powerful nuclear arsenals, the two countries avoided direct military conflict.
92
+
93
+ The United States often opposed Third World movements that it viewed as Soviet-sponsored, and occasionally pursued direct action for regime change against left-wing governments, even supporting right-wing authoritarian governments at times.[158] American troops fought communist Chinese and North Korean forces in the Korean War of 1950–53.[159] The Soviet Union's 1957 launch of the first artificial satellite and its 1961 launch of the first manned spaceflight initiated a "Space Race" in which the United States became the first nation to land a man on the Moon in 1969.[159] A proxy war in Southeast Asia eventually escalated into full American participation in the Vietnam War.
94
+
95
+ At home, the U.S. experienced sustained economic expansion and a rapid growth of its population and middle class. Construction of an Interstate Highway System transformed the nation's infrastructure over the following decades. Millions moved from farms and inner cities to large suburban housing developments.[160][161] In 1959 Hawaii became the 50th and last U.S. state added to the country.[162] The growing Civil Rights Movement used nonviolence to confront segregation and discrimination, with Martin Luther King Jr. becoming a prominent leader and figurehead. A combination of court decisions and legislation, culminating in the Civil Rights Act of 1968, sought to end racial discrimination.[163][164][165] Meanwhile, a counterculture movement grew, fueled by opposition to the Vietnam War, black nationalism, and the sexual revolution.
96
+
97
+ The launch of a "War on Poverty" expanded entitlements and welfare spending, including the creation of Medicare and Medicaid, two programs that provide health coverage to the elderly and poor, respectively, and the means-tested Food Stamp Program and Aid to Families with Dependent Children.[166]
98
+
99
+ The 1970s and early 1980s saw the onset of stagflation. After his election in 1980, President Ronald Reagan responded to economic stagnation with free-market oriented reforms. Following the collapse of détente, he abandoned "containment" and initiated the more aggressive "rollback" strategy towards the USSR.[167][168][169][170][171] After a surge in female labor participation over the previous decade, by 1985 the majority of women aged 16 and over were employed.[172]
100
+
101
+ The late 1980s brought a "thaw" in relations with the USSR, and its collapse in 1991 finally ended the Cold War.[173][174][175][176] This brought about unipolarity,[177] with the U.S. unchallenged as the world's dominant superpower. The concept of Pax Americana, which had appeared in the post-World War II period, gained wide popularity as a term for the post-Cold War new world order.
102
+
103
+ After the Cold War, conflict in the Middle East triggered a crisis in 1990, when Iraq under Saddam Hussein invaded and attempted to annex Kuwait, an ally of the United States. Fearing that the instability would spread to other regions, President George H. W. Bush launched Operation Desert Shield, a defensive force buildup in Saudi Arabia, and Operation Desert Storm, the offensive phase of what became known as the Gulf War. Waged by coalition forces from 34 nations led by the United States, the war ended with the expulsion of Iraqi forces from Kuwait and the restoration of its monarchy.[178]
104
+
105
+ Originating within U.S. military defense networks, the Internet spread to international academic platforms and then to the public in the 1990s, greatly affecting the global economy, society, and culture.[179] Due to the dot-com boom, stable monetary policy, and reduced social welfare spending, the 1990s saw the longest economic expansion in modern U.S. history.[180] Beginning in 1994, the U.S. entered into the North American Free Trade Agreement (NAFTA), prompting trade among the U.S., Canada, and Mexico to soar.[181]
106
+
107
+ On September 11, 2001, Al-Qaeda terrorists struck the World Trade Center in New York City and the Pentagon near Washington, D.C., killing nearly 3,000 people.[182] In response, the United States launched the War on Terror, which included a war in Afghanistan and the 2003–11 Iraq War.[183][184]
108
+
109
+ Government policy designed to promote affordable housing,[185] widespread failures in corporate and regulatory governance,[186] and historically low interest rates set by the Federal Reserve[187] led to the mid-2000s housing bubble, which culminated in the 2008 financial crisis, the nation's largest economic contraction since the Great Depression.[188] Barack Obama, the first African-American[189] and multiracial[190] president, was elected in 2008 amid the crisis,[191] and subsequently signed stimulus measures and the Dodd–Frank Act in an attempt to mitigate its negative effects and prevent a recurrence of the crisis. In 2010, the Obama administration passed the Affordable Care Act, which made the most sweeping reforms to the nation's healthcare system in nearly five decades, including mandates, subsidies, and insurance exchanges.
110
+
111
+ American forces in Iraq were withdrawn in large numbers in 2009 and 2010, and the war in the region was declared formally over in December 2011.[192] Months earlier, Operation Neptune Spear had led to the death of the leader of Al-Qaeda in Pakistan.[193] In the presidential election of 2016, Republican Donald Trump was elected as the 45th president of the United States. On January 20, 2020, the first case of COVID-19 in the United States was confirmed.[194] As of July 2020, the United States had over 4 million COVID-19 cases and over 145,000 deaths,[195] and it has recorded more cases, by far, than any other country since April 11, 2020.[196]
112
+
113
+ The 48 contiguous states and the District of Columbia occupy a combined area of 3,119,884.69 square miles (8,080,464.3 km2). Of this area, 2,959,064.44 square miles (7,663,941.7 km2) is contiguous land, composing 83.65% of total U.S. land area.[197][198] Hawaii, occupying an archipelago in the central Pacific, southwest of North America, is 10,931 square miles (28,311 km2) in area. The populated territories of Puerto Rico, American Samoa, Guam, Northern Mariana Islands, and U.S. Virgin Islands together cover 9,185 square miles (23,789 km2).[199] Measured by only land area, the United States is third in size behind Russia and China, just ahead of Canada.[200]
114
+
115
+ The United States is the world's third- or fourth-largest nation by total area (land and water), ranking behind Russia and Canada and nearly equal to China. The ranking varies depending on how two territories disputed by China and India are counted, and how the total size of the United States is measured.[e][201][202]
116
+
117
+ The coastal plain of the Atlantic seaboard gives way further inland to deciduous forests and the rolling hills of the Piedmont.[203] The Appalachian Mountains divide the eastern seaboard from the Great Lakes and the grasslands of the Midwest.[204] The Mississippi–Missouri River, the world's fourth longest river system, runs mainly north–south through the heart of the country. The flat, fertile prairie of the Great Plains stretches to the west, interrupted by a highland region in the southeast.[204]
118
+
119
+ The Rocky Mountains, west of the Great Plains, extend north to south across the country, peaking around 14,000 feet (4,300 m) in Colorado.[205] Farther west are the rocky Great Basin and deserts such as the Chihuahuan and Mojave.[206] The Sierra Nevada and Cascade mountain ranges run close to the Pacific coast, both ranges reaching altitudes higher than 14,000 feet (4,300 m). The lowest and highest points in the contiguous United States are in the state of California,[207] and only about 84 miles (135 km) apart.[208] At an elevation of 20,310 feet (6,190.5 m), Alaska's Denali is the highest peak in the country and in North America.[209] Active volcanoes are common throughout Alaska's Alexander and Aleutian Islands, and Hawaii consists of volcanic islands. The supervolcano underlying Yellowstone National Park in the Rockies is the continent's largest volcanic feature.[210]
120
+
121
+ The United States, with its large size and geographic variety, includes most climate types. To the east of the 100th meridian, the climate ranges from humid continental in the north to humid subtropical in the south.[211] The Great Plains west of the 100th meridian are semi-arid. Much of the Western mountains have an alpine climate. The climate is arid in the Great Basin, desert in the Southwest, Mediterranean in coastal California, and oceanic in coastal Oregon and Washington and southern Alaska. Most of Alaska is subarctic or polar. Hawaii and the southern tip of Florida are tropical, as are the country's territories in the Caribbean and the Pacific.[212] States bordering the Gulf of Mexico are prone to hurricanes, and most of the world's tornadoes occur in the country, mainly in Tornado Alley areas in the Midwest and South.[213] Overall, the United States has the world's most violent weather, receiving more high-impact extreme weather incidents than any other country.[214]
122
+
123
+ The U.S. ecology is megadiverse: about 17,000 species of vascular plants occur in the contiguous United States and Alaska, and more than 1,800 species of flowering plants are found in Hawaii, few of which occur on the mainland.[216] The United States is home to 428 mammal species, 784 bird species, 311 reptile species, and 295 amphibian species,[217] as well as about 91,000 insect species.[218]
124
+
125
+ There are 62 national parks and hundreds of other federally managed parks, forests, and wilderness areas.[219] Altogether, the government owns about 28% of the country's land area,[220] mostly in the western states.[221] Most of this land is protected, though some is leased for oil and gas drilling, mining, logging, or cattle ranching, and about 0.86% is used for military purposes.[222][223]
126
+
127
+ Environmental issues include debates on oil and nuclear energy, dealing with air and water pollution, the economic costs of protecting wildlife, logging and deforestation,[224][225] and international responses to global warming.[226][227] The most prominent environmental agency is the Environmental Protection Agency (EPA), created by presidential order in 1970.[228] The idea of wilderness has shaped the management of public lands since 1964, with the Wilderness Act.[229] The Endangered Species Act of 1973 is intended to protect threatened and endangered species and their habitats, which are monitored by the United States Fish and Wildlife Service.[230]
128
+
129
+ The U.S. Census Bureau officially estimated the country's population to be 328,239,523 as of July 1, 2019.[231] In addition, the Census Bureau provides a continuously updated U.S. Population Clock that approximates the latest population of the 50 states and District of Columbia based on the Bureau's most recent demographic trends.[234] According to the clock, on May 23, 2020, the U.S. population exceeded 329 million residents, with a net gain of one person every 19 seconds, or about 4,547 people per day. The United States is the third most populous nation in the world, after China and India. In 2018 the median age of the United States population was 38.1 years.[235]
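+
+ The clock's per-day figure follows directly from the per-person interval: one net new resident every 19 seconds over the 86,400 seconds in a day. A minimal sanity check in Python (the function name is illustrative, not part of any Census Bureau API):
+
+ def net_gain_per_day(seconds_per_person: float) -> float:
+     # 86,400 seconds in a day, divided by the seconds between net additions.
+     return 24 * 60 * 60 / seconds_per_person
+
+ print(round(net_gain_per_day(19)))  # -> 4547, matching the "about 4,547 people per day" figure
+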
130
+
131
+ In 2018, there were almost 90 million immigrants and U.S.-born children of immigrants (second-generation Americans) in the United States, accounting for 28% of the overall U.S. population.[236] The United States has a very diverse population; 37 ancestry groups have more than one million members.[237] German Americans are the largest ethnic group (more than 50 million)—followed by Irish Americans (circa 37 million), Mexican Americans (circa 31 million) and English Americans (circa 28 million).[238][239]
132
+
133
+ White Americans (mostly European ancestry) are the largest racial group at 73.1% of the population; African Americans are the nation's largest racial minority and third-largest ancestry group.[237] Asian Americans are the country's second-largest racial minority; the three largest Asian American ethnic groups are Chinese Americans, Filipino Americans, and Indian Americans.[237] The largest American community with European ancestry is German Americans, who constitute more than 14% of the total population.[240] In 2010, the U.S. population included an estimated 5.2 million people with some American Indian or Alaska Native ancestry (2.9 million exclusively of such ancestry) and 1.2 million with some native Hawaiian or Pacific island ancestry (0.5 million exclusively).[241] The census counted more than 19 million people of "Some Other Race" who were "unable to identify with any" of its five official race categories in 2010, more than 18.5 million (97%) of whom are of Hispanic ethnicity.[241]
134
+
135
+ In 2017, out of the U.S. foreign-born population, some 45% (20.7 million) were naturalized citizens, 27% (12.3 million) were lawful permanent residents (including many eligible to become citizens), 6% (2.2 million) were temporary lawful residents, and 23% (10.5 million) were unauthorized immigrants.[242] Among current living immigrants to the U.S., the top five countries of birth are Mexico, China, India, the Philippines, and El Salvador. For decades until 2017–2018, the United States led the world in refugee resettlement, admitting more refugees than the rest of the world combined.[243] From fiscal year 1980 until 2017, 55% of refugees came from Asia, 27% from Europe, 13% from Africa, and 4% from Latin America.[243]
136
+
137
+ A 2017 United Nations report projected that the U.S. would be one of nine countries in which world population growth through 2050 would be concentrated.[244] A 2020 U.S. Census Bureau report projected the population of the country could be anywhere between 320 million and 447 million by 2060, depending on the rate of in-migration; in all projected scenarios, a lower fertility rate and increases in life expectancy would result in an aging population.[245] The United States has an annual birth rate of 13 per 1,000, which is five births per 1,000 below the world average.[246] Its population growth rate is positive at 0.7%, higher than that of many developed nations.[247]
138
+
139
+ About 82% of Americans live in urban areas (including suburbs);[202] about half of those reside in cities with populations over 50,000.[248] In 2008, 273 incorporated municipalities had populations over 100,000, nine cities had more than one million residents, and four cities had over two million (namely New York, Los Angeles, Chicago, and Houston).[249] Estimates for the year 2018 show that 53 metropolitan areas have populations greater than one million. Many metros in the South, Southwest and West grew significantly between 2010 and 2018. The Dallas and Houston metros increased by more than a million people, while the Washington, D.C., Miami, Atlanta, and Phoenix metros all grew by more than 500,000 people.
140
+
141
+ English (specifically, American English) is the de facto national language of the United States. Although there is no official language at the federal level, some laws—such as U.S. naturalization requirements—standardize English. In 2010, about 230 million people, or 80% of the population aged five years and older, spoke only English at home. Spanish, spoken at home by 12% of the population, is the second most common language and the most widely taught second language.[250][251]
142
+
143
+ Both Hawaiian and English are official languages in Hawaii.[252] In addition to English, Alaska recognizes twenty official Native languages,[253][k] and South Dakota recognizes Sioux.[254] While neither has an official language, New Mexico has laws providing for the use of both English and Spanish, as Louisiana does for English and French.[255] Other states, such as California, mandate the publication of Spanish versions of certain government documents including court forms.[256]
144
+
145
+ Several insular territories grant official recognition to their native languages, along with English: Samoan[257] is officially recognized by American Samoa and Chamorro[258] is an official language of Guam. Both Carolinian and Chamorro have official recognition in the Northern Mariana Islands.[259]
146
+ Spanish is an official language of Puerto Rico and is more widely spoken than English there.[260]
147
+
148
+ The most widely taught foreign languages in the United States, in terms of enrollment numbers from kindergarten through university undergraduate education, are Spanish (around 7.2 million students), French (1.5 million), and German (500,000). Other commonly taught languages include Latin, Japanese, American Sign Language (ASL), Italian, and Chinese.[261][262] About 18% of all Americans claim to speak both English and another language.[263]
149
+
150
+ Religion in the United States (2017)[266]
151
+
152
+ The First Amendment of the U.S. Constitution guarantees the free exercise of religion and forbids Congress from passing laws respecting its establishment.
153
+
154
+ In a 2013 survey, 56% of Americans said religion played a "very important role in their lives," a far higher figure than that of any other Western nation.[267] In a 2009 Gallup poll, 42% of Americans said they attended church weekly or almost weekly; the figures ranged from a low of 23% in Vermont to a high of 63% in Mississippi.[268]
155
+
156
+ In a 2014 survey, 70.6% of adults in the United States identified themselves as Christians;[269] Protestants accounted for 46.5%, while Roman Catholics, at 20.8%, formed the largest single Christian group.[270] In 2014, 5.9% of the U.S. adult population claimed a non-Christian religion.[271] These include Judaism (1.9%), Islam (0.9%), Hinduism (0.7%), and Buddhism (0.7%).[271] The survey also reported that 22.8% of Americans described themselves as agnostic, atheist or simply having no religion—up from 8.2% in 1990.[270][272][273] There are also Unitarian Universalist, Scientologist, Baha'i, Sikh, Jain, Shinto, Zoroastrian, Confucian, Satanist, Taoist, Druid, Native American, Afro-American, traditional African, Wiccan, Gnostic, humanist and deist communities.[274][275]
157
+
158
+ Protestantism is the largest Christian religious grouping in the United States, accounting for almost half of all Americans. Baptists collectively form the largest branch of Protestantism at 15.4%,[276] and the Southern Baptist Convention is the largest individual Protestant denomination at 5.3% of the U.S. population.[276] Apart from Baptists, other Protestant categories include nondenominational Protestants, Methodists, Pentecostals, unspecified Protestants, Lutherans, Presbyterians, Congregationalists, other Reformed, Episcopalians/Anglicans, Quakers, Adventists, Holiness, Christian fundamentalists, Anabaptists, Pietists, and multiple others.[276]
159
+
160
+ As with other Western countries, the U.S. is becoming less religious. Irreligion is growing rapidly among Americans under 30.[277] Polls show that overall American confidence in organized religion has been declining since the mid-to-late 1980s,[278] and that younger Americans, in particular, are becoming increasingly irreligious.[271][279] In a 2012 study, the Protestant share of the U.S. population had dropped to 48%, ending its majority status for the first time.[280][281] Americans with no religion have 1.7 children on average, compared to 2.2 among Christians. The unaffiliated are also less likely to marry: 37% do, compared with 52% of Christians.[282]
161
+
162
+ The Bible Belt is an informal term for a region in the Southern United States in which socially conservative evangelical Protestantism is a significant part of the culture and Christian church attendance across the denominations is generally higher than the nation's average. By contrast, religion plays the least important role in New England and in the Western United States.[268]
163
+
164
+ As of 2018, 52% of Americans age 15 and over were married, 6% were widowed, 10% were divorced, and 32% had never been married.[283] Women now work mostly outside the home and receive the majority of bachelor's degrees.[284]
165
+
166
+ The U.S. teenage pregnancy rate is 26.5 per 1,000 women. The rate has declined by 57% since 1991.[285] Abortion is legal throughout the country. Abortion rates, currently 241 per 1,000 live births and 15 per 1,000 women aged 15–44, are falling but remain higher than in most Western nations.[286] In 2013, the average age at first birth was 26, and 41% of births were to unmarried women.[287]
167
+
168
+ The total fertility rate in 2016 was 1,820.5 births per 1,000 women, or about 1.82 births per woman.[288] Adoption in the United States is common and relatively easy from a legal point of view (compared to other Western countries).[289] As of 2001, with more than 127,000 adoptions, the U.S. accounted for nearly half of the total number of adoptions worldwide.[290] Same-sex marriage is legal nationwide, and it is legal for same-sex couples to adopt. Polygamy is illegal throughout the U.S.[291]
169
+
170
+ In 2019, the U.S. had the world's highest rate of children living in single-parent households.[292]
171
+
172
+ The United States had a life expectancy of 78.6 years at birth in 2017, which was the third year of declines in life expectancy following decades of continuous increase. The recent decline, primarily among the age group 25 to 64, is largely due to sharp increases in the drug overdose and suicide rates; the country has one of the highest suicide rates among wealthy countries.[293][294] Life expectancy was highest among Asians and Hispanics and lowest among blacks.[295][296] According to CDC and Census Bureau data, deaths from suicide, alcohol and drug overdoses hit record highs in 2017.[297]
173
+
174
+ Increasing obesity in the United States and health improvements elsewhere contributed to lowering the country's rank in life expectancy from 11th in the world in 1987 to 42nd in 2007, and as of 2017 the country had the lowest life expectancy in a comparison group comprising Japan, Canada, Australia, the UK, and seven western European countries.[298][299] Obesity rates have more than doubled in the last 30 years and are the highest in the industrialized world.[300][301] Approximately one-third of the adult population is obese and an additional third is overweight.[302] Obesity-related type 2 diabetes is considered epidemic by health care professionals.[303]
175
+
176
+ In 2010, coronary artery disease, lung cancer, stroke, chronic obstructive pulmonary diseases, and traffic accidents caused the most years of life lost in the U.S. Low back pain, depression, musculoskeletal disorders, neck pain, and anxiety caused the most years lost to disability. The most harmful risk factors were poor diet, tobacco smoking, obesity, high blood pressure, high blood sugar, physical inactivity, and alcohol use. Alzheimer's disease, drug abuse, kidney disease, cancer, and falls caused the most additional years of life lost over their age-adjusted 1990 per-capita rates.[304] U.S. teenage pregnancy and abortion rates are substantially higher than in other Western nations, especially among blacks and Hispanics.[305]
177
+
178
+ Health-care coverage in the United States is a combination of public and private efforts and is not universal. In 2017, 12.2% of the population did not carry health insurance.[306] The subject of uninsured and underinsured Americans is a major political issue.[307][308] Federal legislation, passed in early 2010, roughly halved the uninsured share of the population, though the bill and its ultimate effect are issues of controversy.[309][310] The U.S. health-care system far outspends any other nation, measured both in per capita spending and as percentage of GDP.[311] At the same time, the U.S. is a global leader in medical innovation.[312]
179
+
180
+ American public education is operated by state and local governments, regulated by the United States Department of Education through restrictions on federal grants. In most states, children are required to attend school from the age of six or seven (generally, kindergarten or first grade) until they turn 18 (generally bringing them through twelfth grade, the end of high school); some states allow students to leave school at 16 or 17.[313]
181
+
182
+ About 12% of children are enrolled in parochial or nonsectarian private schools. Just over 2% of children are homeschooled.[314] The U.S. spends more on education per student than any nation in the world, spending more than $11,000 per elementary student in 2010 and more than $12,000 per high school student.[315] Some 80% of U.S. college students attend public universities.[316]
183
+
184
+ Of Americans 25 and older, 84.6% graduated from high school, 52.6% attended some college, 27.2% earned a bachelor's degree, and 9.6% earned graduate degrees.[317] The basic literacy rate is approximately 99%.[202][318] The United Nations assigns the United States an Education Index of 0.97, tying it for 12th in the world.[319]
185
+
186
+ The United States has many private and public institutions of higher education. The majority of the world's top universities, as listed by various ranking organizations, are in the U.S.[320][321][322] There are also local community colleges with generally more open admission policies, shorter academic programs, and lower tuition.
187
+
188
+ In 2018, U21, a network of research-intensive universities, ranked the United States first in the world for breadth and quality of higher education, and 15th when GDP was a factor.[323] As for public expenditures on higher education, the U.S. trails some other OECD nations but spends more per student than the OECD average, and more than all nations in combined public and private spending.[315][324] As of 2018, student loan debt exceeded $1.5 trillion.[325][326]
189
+
190
+ The United States is a federal republic of 50 states, a federal district, five territories, and several uninhabited island possessions.[327][328][329] It is the world's oldest surviving federation, and a representative democracy "in which majority rule is tempered by minority rights protected by law."[330] For 2018, the U.S. ranked 25th on the Democracy Index.[331] On Transparency International's 2019 Corruption Perceptions Index, its public-sector score deteriorated from 76 in 2015 to 69 in 2019.[332]
191
+
192
+ In the American federalist system, citizens are usually subject to three levels of government: federal, state, and local. The local government's duties are commonly split between county and municipal governments. In almost all cases, executive and legislative officials are elected by a plurality vote of citizens by district.
193
+
194
+ The government is regulated by a system of checks and balances defined by the U.S. Constitution, which serves as the country's supreme legal document.[333] The original text of the Constitution establishes the structure and responsibilities of the federal government and its relationship with the individual states. Article One protects the right to the "great writ" of habeas corpus. The Constitution has been amended 27 times;[334] the first ten amendments, which make up the Bill of Rights, and the Fourteenth Amendment form the central basis of Americans' individual rights. All laws and governmental procedures are subject to judicial review and any law ruled by the courts to be in violation of the Constitution is voided. The principle of judicial review, not explicitly mentioned in the Constitution, was established by the Supreme Court in Marbury v. Madison (1803)[335] in a decision handed down by Chief Justice John Marshall.[336]
195
+
196
+ The federal government comprises three branches: the legislative (the bicameral Congress, made up of the Senate and the House of Representatives), the executive (headed by the president), and the judicial (the Supreme Court and lower federal courts).
197
+
198
+ The House of Representatives has 435 voting members, each representing a congressional district for a two-year term. House seats are apportioned among the states by population. Each state then draws single-member districts to conform with the census apportionment. The District of Columbia and the five major U.S. territories each have one member of Congress—these members are not allowed to vote.[341]
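+
+ The text does not name the apportionment algorithm, but the method in use since 1941 is the Huntington–Hill "method of equal proportions": every state is first guaranteed one seat, and each remaining seat goes to the state with the highest priority value P = population / sqrt(n(n+1)), where n is the state's current seat count. A minimal sketch in Python, using made-up populations for three hypothetical states:
+
+ import math
+ import heapq
+
+ def apportion(populations: dict, total_seats: int) -> dict:
+     # Huntington-Hill: every state starts with one guaranteed seat.
+     seats = {state: 1 for state in populations}
+     # Max-heap of priority values P = pop / sqrt(n*(n+1)), negated for heapq's min-heap.
+     heap = [(-pop / math.sqrt(1 * 2), state) for state, pop in populations.items()]
+     heapq.heapify(heap)
+     for _ in range(total_seats - len(populations)):
+         _, state = heapq.heappop(heap)   # state with the highest current priority
+         seats[state] += 1
+         n = seats[state]
+         heapq.heappush(heap, (-populations[state] / math.sqrt(n * (n + 1)), state))
+     return seats
+
+ # Hypothetical example: three states dividing 10 seats in rough proportion to population.
+ print(apportion({"A": 6_000_000, "B": 3_000_000, "C": 1_000_000}, 10))  # -> {'A': 6, 'B': 3, 'C': 1}
+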
+
+ The Senate has 100 members with each state having two senators, elected at-large to six-year terms; one-third of Senate seats are up for election every two years. The District of Columbia and the five major U.S. territories do not have senators.[341] The president serves a four-year term and may be elected to the office no more than twice. The president is not elected by direct vote, but by an indirect electoral college system in which the determining votes are apportioned to the states and the District of Columbia.[342] The Supreme Court, led by the chief justice of the United States, has nine members, who serve for life.[343]
+
+ The state governments are structured in a roughly similar fashion, though Nebraska has a unicameral legislature.[344] The governor (chief executive) of each state is directly elected. Some state judges and cabinet officers are appointed by the governors of the respective states, while others are elected by popular vote.
+
+ The 50 states are the principal administrative divisions in the country. These are subdivided into counties or county equivalents and further divided into municipalities. The District of Columbia is a federal district that contains the capital of the United States, Washington, D.C.[345] The states and the District of Columbia choose the president of the United States. Each state has presidential electors equal to the number of its representatives and senators in Congress; the District of Columbia has three (because of the 23rd Amendment).[346] Territories of the United States such as Puerto Rico do not have presidential electors, and so people in those territories cannot vote for the president.[341]
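+
+ A toy sketch of that elector arithmetic (Python; the two state entries are illustrative examples, not a full dataset):
+
+   # Each state's electors = its House seats + its 2 Senate seats;
+   # D.C. gets 3 under the 23rd Amendment. Across all 50 states:
+   # 435 House seats + 100 Senate seats + 3 for D.C. = 538 electors.
+   house_seats = {"California": 53, "Wyoming": 1}  # illustrative subset
+   electors = {state: seats + 2 for state, seats in house_seats.items()}
+   electors["District of Columbia"] = 3
+   print(electors)  # {'California': 55, 'Wyoming': 3, 'District of Columbia': 3}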
+
+ The United States also observes tribal sovereignty of the American Indian nations to a limited degree, as it does with the states' sovereignty. American Indians are U.S. citizens and tribal lands are subject to the jurisdiction of the U.S. Congress and the federal courts. Like the states they have a great deal of autonomy, but also like the states, tribes are not allowed to make war, engage in their own foreign relations, or print and issue currency.[347]
+
+ Citizenship is granted at birth in all states, the District of Columbia, and all major U.S. territories except American Samoa.[348][349][m]
+
+ The United States has operated under a two-party system for most of its history.[352] For elective offices at most levels, state-administered primary elections choose the major party nominees for subsequent general elections. Since the general election of 1856, the major parties have been the Democratic Party, founded in 1824, and the Republican Party, founded in 1854. Since the Civil War, only one third-party presidential candidate—former president Theodore Roosevelt, running as a Progressive in 1912—has won as much as 20% of the popular vote. The president and vice president are elected by the Electoral College.[353]
+
+ In American political culture, the center-right Republican Party is considered "conservative" and the center-left Democratic Party is considered "liberal."[354][355] The states of the Northeast and West Coast and some of the Great Lakes states, known as "blue states," are relatively liberal. The "red states" of the South and parts of the Great Plains and Rocky Mountains are relatively conservative.
+
+ Republican Donald Trump, the winner of the 2016 presidential election, is serving as the 45th president of the United States.[356] Leadership in the Senate includes Republican vice president Mike Pence, Republican president pro tempore Chuck Grassley, Majority Leader Mitch McConnell, and Minority Leader Chuck Schumer.[357] Leadership in the House includes Speaker of the House Nancy Pelosi, Majority Leader Steny Hoyer, and Minority Leader Kevin McCarthy.[358]
+
+ In the 116th United States Congress, the House of Representatives is controlled by the Democratic Party and the Senate is controlled by the Republican Party, giving the U.S. a split Congress. The Senate consists of 53 Republicans and 45 Democrats with two Independents who caucus with the Democrats; the House consists of 233 Democrats, 196 Republicans, and 1 Libertarian.[359] Of state governors, there are 26 Republicans and 24 Democrats. Among the D.C. mayor and the five territorial governors, there are two Republicans, one Democrat, one New Progressive, and two Independents.[360]
+
+ The United States has an established structure of foreign relations. It is a permanent member of the United Nations Security Council. New York City is home to the United Nations Headquarters. Almost all countries have embassies in Washington, D.C., and many have consulates around the country. Likewise, nearly all nations host American diplomatic missions. However, Iran, North Korea, Bhutan, and the Republic of China (Taiwan) do not have formal diplomatic relations with the United States (although the U.S. still maintains unofficial relations with Bhutan and Taiwan).[361] It is a member of the G7,[362] G20, and OECD.
+
+ The United States has a "Special Relationship" with the United Kingdom[363] and strong ties with India, Canada,[364] Australia,[365] New Zealand,[366] the Philippines,[367] Japan,[368] South Korea,[369] Israel,[370] and several European Union countries, including France, Italy, Germany, Spain and Poland.[371] It works closely with fellow NATO members on military and security issues and with its neighbors through the Organization of American States and free trade agreements such as the trilateral North American Free Trade Agreement with Canada and Mexico. Colombia is traditionally considered by the United States as its most loyal ally in South America.[372][373]
+
+ The U.S. exercises full international defense authority and responsibility for Micronesia, the Marshall Islands and Palau through the Compact of Free Association.[374]
+
+ Taxation in the United States is levied at the federal, state, and local government levels. This includes taxes on income, payroll, property, sales, imports, estates, and gifts, as well as various fees. Taxation in the United States is based on citizenship, not residency.[375] Both non-resident citizens and Green Card holders living abroad are taxed on their income irrespective of where they live or where their income is earned. The United States is one of the few countries in the world to do so.[376]
+
+ In 2010 taxes collected by federal, state and municipal governments amounted to 24.8% of GDP.[377] Based on CBO estimates,[378] under 2013 tax law the top 1% will be paying the highest average tax rates since 1979, while other income groups will remain at historic lows.[379] For 2018, the effective tax rate for the wealthiest 400 households was 23%, compared to 24.2% for the bottom half of U.S. households.[380]
+
+ During fiscal year 2012, the federal government spent $3.54 trillion on a budget or cash basis, down $60 billion or 1.7% vs. fiscal year 2011 spending of $3.60 trillion. Major categories of fiscal year 2012 spending included: Medicare & Medicaid (23%), Social Security (22%), Defense Department (19%), non-defense discretionary (17%), other mandatory (13%) and interest (6%).[382]
+
+ The total national debt of the United States was $18.527 trillion (106% of GDP) in 2014.[383][n] The United States has the largest external debt in the world[387] and the 34th largest government debt as a percentage of GDP in the world.[388]
+
+ The president is the commander-in-chief of the country's armed forces and appoints its leaders, the Secretary of Defense and the Joint Chiefs of Staff. The United States Department of Defense administers the armed forces, including the Army, Marine Corps, Navy, Air Force, and Space Force. The Coast Guard is run by the Department of Homeland Security in peacetime and by the Department of the Navy during times of war. In 2008, the armed forces had 1.4 million personnel on active duty. The Reserves and National Guard brought the total number of troops to 2.3 million. The Department of Defense also employed about 700,000 civilians, not including contractors.[389]
+
+ Military service is voluntary, though conscription may occur in wartime through the Selective Service System.[390] American forces can be rapidly deployed by the Air Force's large fleet of transport aircraft, the Navy's 11 active aircraft carriers, and Marine expeditionary units at sea with the Navy's Atlantic and Pacific fleets. The military operates 865 bases and facilities abroad,[391] and maintains deployments greater than 100 active duty personnel in 25 foreign countries.[392]
+
+ The military budget of the United States in 2011 was more than $700 billion, 41% of global military spending. At 4.7% of GDP, the rate was the second-highest among the top 15 military spenders, after Saudi Arabia.[393] Defense spending plays a major role in science and technology investment, with roughly half of U.S. federal research and development funded by the Department of Defense.[394] Defense's share of the overall U.S. economy has generally declined in recent decades, from Cold War peaks of 14.2% of GDP in 1953 and 69.5% of federal outlays in 1954 to 4.7% of GDP and 18.8% of federal outlays in 2011.[395]
+
+ The country is one of the five recognized nuclear weapons states and possesses the second largest stockpile of nuclear weapons in the world.[396] More than 90% of the world's 14,000 nuclear weapons are owned by Russia and the United States.[397]
+
+ Law enforcement in the United States is primarily the responsibility of local police departments and sheriff's offices, with state police providing broader services. Federal agencies such as the Federal Bureau of Investigation (FBI) and the U.S. Marshals Service have specialized duties, including protecting civil rights, national security and enforcing U.S. federal courts' rulings and federal laws.[398] State courts conduct most criminal trials while federal courts handle certain designated crimes as well as certain appeals from the state criminal courts.
+
+ A cross-sectional analysis of the World Health Organization Mortality Database from 2010 showed that United States "homicide rates were 7.0 times higher than in other high-income countries, driven by a gun homicide rate that was 25.2 times higher."[399] In 2016, the US murder rate was 5.4 per 100,000.[400] Gun ownership rights, guaranteed by the Second Amendment, continue to be the subject of contention.
+
+ The United States has the highest documented incarceration rate and largest prison population in the world.[401] As of 2020, the Prison Policy Initiative reported that there were some 2.3 million people incarcerated.[402] The imprisonment rate for all prisoners sentenced to more than a year in state or federal facilities was 478 per 100,000 in 2013.[403] According to the Federal Bureau of Prisons, the majority of inmates held in federal prisons are convicted of drug offenses.[404] About 9% of prisoners are held in privatized prisons.[402] The practice of privately operated prisons began in the 1980s and has been a subject of contention.[405]
+
+ Capital punishment is sanctioned in the United States for certain federal and military crimes, and at the state level in 30 states.[406][407] No executions took place from 1967 to 1977, owing in part to a U.S. Supreme Court ruling striking down arbitrary imposition of the death penalty. Since the decision there have been more than 1,300 executions, a majority of these taking place in three states: Texas, Virginia, and Oklahoma.[408] Meanwhile, several states have either abolished or struck down death penalty laws. In 2019, the country had the sixth-highest number of executions in the world, following China, Iran, Saudi Arabia, Iraq, and Egypt.[409]
+
+ According to the International Monetary Fund, the U.S. GDP of $16.8 trillion constitutes 24% of the gross world product at market exchange rates and over 19% of the gross world product at purchasing power parity (PPP).[417] The United States is the largest importer of goods and second-largest exporter, though exports per capita are relatively low. In 2010, the total U.S. trade deficit was $635 billion.[418] Canada, China, Mexico, Japan, and Germany are its top trading partners.[419]
+
+ From 1983 to 2008, U.S. real compounded annual GDP growth was 3.3%, compared to a 2.3% weighted average for the rest of the G7.[420] The country ranks ninth in the world in nominal GDP per capita[421] and sixth in GDP per capita at PPP.[417] The U.S. dollar is the world's primary reserve currency.[422]
+
+ In 2009, the private sector was estimated to constitute 86.4% of the economy.[425] While its economy has reached a postindustrial level of development, the United States remains an industrial power.[426] Consumer spending comprised 68% of the U.S. economy in 2015.[427] In August 2010, the American labor force consisted of 154.1 million people, about 50% of the total population. With 21.2 million people, government is the leading field of employment. The largest private employment sector is health care and social assistance, with 16.4 million people. The United States has a smaller welfare state and redistributes less income through government action than most European nations.[428]
+
+ The United States is the only advanced economy that does not guarantee its workers paid vacation[429] and is one of a few countries in the world without paid family leave as a legal right.[430] While federal law does not require sick leave, it is a common benefit for government workers and full-time employees at corporations.[431] 74% of full-time American workers get paid sick leave, according to the Bureau of Labor Statistics, although only 24% of part-time workers get the same benefits.[431] In 2009, the United States had the third-highest workforce productivity per person in the world, behind Luxembourg and Norway. It was fourth in productivity per hour, behind those two countries and the Netherlands.[432]
+
+ The United States has been a leader in technological innovation since the late 19th century and in scientific research since the mid-20th century. Methods for producing interchangeable parts were developed by the U.S. War Department at the Federal Armories during the first half of the 19th century. This technology, along with the establishment of a machine tool industry, enabled the U.S. to have large-scale manufacturing of sewing machines, bicycles, and other items in the late 19th century, and became known as the American system of manufacturing. Factory electrification in the early 20th century and the introduction of the assembly line and other labor-saving techniques created the system of mass production.[433] In the 21st century, approximately two-thirds of research and development funding comes from the private sector.[434] The United States leads the world in scientific research papers and impact factor.[435][436]
+
+ In 1876, Alexander Graham Bell was awarded the first U.S. patent for the telephone. Thomas Edison's research laboratory, one of the first of its kind, developed the phonograph, the first long-lasting light bulb, and the first viable movie camera.[437] The latter led to the emergence of the worldwide entertainment industry. In the early 20th century, the automobile companies of Ransom E. Olds and Henry Ford popularized the assembly line. The Wright brothers, in 1903, made the first sustained and controlled heavier-than-air powered flight.[438]
+
+ The rise of fascism and Nazism in the 1920s and 30s led many European scientists, including Albert Einstein, Enrico Fermi, and John von Neumann, to immigrate to the United States.[439] During World War II, the Manhattan Project developed nuclear weapons, ushering in the Atomic Age, while the Space Race produced rapid advances in rocketry, materials science, and aeronautics.[440][441]
+
+ The invention of the transistor in the 1950s, a key active component in practically all modern electronics, led to many technological developments and a significant expansion of the U.S. technology industry.[442] This, in turn, led to the establishment of many new technology companies and regions around the country, such as Silicon Valley in California. Advancements by American microprocessor companies such as Advanced Micro Devices (AMD) and Intel, along with computer software and hardware companies including Adobe Systems, Apple Inc., IBM, Microsoft, and Sun Microsystems, created and popularized the personal computer. The ARPANET was developed in the 1960s to meet Defense Department requirements, and became the first of a series of networks which evolved into the Internet.[443]
+
+ Accounting for 4.24% of the global population, Americans collectively possess 29.4% of the world's total wealth, and Americans make up roughly half of the world's population of millionaires.[444] The Global Food Security Index ranked the U.S. number one for food affordability and overall food security in March 2013.[445] Americans on average have more than twice as much living space per dwelling and per person as European Union residents, and more than every EU nation.[446] For 2017 the United Nations Development Programme ranked the United States 13th among 189 countries in its Human Development Index and 25th among 151 countries in its inequality-adjusted HDI (IHDI).[447]
+
+ Wealth, like income and taxes, is highly concentrated; the richest 10% of the adult population possess 72% of the country's household wealth, while the bottom half claim only 2%.[448] According to a September 2017 report by the Federal Reserve, the top 1% controlled 38.6% of the country's wealth in 2016.[449] According to a 2018 study by the OECD, the United States has a larger percentage of low-income workers than almost any other developed nation. This is largely because at-risk workers get almost no government support and are further set back by a very weak collective bargaining system.[450] The top one percent of income-earners accounted for 52 percent of the income gains from 2009 to 2015, where income is defined as market income excluding government transfers.[451] In 2018, U.S. income inequality reached the highest level ever recorded by the Census Bureau.[452]
+
+ After years of stagnation, median household income reached a record high in 2016 following two consecutive years of record growth. Income inequality, however, remains at record highs, with the top fifth of earners taking home more than half of all income.[454] The rise in the share of total annual income received by the top one percent, which has more than doubled from nine percent in 1976 to 20 percent in 2011, has significantly affected income inequality,[455] leaving the United States with one of the widest income distributions among OECD nations.[456] The extent and relevance of income inequality is a matter of debate.[457][458][459]
+
+ Between June 2007 and November 2008, the global recession led to falling asset prices around the world. Assets owned by Americans lost about a quarter of their value.[460] After peaking in the second quarter of 2007, household wealth fell by $14 trillion, but it has since recovered and stands $14 trillion above 2006 levels.[461] At the end of 2014, household debt amounted to $11.8 trillion,[462] down from $13.8 trillion at the end of 2008.[463]
+
+ There were about 578,424 sheltered and unsheltered homeless persons in the US in January 2014, with almost two-thirds staying in an emergency shelter or transitional housing program.[464] In 2011, 16.7 million children lived in food-insecure households, about 35% more than 2007 levels, though only 1.1% of U.S. children, or 845,000, saw reduced food intake or disrupted eating patterns at some point during the year, and most cases were not chronic.[465] As of June 2018[update], 40 million people, roughly 12.7% of the U.S. population, were living in poverty, with 18.5 million of those living in deep poverty (a family income below one-half of the poverty threshold) and over five million living "in 'Third World' conditions." In 2016, 13.3 million children were living in poverty, making up 32.6% of the impoverished population.[466] In 2017, the U.S. state or territory with the lowest poverty rate was New Hampshire (7.6%), and the one with the highest was American Samoa (65%).[467][468][469]
+
+ Personal transportation is dominated by automobiles, which operate on a network of 4 million miles (6.4 million kilometers) of public roads.[471] The United States has the world's second-largest automobile market,[472] and has the highest rate of per-capita vehicle ownership in the world, with 765 vehicles per 1,000 Americans (1996).[473][needs update] In 2017, there were 255,009,283 non-two wheel motor vehicles, or about 910 vehicles per 1,000 people.[474]
+
+ The civil airline industry is entirely privately owned and has been largely deregulated since 1978, while most major airports are publicly owned.[475] The three largest airlines in the world by passengers carried are US-based; American Airlines is number one after its 2013 acquisition by US Airways.[476] Of the world's 50 busiest passenger airports, 16 are in the United States, including the busiest, Hartsfield–Jackson Atlanta International Airport.[477]
+
+ The United States energy market is about 29,000 terawatt hours per year.[478] In 2005, 40% of this energy came from petroleum, 23% from coal, and 22% from natural gas. The remainder was supplied by nuclear and renewable energy sources.[479]
+
+ Since 2007, total greenhouse gas emissions by the United States have been the second highest by country, exceeded only by China.[480] The United States has historically been the world's largest producer of greenhouse gases, and greenhouse gas emissions per capita remain high.[481]
+
+ The United States is home to many cultures and a wide variety of ethnic groups, traditions, and values.[483][484] Aside from the Native American, Native Hawaiian, and Native Alaskan populations, nearly all Americans or their ancestors settled or immigrated within the past five centuries.[485] Mainstream American culture is a Western culture largely derived from the traditions of European immigrants with influences from many other sources, such as traditions brought by slaves from Africa.[483][486] More recent immigration from Asia and especially Latin America has added to a cultural mix that has been described as both a homogenizing melting pot, and a heterogeneous salad bowl in which immigrants and their descendants retain distinctive cultural characteristics.[483]
+
+ Americans have traditionally been characterized by a strong work ethic, competitiveness, and individualism,[487] as well as a unifying belief in an "American creed" emphasizing liberty, equality, private property, democracy, rule of law, and a preference for limited government.[488] Americans are extremely charitable by global standards. According to a 2006 British study, Americans gave 1.67% of GDP to charity, more than any other nation studied.[489][490][491]
+
+ The American Dream, or the perception that Americans enjoy high social mobility, plays a key role in attracting immigrants.[492] Whether this perception is accurate has been a topic of debate.[493][494][495][496][420][497] While mainstream culture holds that the United States is a classless society,[498] scholars identify significant differences between the country's social classes, affecting socialization, language, and values.[499] While Americans tend to greatly value socioeconomic achievement, being ordinary or average is also generally seen as a positive attribute.[500]
+
+ In the 18th and early 19th centuries, American art and literature took most of their cues from Europe. Writers such as Washington Irving, Nathaniel Hawthorne, Edgar Allan Poe, and Henry David Thoreau established a distinctive American literary voice by the middle of the 19th century. Mark Twain and poet Walt Whitman were major figures in the century's second half; Emily Dickinson, virtually unknown during her lifetime, is now recognized as an essential American poet.[501] A work seen as capturing fundamental aspects of the national experience and character—such as Herman Melville's Moby-Dick (1851), Twain's The Adventures of Huckleberry Finn (1885), F. Scott Fitzgerald's The Great Gatsby (1925) and Harper Lee's To Kill a Mockingbird (1960)—may be dubbed the "Great American Novel."[502]
+
+ Twelve U.S. citizens have won the Nobel Prize in Literature, most recently Bob Dylan in 2016. William Faulkner, Ernest Hemingway and John Steinbeck are often named among the most influential writers of the 20th century.[503] Popular literary genres such as the Western and hardboiled crime fiction developed in the United States. The Beat Generation writers opened up new literary approaches, as have postmodernist authors such as John Barth, Thomas Pynchon, and Don DeLillo.[504]
+
+ The transcendentalists, led by Thoreau and Ralph Waldo Emerson, established the first major American philosophical movement. After the Civil War, Charles Sanders Peirce and then William James and John Dewey were leaders in the development of pragmatism. In the 20th century, the work of W. V. O. Quine and Richard Rorty, and later Noam Chomsky, brought analytic philosophy to the fore of American philosophical academia. John Rawls and Robert Nozick also led a revival of political philosophy.
+
+ In the visual arts, the Hudson River School was a mid-19th-century movement in the tradition of European naturalism. The 1913 Armory Show in New York City, an exhibition of European modernist art, shocked the public and transformed the U.S. art scene.[505] Georgia O'Keeffe, Marsden Hartley, and others experimented with new, individualistic styles. Major artistic movements such as the abstract expressionism of Jackson Pollock and Willem de Kooning and the pop art of Andy Warhol and Roy Lichtenstein developed largely in the United States. The tide of modernism and then postmodernism has brought fame to American architects such as Frank Lloyd Wright, Philip Johnson, and Frank Gehry.[506] Americans have long been important in the modern artistic medium of photography, with major photographers including Alfred Stieglitz, Edward Steichen, Edward Weston, and Ansel Adams.[507]
+
+ Mainstream American cuisine is similar to that in other Western countries. Wheat is the primary cereal grain, with about three-quarters of grain products made of wheat flour,[508] and many dishes use indigenous ingredients such as turkey, venison, potatoes, sweet potatoes, corn, squash, and maple syrup, which were consumed by Native Americans and early European settlers.[509] These homegrown foods are part of a shared national menu on one of America's most popular holidays, Thanksgiving, when some Americans make traditional foods to celebrate the occasion.[510]
+
+ The American fast food industry, the world's largest,[511] pioneered the drive-through format in the 1940s.[512] Characteristic dishes such as apple pie, fried chicken, pizza, hamburgers, and hot dogs derive from the recipes of various immigrants. French fries, Mexican dishes such as burritos and tacos, and pasta dishes freely adapted from Italian sources are widely consumed.[513] Americans drink three times as much coffee as tea.[514] Marketing by U.S. industries is largely responsible for making orange juice and milk ubiquitous breakfast beverages.[515][516]
+
+ Although little known at the time, Charles Ives's work of the 1910s established him as the first major U.S. composer in the classical tradition, while experimentalists such as Henry Cowell and John Cage created a distinctive American approach to classical composition. Aaron Copland and George Gershwin developed a new synthesis of popular and classical music.
+
+ The rhythmic and lyrical styles of African-American music have deeply influenced American music at large, distinguishing it from European and African traditions. Elements from folk idioms such as the blues and what is now known as old-time music were adopted and transformed into popular genres with global audiences. Jazz was developed by innovators such as Louis Armstrong and Duke Ellington early in the 20th century. Country music developed in the 1920s, and rhythm and blues in the 1940s.[517]
+
+ Elvis Presley and Chuck Berry were among the mid-1950s pioneers of rock and roll. Rock bands such as Metallica, the Eagles, and Aerosmith are among the highest grossing in worldwide sales.[518][519][520] In the 1960s, Bob Dylan emerged from the folk revival to become one of America's most celebrated songwriters and James Brown led the development of funk.
+
+ More recent American creations include hip hop and house music. American pop stars such as Elvis Presley, Michael Jackson, and Madonna have become global celebrities,[517] as have contemporary musical artists such as Taylor Swift, Britney Spears, Katy Perry, Beyoncé, Jay-Z, Eminem, Kanye West, and Ariana Grande.[521]
+
+ Hollywood, a northern district of Los Angeles, California, is one of the leaders in motion picture production.[522] The world's first commercial motion picture exhibition was given in New York City in 1894, using Thomas Edison's Kinetoscope.[523] Since the early 20th century, the U.S. film industry has largely been based in and around Hollywood, although in the 21st century an increasing number of films are not made there, and film companies have been subject to the forces of globalization.[524]
+
+ Director D. W. Griffith, the top American filmmaker during the silent film period, was central to the development of film grammar, and producer/entrepreneur Walt Disney was a leader in both animated film and movie merchandising.[525] Directors such as John Ford redefined the image of the American Old West, and, like others such as John Huston, broadened the possibilities of cinema with location shooting. The industry enjoyed its golden years, in what is commonly referred to as the "Golden Age of Hollywood," from the early sound period until the early 1960s,[526] with screen actors such as John Wayne and Marilyn Monroe becoming iconic figures.[527][528] In the 1970s, "New Hollywood" or the "Hollywood Renaissance"[529] was defined by grittier films influenced by French and Italian realist pictures of the post-war period.[530] In more recent times, directors such as Steven Spielberg, George Lucas and James Cameron have gained renown for their blockbuster films, often characterized by high production costs and earnings.
+
+ Notable films topping the American Film Institute's AFI 100 list include Orson Welles's Citizen Kane (1941), which is frequently cited as the greatest film of all time,[531][532] Casablanca (1942), The Godfather (1972), Gone with the Wind (1939), Lawrence of Arabia (1962), The Wizard of Oz (1939), The Graduate (1967), On the Waterfront (1954), Schindler's List (1993), Singin' in the Rain (1952), It's a Wonderful Life (1946) and Sunset Boulevard (1950).[533] The Academy Awards, popularly known as the Oscars, have been held annually by the Academy of Motion Picture Arts and Sciences since 1929,[534] and the Golden Globe Awards have been held annually since January 1944.[535]
+
+ American football is by several measures the most popular spectator sport;[537] the National Football League (NFL) has the highest average attendance of any sports league in the world, and the Super Bowl is watched by tens of millions globally. Baseball has been regarded as the U.S. national sport since the late 19th century, with Major League Baseball (MLB) being the top league. Basketball and ice hockey are the country's next two leading professional team sports, with the top leagues being the National Basketball Association (NBA) and the National Hockey League (NHL). College football and basketball attract large audiences.[538] In soccer, the country hosted the 1994 FIFA World Cup, the men's national soccer team has qualified for ten World Cups, and the women's team has won the FIFA Women's World Cup four times; Major League Soccer is the sport's highest league in the United States (featuring 23 American and three Canadian teams). The market for professional sports in the United States is roughly $69 billion, roughly 50% larger than that of all of Europe, the Middle East, and Africa combined.[539]
+
+ Eight Olympic Games have taken place in the United States. The 1904 Summer Olympics in St. Louis, Missouri were the first ever Olympic Games held outside of Europe.[540] As of 2017[update], the United States has won 2,522 medals at the Summer Olympic Games, more than any other country, and 305 in the Winter Olympic Games, the second most behind Norway.[541]
+ While most major U.S. sports such as baseball and American football have evolved out of European practices, basketball, volleyball, skateboarding, and snowboarding are American inventions, some of which have become popular worldwide. Lacrosse and surfing arose from Native American and Native Hawaiian activities that predate Western contact.[542] The most watched individual sports are golf and auto racing, particularly NASCAR.[543][544]
+
+ The four major broadcasters in the U.S. are the National Broadcasting Company (NBC), Columbia Broadcasting System (CBS), American Broadcasting Company (ABC), and Fox Broadcasting Company (FOX). The four major broadcast television networks are all commercial entities. Cable television offers hundreds of channels catering to a variety of niches.[545] Americans listen to radio programming, also largely commercial, on average just over two-and-a-half hours a day.[546]
+
+ In 1998, the number of U.S. commercial radio stations had grown to 4,793 AM stations and 5,662 FM stations. In addition, there are 1,460 public radio stations. Most of these stations are run by universities and public authorities for educational purposes and are financed by public or private funds, subscriptions, and corporate underwriting. Much public-radio broadcasting is supplied by NPR. NPR was incorporated in February 1970 under the Public Broadcasting Act of 1967; its television counterpart, PBS, was created by the same legislation. As of September 30, 2014[update], there are 15,433 licensed full-power radio stations in the U.S. according to the U.S. Federal Communications Commission (FCC).[547]
+
+ Well-known newspapers include The Wall Street Journal, The New York Times, and USA Today.[548] Although the cost of publishing has increased over the years, the price of newspapers has generally remained low, forcing newspapers to rely more on advertising revenue and on articles provided by a major wire service, such as the Associated Press or Reuters, for their national and world coverage. With very few exceptions, all the newspapers in the U.S. are privately owned, either by large chains such as Gannett or McClatchy, which own dozens or even hundreds of newspapers; by small chains that own a handful of papers; or in a situation that is increasingly rare, by individuals or families. Major cities often have "alternative weeklies" to complement the mainstream daily papers, such as New York City's The Village Voice or Los Angeles' LA Weekly. Major cities may also support a local business journal, trade papers relating to local industries, and papers for local ethnic and social groups. Aside from web portals and search engines, the most popular websites are Facebook, YouTube, Wikipedia, Yahoo!, eBay, Amazon, and Twitter.[549]
+
+ More than 800 publications are produced in Spanish, the second most commonly used language in the United States behind English.[550][551]
en/5885.html.txt ADDED
@@ -0,0 +1,199 @@
+ A USB flash drive[note 1] is a data storage device that includes flash memory with an integrated USB interface. It is typically removable, rewritable and much smaller than an optical disc. Most weigh less than 30 g (1 oz). Since first appearing on the market in late 2000, as with virtually all other computer memory devices, storage capacities have risen while prices have dropped. As of March 2016[update], flash drives with anywhere from 8 to 256 gigabytes (GB[2]) were frequently sold, while 512 GB and 1 terabyte (TB[3]) units were less frequent.[4][5] As of 2018, 2 TB flash drives were the largest available in terms of storage capacity.[6] Some allow up to 100,000 write/erase cycles, depending on the exact type of memory chip used, and are thought to last between 10 and 100 years under normal circumstances (shelf storage time[7]).
+
+ USB flash drives are often used for storage, data back-up, and transfer of computer files. Compared with floppy disks or CDs, they are smaller, faster, have significantly more capacity, and are more durable due to a lack of moving parts. Additionally, they are immune to electromagnetic interference (unlike floppy disks), and are unharmed by surface scratches (unlike CDs). Until about 2005, most desktop and laptop computers were supplied with floppy disk drives in addition to USB ports, but floppy disk drives became obsolete after widespread adoption of USB ports and the larger USB drive capacity compared to the "1.44 megabyte" (1440 kibibyte) 3.5-inch floppy disk.
+
+ USB flash drives use the USB mass storage device class standard, supported natively by modern operating systems such as Windows, Linux, macOS and other Unix-like systems, as well as many BIOS boot ROMs. USB drives with USB 2.0 support can store more data and transfer faster than much larger optical disc drives like CD-RW or DVD-RW drives and can be read by many other systems such as the Xbox One, PlayStation 4, DVD players, automobile entertainment systems, and in a number of handheld devices such as smartphones and tablet computers, though the electronically similar SD card is better suited for those devices.
+
+ A flash drive consists of a small printed circuit board carrying the circuit elements and a USB connector, insulated electrically and protected inside a plastic, metal, or rubberized case, which can be carried in a pocket or on a key chain, for example. The USB connector may be protected by a removable cap or by retracting into the body of the drive, although it is not likely to be damaged if unprotected. Most flash drives use a standard type-A USB connection allowing connection with a port on a personal computer, but drives for other interfaces also exist. USB flash drives draw power from the computer via the USB connection. Some devices combine the functionality of a portable media player with USB flash storage; they require a battery only when used to play music on the go.
+
+ The basis for USB flash drives is flash memory, a type of floating-gate semiconductor memory invented by Fujio Masuoka in the early 1980s. Flash memory uses floating-gate MOSFET transistors as memory cells.[8][9]
+
+ M-Systems, an Israeli company, was granted a US patent on November 14, 2000, titled "Architecture for a [USB]-based Flash Disk", crediting the invention to Amir Ban, Dov Moran and Oron Ogdan, all M-Systems employees at the time. The patent application was filed by M-Systems in April 1999.[10][1][11] Later in 1999, IBM filed an invention disclosure by one of its employees.[1] Flash drives were sold initially by Trek 2000 International, a company in Singapore, which began selling in early 2000. IBM became the first to sell USB flash drives in the United States in 2000.[1] The initial storage capacity of a flash drive was 8 MB.[12][11] Another version of the flash drive, described as a pen drive, was also developed; Pua Khein-Seng from Malaysia has been credited with this invention.[13] Patent disputes have arisen over the years, with competing companies, including Singaporean company Trek Technology and Chinese company Netac Technology, attempting to enforce their patents.[14] Trek won a suit in Singapore,[15][16] but has lost battles in other countries.[17] Netac Technology has brought lawsuits against PNY Technologies,[18] Lenovo,[19] aigo,[20] Sony,[21][22][23] and Taiwan's Acer and Tai Guen Enterprise Co.[23]
+
+ Flash drives are often measured by the rate at which they transfer data. Transfer rates may be given in megabytes per second (MB/s), megabits per second (Mbit/s), or in optical drive multipliers such as "180X" (180 times 150 KiB/s).[24] File transfer rates vary considerably among devices. Second-generation flash drives were claimed to read at up to 30 MB/s and write at about half that rate, roughly 20 times faster than the theoretical transfer rate of the previous standard, USB 1.1, which is limited to 12 Mbit/s (1.5 MB/s) before protocol overhead.[25] The effective transfer rate of a device is significantly affected by the data access pattern.[26]
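+
+ The unit conversions behind these figures can be made explicit; the following is a minimal Python sketch (the function names are ours, not from any cited source):
+
+   KIB = 1024
+
+   def optical_x_to_mb_per_s(x):
+       """Optical '1X' is 150 KiB/s, so '180X' is 180 * 150 KiB/s."""
+       return x * 150 * KIB / 1e6  # decimal megabytes per second
+
+   def mbit_to_mb_per_s(mbit_per_s):
+       """Line rate in Mbit/s to MB/s, ignoring protocol overhead."""
+       return mbit_per_s / 8
+
+   print(optical_x_to_mb_per_s(180))  # ~27.6 MB/s
+   print(mbit_to_mb_per_s(12))        # 1.5 MB/s (USB 1.1 full speed)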
+
+ By 2002, USB flash drives had USB 2.0 connectivity, which has 480 Mbit/s as the transfer rate upper bound; after accounting for protocol overhead, that translates to a 35 MB/s effective throughput.[27] That same year, Intel sparked widespread use of second-generation USB by including USB 2.0 ports in its laptops.[28]
+
+ Third-generation USB flash drives were announced in late 2008 and became available for purchase in 2010.[citation needed] Like USB 2.0 before it, USB 3.0 dramatically improved data transfer rates compared to its predecessor. The USB 3.0 interface specified transfer rates up to 5 Gbit/s (625 MB/s), compared to USB 2.0's 480 Mbit/s (60 MB/s).[citation needed] By 2010 the maximum available storage capacity for the devices had reached upwards of 128 GB.[11] USB 3.0 was slow to appear in laptops; as of 2010, the majority of laptop models still offered only USB 2.0.[28]
+
+ In January 2013, tech company Kingston released a flash drive with 1 TB of storage.[29] The first USB 3.1 type-C flash drives, with read/write speeds of around 530 MB/s, were announced in March 2015.[30] As of July 2016, flash drives with 8 to 256 GB capacity were sold more frequently than those with capacities between 512 GB and 1 TB.[4][5] In 2017, Kingston Technology announced the release of a 2 TB flash drive.[31] In 2018, SanDisk announced a 1 TB USB-C flash drive, the smallest of its kind.[32]
+
+ Internals of a typical USB flash drive
+
+ On a USB flash drive, one end of the device is fitted with a single Standard-A USB plug; some flash drives additionally offer a micro USB plug, facilitating data transfers between different devices.[33]
+
+ Inside the plastic casing is a small printed circuit board, which has some power circuitry and a small number of surface-mounted integrated circuits (ICs).[citation needed] Typically, one of these ICs provides an interface between the USB connector and the onboard memory, while the other is the flash memory. Drives typically use the USB mass storage device class to communicate with the host.[34]
+
+ Flash memory combines a number of older technologies, with lower cost, lower power consumption and small size made possible by advances in semiconductor device fabrication technology. The memory storage was based on earlier EPROM and EEPROM technologies. These had limited capacity, were slow for both reading and writing, required complex high-voltage drive circuitry, and could be re-written only after erasing the entire contents of the chip.
+
+ Hardware designers later developed EEPROMs with the erasure region broken up into smaller "fields" that could be erased individually without affecting the others. Altering the contents of a particular memory location involved copying the entire field into an off-chip buffer memory, erasing the field, modifying the data as required in the buffer, and re-writing it into the same field. This required considerable computer support, and PC-based EEPROM flash memory systems often carried their own dedicated microprocessor system. Flash drives are more or less a miniaturized version of this.
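+
+ A miniature Python sketch of that read-modify-write cycle (the block size and names are illustrative, not taken from any particular chip):
+
+   BLOCK = 4096  # illustrative erase-"field" size in bytes
+
+   def write_byte(flash, addr, value):
+       base = (addr // BLOCK) * BLOCK
+       buf = bytearray(flash[base:base + BLOCK])   # copy the field to a buffer
+       flash[base:base + BLOCK] = b"\xff" * BLOCK  # erase the whole field
+       buf[addr - base] = value                    # modify the data in the buffer
+       flash[base:base + BLOCK] = buf              # re-write it into the same field
+
+   flash = bytearray(b"\xff" * (4 * BLOCK))  # a tiny 16 KiB "chip"
+   write_byte(flash, 5000, 0x42)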
+
+ The development of high-speed serial data interfaces such as USB made semiconductor memory systems with serially accessed storage viable, and the simultaneous development of small, high-speed, low-power microprocessor systems allowed this to be incorporated into extremely compact systems. Serial access requires far fewer electrical connections for the memory chips than does parallel access, which has simplified the manufacture of multi-gigabyte drives.
+
+ Computers access modern[update] flash memory systems very much like hard disk drives, where the controller system has full control over where information is actually stored. The actual EEPROM writing and erasure processes are, however, still very similar to the earlier systems described above.
+
+ Many low-cost MP3 players simply add extra software and a battery to a standard flash memory control microprocessor so it can also serve as a music playback decoder. Most of these players can also be used as a conventional flash drive, for storing files of any type.
+
+ There are typically five parts to a flash drive: a Standard-A USB plug, a USB mass storage controller, one or more NAND flash memory chips, a crystal oscillator that generates the device's clock signal, and a protective cover.
+
+ The typical device may also include jumpers and test pins, status LEDs, a write-protect switch, unpopulated space for a second memory chip, a connector cap, and a hole or loop for attaching a lanyard.
+
+ Most USB flash drives weigh less than 30 g (1 oz).[37] While some manufacturers are competing for the smallest size,[38] with the biggest memory, offering drives only a few millimeters larger than the USB plug itself,[39] some manufacturers differentiate their products by using elaborate housings, which are often bulky and make the drive difficult to connect to the USB port. Because the USB port connectors on a computer housing are often closely spaced, plugging a flash drive into a USB port may block an adjacent port. Such devices may carry the USB logo only if sold with a separate extension cable. Such cables are USB-compatible but do not conform to the USB standard.[40][41]
+
+ USB flash drives have been integrated into other commonly carried items, such as watches, pens, laser pointers, and even the Swiss Army Knife; others have been fitted with novelty cases such as toy cars or Lego bricks. USB flash drives with images of dragons, cats or aliens are very popular in Asia.[42] The small size, robustness and cheapness of USB flash drives make them an increasingly popular peripheral for case modding.
+
+ Most flash drives ship preformatted with the FAT32 or exFAT file systems. The ubiquity of the FAT32 file system allows the drive to be accessed on virtually any host device with USB support. Also, standard FAT maintenance utilities (e.g., ScanDisk) can be used to repair or retrieve corrupted data. However, because a flash drive appears as a USB-connected hard drive to the host system, the drive can be reformatted to any file system supported by the host operating system.
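+
+ On a Linux host with dosfstools installed, reformatting might look like the following sketch; /dev/sdX1 is a placeholder partition name, so verify the actual device before running anything like this:
+
+   import subprocess
+
+   # Re-create a FAT32 file system with the volume label "USBDRIVE".
+   subprocess.run(["mkfs.vfat", "-F", "32", "-n", "USBDRIVE", "/dev/sdX1"],
+                  check=True)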
+
+ The memory in flash drives is commonly engineered with multi-level cell (MLC) based memory that is good for around 3,000-5,000 program-erase cycles,[46] but some flash drives have single-level cell (SLC) based memory that is good for around 100,000 writes. There is virtually no limit to the number of reads from such flash memory, so a well-worn USB drive may be write-protected to help ensure the life of individual cells.
+
+ Estimation of flash memory endurance is a challenging subject that depends on the SLC/MLC/TLC memory type, size of the flash memory chips, and actual usage pattern. As a result, a USB flash drive can last from a few days to several hundred years.[47]
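+
+ A back-of-envelope model of that estimate, assuming ideal wear leveling spreads writes evenly across all cells (every number below is illustrative):
+
+   def lifetime_years(capacity_gb, pe_cycles, gb_written_per_day,
+                      write_amplification=2.0):
+       total_writable_gb = capacity_gb * pe_cycles
+       effective_daily_gb = gb_written_per_day * write_amplification
+       return total_writable_gb / effective_daily_gb / 365
+
+   # A 16 GB MLC drive (~3,000 cycles) rewritten at 1 GB/day: ~66 years.
+   print(round(lifetime_years(16, 3000, 1.0)))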
+
+ Regardless of the endurance of the memory itself, the USB connector hardware is specified to withstand only around 1,500 insert-removal cycles.[48]
+
+ Counterfeit USB flash drives are sometimes sold with claims of having higher capacities than they actually have. These are typically low-capacity USB drives whose flash memory controller firmware is modified so that they emulate larger-capacity drives (for example, a 2 GB drive being marketed as a 64 GB drive). When plugged into a computer, they report themselves as being the larger capacity they were sold as, but when data is written to them, either the write fails, the drive freezes up, or it overwrites existing data. Software tools exist to check and detect fake USB drives,[49][50] and in some cases it is possible to repair these devices to remove the false capacity information and use their real storage limit.[51]
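+
+ The core idea behind such checking tools can be sketched as follows: write a position-dependent pattern across the mounted drive, then read it back and verify. This is a simplified illustration, not any specific tool's algorithm:
+
+   import hashlib, os
+
+   def pattern(i, size):
+       """A test block uniquely identifiable by its index i."""
+       seed = hashlib.sha256(i.to_bytes(8, "big")).digest()
+       return (seed * (size // len(seed) + 1))[:size]
+
+   def check_capacity(mount_point, total_mb, block_mb=16):
+       path = os.path.join(mount_point, "capacity_test.bin")
+       size = block_mb * 2**20
+       n = total_mb // block_mb
+       with open(path, "wb") as f:
+           for i in range(n):
+               f.write(pattern(i, size))  # may fail early on some fakes
+       try:
+           with open(path, "rb") as f:
+               # On a fake drive, later writes wrap around and clobber
+               # earlier blocks, so read-back no longer matches.
+               return all(f.read(size) == pattern(i, size) for i in range(n))
+       finally:
+           os.remove(path)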
+
+ Transfer speeds are technically determined by the slowest of three factors: the USB version used, the speed at which the USB controller device can read and write data onto the flash memory, and the speed of the hardware bus, especially in the case of add-on USB ports.
+
+ USB flash drives usually specify their read and write speeds in megabytes per second (MB/s); read speed is usually faster. These speeds are for optimal conditions; real-world speeds are usually slower. In particular, circumstances that often lead to speeds much lower than advertised are transfer (particularly writing) of many small files rather than a few very large ones, and mixed reading and writing to the same device.
+
+ In a typical well-conducted review of a number of high-performance USB 3.0 drives, a drive that could read large files at 68 MB/s and write them at 46 MB/s managed only 14 MB/s and 0.3 MB/s, respectively, with many small files. When combining streaming reads and writes, the speed of another drive, which could read at 92 MB/s and write at 70 MB/s, fell to 8 MB/s. These figures vary radically from one drive to another; some drives could write small files at over 10% of the speed for large ones. The examples given were chosen to illustrate extremes.[52]
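+
+ A rough way to reproduce this contrast is to time one large file against the same data split into many small files. In this sketch, `dest` is assumed to be the drive's mount point, and an fsync is needed to defeat operating-system caches:
+
+   import os, time
+
+   def write_mb_per_s(dest, payloads):
+       start = time.perf_counter()
+       for name, data in payloads:
+           with open(os.path.join(dest, name), "wb") as f:
+               f.write(data)
+               f.flush()
+               os.fsync(f.fileno())  # force the data onto the device
+       total = sum(len(d) for _, d in payloads)
+       return total / (time.perf_counter() - start) / 1e6
+
+   one_big = [("big.bin", os.urandom(256 * 2**20))]     # 1 file of 256 MB
+   many_small = [(f"s{i}.bin", os.urandom(64 * 2**10))  # 4096 files of 64 KB
+                 for i in range(4096)]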
+
+ The most common use of flash drives is to transport and store personal files, such as documents, pictures and videos. Individuals also store medical information on flash drives for emergencies and disaster preparation.
+
+ With the wide deployment of flash drives in various environments (secured or otherwise), the issue of data and information security remains important. The use of biometrics and encryption is becoming the norm with the need for increased security for data; on-the-fly encryption systems are particularly useful in this regard, as they can transparently encrypt large amounts of data. In some cases a secure USB drive may use a hardware-based encryption mechanism, which uses a hardware module instead of software to encrypt data strongly. IEEE 1667 is an attempt to create a generic authentication platform for USB drives; it is supported in Windows 7 and Windows Vista (Service Pack 2 with a hotfix).[53]
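+
+ Hardware-encrypted drives do this on the device itself; as a software-side illustration only, a file can be encrypted before it is copied to the drive, here with the third-party Python cryptography package (the file paths are placeholders):
+
+   from cryptography.fernet import Fernet
+
+   key = Fernet.generate_key()  # keep the key somewhere safe, not on the drive
+   cipher = Fernet(key)
+
+   with open("report.docx", "rb") as f:
+       token = cipher.encrypt(f.read())
+   with open("/media/usb/report.docx.enc", "wb") as f:
+       f.write(token)
+   # cipher.decrypt(token) recovers the original bytes later.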
+
+ One development in the use of a USB flash drive as an application carrier is the Computer Online Forensic Evidence Extractor (COFEE) application developed by Microsoft. COFEE is a set of applications designed to search for and extract digital evidence on computers confiscated from suspects.[54] Forensic software is required not to alter, in any way, the information stored on the computer being examined. Other forensic suites run from CD-ROM or DVD-ROM, but cannot store data on the media they are run from (although they can write to other attached devices, such as external drives or memory sticks).
+
+ Motherboard firmware (including BIOS and UEFI) can be updated using USB flash drives. Usually, a new firmware image is downloaded and placed onto a FAT16- or FAT32-formatted USB flash drive connected to the system to be updated, and the path to the new firmware image is selected within the update component of the system's firmware.[55] Some motherboard manufacturers also allow such updates to be performed without entering the firmware's update component, making it possible to easily recover systems with corrupted firmware.[56]
+
+ Also, HP has introduced a USB floppy drive key, which is an ordinary USB flash drive with the additional ability to perform floppy drive emulation, allowing its use for updating system firmware where direct use of USB flash drives is not supported. The desired mode of operation (either regular USB mass storage device or floppy drive emulation) is selected by a sliding switch on the device's housing.[57][58]
+
+ Most current PC firmware permits booting from a USB drive, allowing the launch of an operating system from a bootable flash drive. Such a configuration is known as a Live USB.[59]
+
+ Original flash memory designs had very limited estimated lifetimes. The failure mechanism for flash memory cells is analogous to a metal fatigue mode; the device fails by refusing to write new data to specific cells that have been subject to many read-write cycles over the device's lifetime. Premature failure of a "live USB" could be circumvented by using a flash drive with a write-lock switch as a WORM device, identical to a live CD. Originally, this potential failure mode limited the use of "live USB" system to special-purpose applications or temporary tasks, such as:
+
+ As of 2011[update], newer flash memory designs have much higher estimated lifetimes. Several manufacturers are now offering warranties of 5 years or more. Such warranties should make the device more attractive for more applications. By reducing the probability of the device's premature failure, flash memory devices can now be considered for use where a magnetic disk would normally have been required. Flash drives have also experienced an exponential growth in their storage capacity over time (following the Moore's Law growth curve). As of 2013, single-packaged devices with capacities of 1 TB are readily available,[60] and devices with 16 GB capacity are very economical. Storage capacities in this range have traditionally been considered to offer adequate space, because they allow enough space for both the operating system software and some free space for the user's data.
+
+ Installers of some operating systems can be stored to a flash drive instead of a CD or DVD, including various Linux distributions, Windows 7 and newer versions, and macOS. In particular, Mac OS X 10.7 is distributed only online, through the Mac App Store, or on flash drives; for a MacBook Air with Boot Camp and no external optical drive, a flash drive can be used to run installation of Windows or Linux.
+
+ However, for installation of Windows 7 and later versions, using a USB flash drive with hard disk drive emulation, as detected by the PC's firmware, is recommended in order to boot from it. Transcend is the only manufacturer of USB flash drives containing such a feature.
+
+ Furthermore, for installation of Windows XP, using a USB flash drive with a storage limit of at most 2 GB is recommended in order to boot from it.
+
+ In Windows Vista and later versions, the ReadyBoost feature allows flash drives (from 4 GB in the case of Windows Vista) to augment operating system memory.[61]
+
+ Flash drives are used to carry applications that run on the host computer without requiring installation. While any standalone application can in principle be used this way, many programs store data, configuration information, etc. on the hard drive and registry of the host computer.
+
+ The U3 company works with drive makers (parent company SanDisk as well as others) to deliver custom versions of applications designed for Microsoft Windows from a special flash drive; U3-compatible devices are designed to autoload a menu when plugged into a computer running Windows. Applications must be modified for the U3 platform not to leave any data on the host machine. U3 also provides a software framework for independent software vendors interested in their platform.
+
+ Ceedo is an alternative product, with the key difference that it does not require Windows applications to be modified in order for them to be carried and run on the drive.
+
+ Similarly, other application virtualization solutions and portable application creators, such as VMware ThinApp (for Windows) or RUNZ (for Linux) can be used to run software from a flash drive without installation.
+
+ In October 2010, Apple Inc. released its newest iteration of the MacBook Air, which had the system's restore files contained on a USB flash drive rather than on traditional install CDs, as the Air does not include an optical drive.[62]
+
+ A wide range of portable applications which are all free of charge, and able to run off a computer running Windows without storing anything on the host computer's drives or registry, can be found in the list of portable software.
+
+ Some value-added resellers are now using a flash drive as part of small-business turnkey solutions (e.g., point-of-sale systems). The drive is used as a backup medium: at the close of business each night, the drive is inserted, and a database backup is saved to the drive. Alternatively, the drive can be left inserted through the business day, and data regularly updated. In either case, the drive is removed at night and taken offsite.
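+
+ Such a nightly backup needs little more than a file copy under a dated name. A minimal sketch, assuming hypothetical paths for the database file and the drive's mount point:
+
+ import shutil
+ import time
+ from pathlib import Path
+
+ DB_FILE = Path("/var/pos/store.db")      # hypothetical database file
+ DRIVE = Path("/media/backup_drive")      # hypothetical mount point
+
+ # Copy the database to the flash drive under a date-stamped name,
+ # preserving timestamps, so each night's backup is kept separate.
+ stamp = time.strftime("%Y-%m-%d")
+ shutil.copy2(DB_FILE, DRIVE / f"store-{stamp}.db")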
+
+ Flash drives also have disadvantages. They are easy to lose and facilitate unauthorized backups. A lesser drawback is that they offer only about one tenth the capacity of hard drives manufactured around the same time.
+
+ Many companies make small solid-state digital audio players, essentially producing flash drives with sound output and a simple user interface. Examples include the Creative MuVo, Philips GoGear and the first generation iPod shuffle. Some of these players are true USB flash drives as well as music players; others do not support general-purpose data storage. Other applications requiring storage, such as digital voice or sound recording, can also be combined with flash drive functionality.[63]
+
+ Many of the smallest players are powered by a permanently fitted rechargeable battery, charged from the USB interface. Fancier devices that function as a digital audio player have a USB host port (typically a female Type-A connector).
+
+ Digital audio files can be transported from one computer to another like any other file, and played on a compatible media player (with caveats for DRM-locked files). In addition, many home Hi-Fi and car stereo head units are now equipped with a USB port. This allows a USB flash drive containing media files in a variety of formats to be played directly on devices which support the format. Some LCD monitors for consumer HDTV viewing have a dedicated USB port through which music and video files can also be played without use of a personal computer.
+
+ Artists have sold or given away USB flash drives, with the first instance believed to be in 2004 when the German punk band Wizo released the Stick EP, only as a USB drive. In addition to five high-bitrate MP3s, it also included a video, pictures, lyrics, and guitar tablature.[64] Subsequently, artists including Nine Inch Nails and Kylie Minogue[65] have released music and promotional material on USB flash drives. The first USB album to be released in the UK was Kiss Does... Rave, a compilation album released by the Kiss Network in April 2007.[66]
+
+ The availability of inexpensive flash drives has enabled them to be used for promotional and marketing purposes, particularly within technical and computer-industry circles (e.g., technology trade shows). They may be given away for free, sold at less than wholesale price, or included as a bonus with another purchased product.
+
+ Usually, such drives will be custom-stamped with a company's logo, as a form of advertising. The drive may be blank, or preloaded with graphics, documentation, web links, Flash animation or other multimedia, and free or demonstration software. Some preloaded drives are read-only, while others are configured with both read-only and user-writable segments. Such dual-partition drives are more expensive.[67]
+
+ Flash drives can be set up to automatically launch stored presentations, websites, articles, and any other software immediately on insertion of the drive using the Microsoft Windows AutoRun feature.[68] Autorunning software this way does not work on all computers, and it is normally disabled by security-conscious users.
+
+ In the arcade game In the Groove and more commonly In The Groove 2, flash drives are used to transfer high scores, screenshots, dance edits, and combos throughout sessions. As of software revision 21 (R21), players can also store custom songs and play them on any machine on which this feature is enabled. While use of flash drives is common, the drive must be Linux compatible.
+
+ In the arcade games Pump it Up NX2 and Pump it Up NXA, a specially produced flash drive is used as a "save file" for unlocked songs, as well as for progressing in the WorldMax and Brain Shower sections of the game.
+
+ In the arcade game Dance Dance Revolution X, an exclusive USB flash drive was made by Konami for the purpose of the link feature from its Sony PlayStation 2 counterpart. However, any USB flash drive can be used in this arcade game.
+
+ Flash drives use little power, have no fragile moving parts, and for most capacities are small and light. Data stored on flash drives is impervious to mechanical shock, magnetic fields, scratches and dust. These properties make them suitable for transporting data from place to place and keeping the data readily at hand.
+
+ Flash drives also store data densely compared to many removable media. In mid-2009, 256 GB drives became available, with the ability to hold many times more data than a DVD (54 DVDs) or even a Blu-ray (10 BDs).[69]
+
+ Flash drives implement the USB mass storage device class so that most modern operating systems can read and write to them without installing device drivers. The flash drives present a simple block-structured logical unit to the host operating system, hiding the individual complex implementation details of the various underlying flash memory devices. The operating system can use any file system or block addressing scheme. Some computers can boot up from flash drives.
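+
+ Because the drive appears to the host as a plain block device, its sectors can be read with ordinary file I/O and no vendor driver. A minimal sketch for a Unix-like system; the device node is hypothetical and reading it normally requires elevated privileges:
+
+ DEVICE = "/dev/sdX"        # hypothetical USB mass storage device node
+ SECTOR_SIZE = 512
+
+ # Read the first sector (often a master boot record) straight from
+ # the block device presented by the USB mass storage class.
+ with open(DEVICE, "rb") as dev:
+     first_sector = dev.read(SECTOR_SIZE)
+ print(first_sector[510:512].hex())   # "55aa" boot signature, if present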
+
+ Specially manufactured flash drives are available that have a tough rubber or metal casing designed to be waterproof and virtually "unbreakable". These flash drives retain their memory after being submerged in water, and even through a machine wash. Leaving such a flash drive out to dry completely before allowing current to run through it has been known to result in a working drive with no future problems. Channel Five's Gadget Show cooked one of these flash drives with propane, froze it with dry ice, submerged it in various acidic liquids, ran over it with a jeep and fired it against a wall with a mortar. A company specializing in recovering lost data from computer drives managed to recover all the data on the drive.[70] All data on the other removable storage devices tested, using optical or magnetic technologies, were destroyed.
+
+ The applications of current data tape cartridges hardly overlap those of flash drives: on tape, cost per gigabyte is very low for large volumes, but the individual drives and media are expensive. Media have a very high capacity and very fast transfer speeds, but store data sequentially and are very slow for random access of data. While disk-based backup is now the primary medium of choice for most companies, tape backup is still popular for taking data off-site for worst-case scenarios and for very large volumes (more than a few hundreds of TB). See LTO tapes.
+
+ Floppy disk drives are rarely fitted to modern computers and are obsolete for normal purposes, although internal and external drives can be fitted if required. Floppy disks may be the method of choice for transferring data to and from very old computers without USB or booting from floppy disks, and so they are sometimes used to change the firmware on, for example, BIOS chips. Devices with removable storage like older Yamaha music keyboards are also dependent on floppy disks, which require computers to process them. Newer devices are built with USB flash drive support.
+
+ Floppy disk hardware emulators exist that use the internal connections and physical form factor of a floppy disk drive but store data on a USB flash drive in solid-state form; the drive can be divided into a number of individual virtual floppy disk images using individual data channels.
+
+ The various writable and re-writable forms of CD and DVD are portable storage media supported by the vast majority of computers as of 2008. CD-R, DVD-R, and DVD+R can be written to only once, RW varieties up to about 1,000 erase/write cycles, while modern NAND-based flash drives often last for 500,000 or more erase/write cycles. DVD-RAM discs are the most suitable optical discs for data storage involving much rewriting.
+
+ Optical storage devices are among the cheapest methods of mass data storage after the hard drive. They are slower than their flash-based counterparts. Standard 120 mm optical discs are larger than flash drives and more subject to damage. Smaller optical media do exist, such as business card CD-Rs which have the same dimensions as a credit card, and the slightly less convenient but higher capacity 80 mm recordable MiniCD and Mini DVD. The small discs are more expensive than the standard size, and do not work in all drives.
+
+ Universal Disk Format (UDF) version 1.50 and above has facilities to support rewritable discs like sparing tables and virtual allocation tables, spreading usage over the entire surface of a disc and maximising life, but many older operating systems do not support this format. Packet-writing utilities such as DirectCD and InCD are available but produce discs that are not universally readable (although based on the UDF standard). The Mount Rainier standard addresses this shortcoming in CD-RW media by running the older file systems on top of it and performing defect management for those standards, but it requires support from both the CD/DVD burner and the operating system. Many drives made today do not support Mount Rainier, and many older operating systems such as Windows XP and below, and Linux kernels older than 2.6.2, do not support it (later versions do). Essentially CDs/DVDs are a good way to record a great deal of information cheaply and have the advantage of being readable by most standalone players, but they are poor at making ongoing small changes to a large collection of information. Flash drives' ability to do this is their major advantage over optical media.
+
+ Flash memory cards, e.g., Secure Digital cards, are available in various formats and capacities, and are used by many consumer devices. However, while virtually all PCs have USB ports, allowing the use of USB flash drives, memory card readers are not commonly supplied as standard equipment (particularly with desktop computers). Although inexpensive card readers are available that read many common formats, this results in two pieces of portable equipment (card plus reader) rather than one.
+
+ Some manufacturers, aiming at a "best of both worlds" solution, have produced card readers that approach the size and form of USB flash drives (e.g., Kingston MobileLite,[71] SanDisk MobileMate[72]). These readers are limited to a specific subset of memory card formats (such as SD, microSD, or Memory Stick), and often completely enclose the card, offering durability and portability approaching, if not quite equal to, that of a flash drive. Although the combined cost of a mini-reader and a memory card is usually slightly higher than a USB flash drive of comparable capacity, the reader + card solution offers additional flexibility of use, and virtually "unlimited" capacity. The ubiquity of SD cards is such that, circa 2011, due to economies of scale, their price is now less than an equivalent-capacity USB flash drive, even with the added cost of a USB SD card reader.
+
+ An additional advantage of memory cards is that many consumer devices (e.g., digital cameras, portable music players) cannot make use of USB flash drives (even if the device has a USB port), whereas the memory cards used by the devices can be read by PCs with a card reader.
+
+ Particularly with the advent of USB, external hard disks have become widely available and inexpensive. External hard disk drives currently cost less per gigabyte than flash drives and are available in larger capacities. Some hard drives support alternative and faster interfaces than USB 2.0 (e.g., Thunderbolt, FireWire and eSATA). For consecutive sector writes and reads (for example, from an unfragmented file), most hard drives can provide a much higher sustained data rate than current NAND flash memory, though mechanical latencies seriously impact hard drive performance.
+
+ Unlike solid-state memory, hard drives are susceptible to damage by shock (e.g., a short fall) and vibration, have limitations on use at high altitude, and although they are shielded by their casings, they are vulnerable when exposed to strong magnetic fields. In terms of overall mass, hard drives are usually larger and heavier than flash drives; however, hard disks sometimes weigh less per unit of storage. Hard disks also suffer from file fragmentation, which can reduce access speed; flash drives, which have no moving parts, are largely unaffected by it.
+
+ Audio tape cassettes and high-capacity floppy disks (e.g., Imation SuperDisk), and other forms of drives with removable magnetic media, such as the Iomega Zip and Jaz drives, are now largely obsolete and rarely used. There are products in today's market that will emulate these legacy drives for both tape and disk (SCSI1/SCSI2, SASI, Magneto optic, Ricoh ZIP, Jaz, IBM3590/ Fujitsu 3490E and Bernoulli for example) in state-of-the-art Compact Flash storage devices – CF2SCSI.
+
+ As highly portable media, USB flash drives are easily lost or stolen. All USB flash drives can have their contents encrypted using third-party disk encryption software, which can often be run directly from the USB drive without installation (for example, FreeOTFE), although some, such as BitLocker, require the user to have administrative rights on every computer on which they are run.
+
+ Archiving software can achieve a similar result by creating encrypted ZIP or RAR files.[73][74]
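+
+ Python's standard library can at least read legacy password-protected ZIP archives of this kind, though creating encrypted archives requires a third-party tool. A minimal sketch, with a hypothetical archive name and password:
+
+ import zipfile
+
+ # Open an encrypted ZIP carried on the flash drive and decrypt one
+ # member with the supplied password (legacy ZipCrypto only).
+ with zipfile.ZipFile("secrets.zip") as zf:
+     zf.setpassword(b"correct horse battery staple")
+     data = zf.read("notes.txt")
+ print(data.decode())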
+
+ Some manufacturers have produced USB flash drives which use hardware-based encryption as part of the design,[75] removing the need for third-party encryption software. In limited circumstances these drives have been shown to have security problems, and are typically more expensive than software-based systems, which are available for free.
+
+ A minority of flash drives support biometric fingerprinting to confirm the user's identity. As of mid-2005[update],[needs update] this was an expensive alternative to standard password protection offered on many new USB flash storage devices. Most fingerprint scanning drives rely upon the host operating system to validate the fingerprint via a software driver, often restricting the drive to Microsoft Windows computers. However, there are USB drives with fingerprint scanners which use controllers that allow access to protected data without any authentication.[76]
+
+ Some manufacturers deploy physical authentication tokens in the form of a flash drive. These are used to control access to a sensitive system by containing encryption keys or, more commonly, communicating with security software on the target machine. The system is designed so the target machine will not operate except when the flash drive device is plugged into it. Some of these "PC lock" devices also function as normal flash drives when plugged into other machines.
+
+ Like all flash memory devices, flash drives can sustain only a limited number of write and erase cycles before the drive fails.[77][unreliable source?][78] This should be a consideration when using a flash drive to run application software or an operating system. To address this, as well as space limitations, some developers have produced special versions of operating systems (such as Linux in Live USB)[79] or commonplace applications (such as Mozilla Firefox) designed to run from flash drives. These are typically optimized for size and configured to place temporary or intermediate files in the computer's main RAM rather than store them temporarily on the flash drive.
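+
+ One way such software avoids wearing the drive is to direct temporary files at a RAM-backed filesystem. A minimal sketch, assuming a Linux host with tmpfs mounted at /dev/shm:
+
+ import tempfile
+
+ # Scratch data is written to RAM (tmpfs), never to the flash drive,
+ # and is discarded automatically when the file is closed.
+ with tempfile.NamedTemporaryFile(dir="/dev/shm", suffix=".tmp") as tmp:
+     tmp.write(b"intermediate data that never touches flash")
+     tmp.flush()
+     print("temporary file at", tmp.name)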
+
+ When used in the same manner as external rotating drives (hard drives, optical drives, or floppy drives), that is, without regard for their underlying technology, USB drives are more likely to fail suddenly: while rotating drives can fail instantaneously, they more frequently give some indication (noises, slowness) that they are about to fail, often with enough advance warning that data can be removed before total failure. USB drives give little or no advance warning of failure. Furthermore, when internal wear-leveling is applied to prolong the life of the flash drive, once failure of even part of the memory occurs it can be difficult or impossible to use the remainder of the drive, which differs from magnetic media, where bad sectors can be marked permanently not to be used.[80]
+
+ Most USB flash drives do not include a write protection mechanism. This feature, which gradually became less common, consists of a switch on the housing of the drive itself, that prevents the host computer from writing or modifying data on the drive. For example, write protection makes a device suitable for repairing virus-contaminated host computers without the risk of infecting a USB flash drive itself. In contrast to SD cards, write protection on USB flash drives (when available) is connected to the drive circuitry, and is handled by the drive itself instead of the host (on SD cards handling of the write-protection notch is optional).
+
+ A drawback to the small physical size of flash drives is that they are easily misplaced or otherwise lost. This is a particular problem if they contain sensitive data (see data security). As a consequence, some manufacturers have added encryption hardware to their drives, although software encryption systems which can be used in conjunction with any mass storage medium will achieve the same result. Most drives can be attached to keychains or lanyards. The USB plug is usually retractable or fitted with a removable protective cap.
+
+ Storage capacity of USB flash drives reached 2 TB in 2019, while hard disks could be as large as 16 TB. As of 2011, USB flash drives were more expensive per unit of storage than large hard drives, but were less expensive in capacities of a few tens of gigabytes.[81]
+
+ Most USB-based flash technology integrates a printed circuit board with a metal tip, which is simply soldered on. As a result, the stress point is where the two pieces join. The quality control of some manufacturers does not ensure a proper solder temperature, further weakening the stress point.[82][83] Since many flash drives stick out from computers, they are likely to be bumped repeatedly and may break at the stress point. Most of the time, a break at the stress point tears the joint from the printed circuit board and results in permanent damage. However, some manufacturers produce low-profile flash drives that do not stick out, and others use a solid metal or plastic uni-body that has no easily discernible stress point. SD cards serve as a good alternative to USB drives since they can be inserted flush.
+
+ Flash drives may present a significant security challenge for some organizations. Their small size and ease of use allows unsupervised visitors or employees to store and smuggle out confidential data with little chance of detection. Both corporate and public computers are vulnerable to attackers connecting a flash drive to a free USB port and using malicious software such as keyboard loggers or packet sniffers.
+
+ For computers set up to be bootable from a USB drive, it is possible to use a flash drive containing a bootable portable operating system to access the files of the computer, even if the computer is password protected. The password can then be changed, or it may be possible to crack the password with a password cracking program and gain full control over the computer. Encrypting files provides considerable protection against this type of attack.
+
+ USB flash drives may also be used deliberately or unwittingly to transfer malware and autorun worms onto a network.
+
+ Some organizations forbid the use of flash drives, and some computers are configured to disable the mounting of USB mass storage devices by users other than administrators; others use third-party software to control USB usage. The use of software allows the administrator to not only provide a USB lock but also control the use of CD-RW, SD cards and other memory devices. This enables companies with policies forbidding the use of USB flash drives in the workplace to enforce these policies. In a lower-tech security solution, some organizations disconnect USB ports inside the computer or fill the USB sockets with epoxy.
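+
+ On Windows, one widely used lockdown of this kind disables the USB mass storage driver through the registry; the sketch below is for illustration only and assumes administrator rights (a service Start value of 4 means "disabled"):
+
+ import winreg
+
+ # Set the USBSTOR service's Start value to 4 so newly attached USB
+ # mass storage devices are no longer mounted.
+ KEY = r"SYSTEM\CurrentControlSet\Services\USBSTOR"
+ with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY, 0,
+                     winreg.KEY_SET_VALUE) as key:
+     winreg.SetValueEx(key, "Start", 0, winreg.REG_DWORD, 4)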
+
+ Some of the security measures taken to prevent confidential data from being removed have had side effects, such as curtailing users' ability to recharge mobile devices from the USB ports on their systems.
+
+ In appearance similar to a USB flash drive, a USB killer is a circuit that charges up capacitors to a high voltage using the power supply pins of a USB port then discharges high voltage pulses onto the data pins. This completely standalone device can instantly and permanently damage or destroy any host hardware that it is connected to.[84]
+
+ The New York-based Human Rights Foundation collaborated with Forum 280 and USB Memory Direct to launch the "Flash Drives for Freedom" program.[85][86] The program was created in 2016 to smuggle flash drives with American and South Korean movies and television shows, as well as a copy of the Korean Wikipedia, into North Korea to spread pro-Western sentiment.[87][88]
+
+ In 2005, Microsoft was using the term "USB Flash Drive" as the common name for these devices when they introduced the Microsoft USB Flash Drive Manager.[89] Alternative names are commonly used, many of which are trademarks of various manufacturers.
+
+ Semiconductor corporations have worked to reduce the cost of the components in a flash drive by integrating various flash drive functions in a single chip, thereby reducing the part-count and overall package-cost.
+
+ Flash drive capacities on the market increase continually. High speed has become a standard for modern flash drives. Capacities exceeding 256 GB were available on the market as early as 2009.[69]
+
+ Lexar is attempting to introduce a USB FlashCard, which would be a compact USB flash drive intended to replace various kinds of flash memory cards. Pretec introduced a similar card, which also plugs into any USB port, but is just one quarter the thickness of the Lexar model.[90] Until 2008, SanDisk manufactured a product called SD Plus, which was a SecureDigital card with a USB connector.[91]
+
+ SanDisk has also introduced a new technology to allow controlled storage and usage of copyrighted materials on flash drives, primarily for use by students. This technology is termed FlashCP.
en/5886.html.txt ADDED
@@ -0,0 +1,204 @@
+
+
+ Star Trek is an American media franchise based on the science fiction television series created by Gene Roddenberry. The first television series, called Star Trek and now known as "The Original Series", debuted on September 8, 1966, and aired for three seasons on NBC. It followed the voyages of the starship USS Enterprise on its five-year mission, the purpose of which was "to explore strange new worlds, to seek out new life and new civilizations, to boldly go where no man has gone before". The USS Enterprise was a space exploration vessel built by the United Federation of Planets in the 23rd century. The Star Trek canon includes the Original Series, an animated series, six spin-off television series, the film franchise, and further adaptations in several media.
+
+ In creating Star Trek, Roddenberry was inspired by C. S. Forester's Horatio Hornblower series of novels, Jonathan Swift's Gulliver's Travels, and television westerns such as Wagon Train. These adventures continued in the 22-episode Star Trek: The Animated Series and six feature films. Six new television series were eventually produced: Star Trek: The Next Generation follows the crew of a new starship Enterprise a century after the original series; Star Trek: Deep Space Nine and Star Trek: Voyager are set contemporaneously with the Next Generation; and Enterprise is set before the original series, in the early days of human interstellar travel. The most recent Star Trek television series, Star Trek: Discovery and Star Trek: Picard, stream exclusively on digital platforms. The adventures of the Next Generation crew continued in four additional feature films. In 2009, the film franchise underwent a reboot, creating an alternate universe later named the Kelvin Timeline; three films were made in this alternate universe. Two additional television series are in development for CBS All Access: Star Trek: Lower Decks, an animated series scheduled to debut in 2020; and Star Trek: Strange New Worlds, featuring the crew of the Enterprise prior to the original series, tentatively set to debut in 2021.[1][2]
+
+ Star Trek has been a cult phenomenon for decades.[3] Fans of the franchise are called "Trekkies" or "Trekkers". The franchise spans a wide range of spin-offs including games, figurines, novels, toys, and comics. Star Trek had a themed attraction in Las Vegas that opened in 1998 and closed in September 2008. At least two museum exhibits of props travel the world. The series has its own full-fledged constructed language, Klingon. Several parodies have been made of Star Trek. In addition, viewers have produced several fan productions. As of July 2016, the franchise had generated $10 billion in revenue, making Star Trek one of the highest-grossing media franchises of all time.[4]
+ Star Trek is noted for its cultural influence beyond works of science fiction.[5] The franchise is also noted for its progressive civil rights stances.[6] The Original Series included one of television's first multiracial casts.
+
+ As early as 1964, Gene Roddenberry drafted a proposal for the science fiction series that would become Star Trek. Although he publicly marketed it as a Western in outer space—a so-called "Wagon Train to the Stars"—he privately told friends that he was modeling it on Jonathan Swift's Gulliver's Travels, intending each episode to act on two levels: as a suspenseful adventure story and as a morality tale.[7][8][9][10]
+
+ Most Star Trek stories depict the adventures of humans[b] and aliens who serve in Starfleet, the space-borne humanitarian and peacekeeping armada of the United Federation of Planets. The protagonists have altruistic values, and must apply these ideals to difficult dilemmas.
+
+ Many of the conflicts and political dimensions of Star Trek are allegories of contemporary cultural realities. The Original Series addressed issues of the 1960s, just as later spin-offs have tackled issues of their respective decades.[11] Issues depicted in the various series include war and peace, the value of personal loyalty, authoritarianism, imperialism, class warfare, economics, racism, religion, human rights, sexism, feminism, and the role of technology.[12]:57 Roddenberry stated: "[By creating] a new world with new rules, I could make statements about sex, religion, Vietnam, politics, and intercontinental missiles. Indeed, we did make them on Star Trek: we were sending messages and fortunately they all got by the network."[12]:79 "If you talked about purple people on a far off planet, they (the television network) never really caught on. They were more concerned about cleavage. They actually would send a censor down to the set to measure a woman's cleavage to make sure too much of her breast wasn't showing."[13]
+
+ Roddenberry intended the show to have a progressive political agenda reflective of the emerging counter-culture of the youth movement, though he was not fully forthcoming to the networks about this. He wanted Star Trek to show what humanity might develop into, if it would learn from the lessons of the past, most specifically by ending violence. An extreme example is the alien species, the Vulcans, who had a violent past but learned to control their emotions. Roddenberry also gave Star Trek an anti-war message and depicted the United Federation of Planets as an ideal, optimistic version of the United Nations.[14] His efforts were opposed by the network because of concerns over marketability, e.g., they opposed Roddenberry's insistence that Enterprise have a racially diverse crew.[15]
+
+ The central trio of Kirk, Spock, and McCoy from the Original Series was modeled on classical mythological storytelling.[16]
+
+ There is a mythological component [to pop culture], especially with science fiction. It's people looking for answers – and science fiction offers to explain the inexplicable, the same as religion tends to do... If we accept the premise that it has a mythological element, then all the stuff about going out into space and meeting new life – trying to explain it and put a human element to it – it's a hopeful vision. All these things offer hope and imaginative solutions for the future.
+
+ In early 1964, Roddenberry presented a brief treatment for a television series to Desilu Productions, calling it "a Wagon Train to the stars."[18] Desilu worked with Roddenberry to develop the treatment into a script, which was then pitched to NBC.[19]
+
+ NBC paid to make a pilot, "The Cage", starring Jeffrey Hunter as Enterprise Captain Christopher Pike. NBC rejected The Cage, but the executives were still impressed with the concept, and made the unusual decision to commission a second pilot: "Where No Man Has Gone Before".[19]
+
+ While the show initially enjoyed high ratings, the average rating of the show at the end of its first season dropped to 52nd out of 94 programs. Unhappy with the show's ratings, NBC threatened to cancel the show during its second season.[20] The show's fan base, led by Bjo Trimble, conducted an unprecedented letter-writing campaign, petitioning the network to keep the show on the air.[20][21] NBC renewed the show, but moved it from primetime to the "Friday night death slot", and substantially reduced its budget.[22] In protest, Roddenberry resigned as producer and reduced his direct involvement in Star Trek, which led to Fred Freiberger becoming producer for the show's third and final season.[c] Despite another letter-writing campaign, NBC canceled the series after three seasons and 79 episodes.[19]
+
+ After the original series was canceled, Desilu, which by then had been renamed Paramount Television, licensed the broadcast syndication rights to help recoup the production losses. Reruns began in late 1969 and by the late 1970s the series aired in over 150 domestic and 60 international markets. This helped Star Trek develop a cult following greater than its popularity during its original run.[23]
+
+ One sign of the series' growing popularity was the first Star Trek convention which occurred on January 21–23, 1972 in New York City. Although the original estimate of attendees was only a few hundred, several thousand fans turned up. Star Trek fans continue to attend similar conventions worldwide.[24]
+
+ The series' newfound success led to the idea of reviving the franchise.[25] Filmation with Paramount Television produced the first post-original-series show, Star Trek: The Animated Series. It ran on NBC for 22 half-hour episodes over two seasons on Saturday mornings from 1973 to 1974.[26]:208 Although short-lived, as was typical for animated productions in that time slot during that period, the series garnered the franchise's only "Best Series" Emmy Award; the franchise's later Emmy wins were technical awards. Paramount Pictures and Roddenberry began developing a new series, Star Trek: Phase II, in May 1975 in response to the franchise's newfound popularity. Work on the series ended when the proposed Paramount Television Service folded.
+
+ Following the success of the science fiction movies Star Wars[d] and Close Encounters of the Third Kind, Paramount adapted the planned pilot episode of Phase II into the feature film Star Trek: The Motion Picture. The film opened in North America on December 7, 1979, with mixed reviews from critics. The film earned $139 million worldwide, below expectations but enough for Paramount to create a sequel. The studio forced Roddenberry to relinquish creative control of future sequels.
+
+ The success of the sequel, Star Trek II: The Wrath of Khan, reversed the fortunes of the franchise. While the sequel grossed less than the first movie, The Wrath of Khan's lower production costs made it net more profit. Paramount produced six Star Trek feature films between 1979 and 1991.
+
+ In response to the popularity of Star Trek feature films, the franchise returned to television with Star Trek: The Next Generation in 1987. Paramount chose to distribute it as a first-run syndication show rather than a network show.[9]:545
+
+ Following Star Trek: The Motion Picture, Roddenberry's role was changed from producer to creative consultant with minimal input to the films while being heavily involved with the creation of The Next Generation. Roddenberry died on October 24, 1991, giving executive producer Rick Berman control of the franchise.[12]:268[9]:591–593 Star Trek had become known to those within Paramount as "the franchise", because of its great success and recurring role as a tent pole for the studio when other projects failed.[27] The Next Generation had the highest ratings of any Star Trek series and became the most syndicated show during the last years of its original seven-season run.[28] In response to the Next Generation's success, Paramount released a spin-off series Deep Space Nine in 1993. While never as popular as the Next Generation, the series had sufficient ratings for it to last seven seasons.
+
+ In January 1995, a few months after the Next Generation ended, Paramount released a fourth television series, Voyager. Star Trek saturation reached a peak in the mid-1990s with Deep Space Nine and Voyager airing concurrently and three of the four Next Generation-based feature films released in 1994, 1996, and 1998. By 1998, Star Trek was Paramount's most important property; the enormous profits of "the franchise" funded much of the rest of the studio's operations.[29] Voyager became the flagship show of the new United Paramount Network (UPN) and thus the first major network Star Trek series since the original.[30]
+
+ After Voyager ended, UPN produced Enterprise, a prequel series. Enterprise did not enjoy the high ratings of its predecessors and UPN threatened to cancel it after the series' third season. Fans launched a campaign reminiscent of the one that saved the third season of the Original Series. Paramount renewed Enterprise for a fourth season, but moved it to the Friday night death slot.[31] Like the Original Series, Enterprise ratings dropped during this time slot, and UPN cancelled Enterprise at the end of its fourth season. Enterprise aired its final episode on May 13, 2005.[32] A fan group, "Save Enterprise", attempted to save the series and tried to raise $30 million to privately finance a fifth season of Enterprise.[33] Though the effort garnered considerable press, the fan drive failed to save the series. The cancellation of Enterprise ended an eighteen-year continuous production run of Star Trek programming on television. The poor box office performance in 2002 of the film Nemesis cast an uncertain light upon the future of the franchise. Paramount relieved Berman, the franchise producer, of control of Star Trek.
+
+ In 2005, Paramount's parent company Viacom split into two companies: the CBS Corporation, owner of CBS Television Studios, and Viacom, owner of Paramount Pictures. CBS owned the Star Trek brand, while Paramount owned the film library and would continue the film franchise. Paramount was the first of the two to try to revive the franchise, hiring a new creative team in 2007. Writers Roberto Orci and Alex Kurtzman and producer J. J. Abrams had the freedom to reinvent the feel of the franchise.
+
+ The team created the franchise's eleventh film, Star Trek, releasing it in May 2009. The film featured a new cast portraying the crew of the original show. Star Trek was a prequel of the original series set in an alternate timeline, later named the Kelvin Timeline. This gave the film and sequels freedom from the need to conform to the franchise's canonical timeline. The eleventh Star Trek film's marketing campaign targeted non-fans, even stating in the film's advertisements that "this is not your father's Star Trek".[34] It also would not interfere with CBS's franchise.
+
+ The film earned considerable critical and financial success, grossing (in inflation-adjusted dollars) more box office sales than any previous Star Trek film.[35] The plaudits include the franchise's first Academy Award (for makeup). The film's major cast members are contracted for two sequels.[36] Paramount's sequel to the 2009 film, Star Trek Into Darkness, premiered in Sydney, Australia, on April 23, 2013, but the film did not release in the United States until May 17, 2013.[37] While the film was not as successful in the North American box office as its predecessor, internationally, in terms of box office receipts, Into Darkness was the most successful of the franchise.[38] The thirteenth film, Star Trek Beyond, was released on July 22, 2016.[39] The film had many pre-production problems and its script went through several rewrites. While receiving positive reviews, Star Trek Beyond disappointed in the box office.[40]
+
+ CBS turned down several proposals in the mid-2000s to restart the franchise. These included pitches from film director Bryan Singer, Babylon 5 creator J. Michael Straczynski, and Trek actors Jonathan Frakes and William Shatner.[41][42][43] The company also turned down an animated web series.[44]
+
+ Despite the franchise's absence from network television, the Star Trek film library would become highly accessible to the average viewer due to the rise of streaming services such as Netflix and Amazon Prime Video. To capitalize on this trend, CBS brought the franchise back to the small screen with the series Star Trek: Discovery to help launch and draw subscribers to its streaming service CBS All Access.[45] The first season premiered on September 24, 2017 and a second season premiered in January 2019.[46] A third Discovery season was announced on February 27, 2019.[47] While Discovery is shown in the United States exclusively on CBS All Access, Netflix, in exchange for funding the production costs of the show, owns the international screening rights for the show.[48]
+
+ A second All Access series, Star Trek: Picard, features Patrick Stewart reprising the show's namesake character. Picard premiered on January 23, 2020. Unlike Discovery, Amazon Prime Video will stream Picard internationally.[49] CBS has also released two seasons of Star Trek: Short Treks, a series of standalone mini-episodes which air between Discovery and Picard seasons. An additional streaming series following the crew of the Enterprise under the command of Captain Pike featured in Discovery's second season, Star Trek: Strange New Worlds, was announced on May 15, 2020.[1][2]
+
+ Additional All Access series are under development including the Star Trek: Lower Decks adult animated series, and a show centered around the Discovery character Philippa Georgiou. CBS's goal is to have new Star Trek content year-round on All Access.[50][51][52]
+
+ Eight television series and one short-form companion series make up the bulk of the Star Trek mythos: Original Series, Animated Series, Next Generation, Deep Space Nine, Voyager, Enterprise, Discovery, Short Treks and Picard. All the series in total amount to 774 episodes across 35 seasons of television.[e]
+
+ Star Trek: The Original Series, frequently abbreviated as TOS,[f] debuted on NBC on September 8, 1966.[53] The show tells the tale of the crew of the starship USS Enterprise and its five-year mission "to boldly go where no man has gone before". During the series' initial run, it was nominated for the Hugo Award for Best Dramatic Presentation multiple times, and won twice.[26]:231 Cast included:
+
+ NBC canceled the show after three seasons; the last original episode aired on June 3, 1969.[54] However, a petition to save the show, signed near the end of the second season by many Caltech students, along with the show's multiple Hugo nominations, indicates that despite low Nielsen ratings it was highly popular with science fiction fans and engineering students.[55] The series later became popular in reruns and found a cult following.[53]
+
+ Star Trek: The Animated Series, produced by Filmation, ran for two seasons from 1973 to 1974. Most of the original cast performed the voices of their characters from the Original Series, and some of the writers who worked on the Original Series returned. While the animated format allowed the producers to create more exotic alien landscapes and life forms, animation errors and liberal reuse of shots and musical cues have tarnished the series' reputation.[56] Gene Roddenberry often spoke of it as non-canon.[57]:232 The cast included:
+
+ The Animated Series won Star Trek's first Emmy Award on May 15, 1975.[58] The series briefly returned to television in the mid-1980s on the children's cable network Nickelodeon, and again on Sci-Fi Channel in the mid-90s. The complete series was released on LaserDisc during the 1980s.[59] The complete series was first released in the U.S. on eleven volumes of VHS tapes in 1989. All 22 episodes were released on DVD in 2006.
+
+ Star Trek: The Next Generation, frequently abbreviated as TNG, takes place about a century after the Original Series (2364–2370). It features a new starship, Enterprise (NCC-1701-D), and a new crew:
+
+ The series premiered on September 28, 1987, and ran for seven seasons. It had the highest ratings of any of the Star Trek series and became the highest rated syndicated show near the end of its run, allowing it to act as a springboard for other series. Many relationships and races introduced in the Next Generation became the basis for episodes in Deep Space Nine and Voyager.[28] The series earned several Emmy awards and nominations—including Best Dramatic Series for its final season—two Hugo Awards, and a Peabody Award for Outstanding Television Programming for one episode.[60]
+
+ Star Trek: Deep Space Nine, frequently abbreviated as DS9, takes place during the last years and immediately after the Next Generation (2369–2375). It debuted the week of January 3, 1993, and ran for seven seasons. Unlike the other Star Trek series, Deep Space Nine was set primarily on a space station of the same name rather than aboard a starship. The cast included:
+
+ The show begins after the brutal Cardassian occupation of the planet Bajor. The liberated Bajoran people ask the United Federation of Planets to help run a space station near Bajor. After the Federation takes control of the station, the protagonists of the show discover a uniquely stable wormhole that provides immediate access to the distant Gamma Quadrant, making Bajor and the station a strategically important location.[61] The show chronicles the events of the station's crew, led by Commander Benjamin Sisko (Avery Brooks), and Major Kira Nerys (Nana Visitor).
+
+ Deep Space Nine stands apart from earlier Trek series for its lengthy serialized storytelling, character conflicts, and religious themes: all elements that critics and audiences praised, but which Roddenberry had forbidden while he was a producer of the original series and the Next Generation.[62]
+
+ Star Trek: Voyager ran for seven seasons, airing from January 16, 1995 to May 23, 2001. It features Kate Mulgrew as Captain Kathryn Janeway, the first female commanding officer in a leading role of a Star Trek series.[63] Cast included:
+
+ Voyager takes place at about the same time period as Deep Space Nine and the years following that show's end (2371–2378). The premiere episode has the USS Voyager and its crew pursue a Maquis (Federation rebels) ship. Both ships become stranded in the Delta Quadrant about 70,000 light-years from Earth.[64] Faced with a 75-year voyage to Earth, the crew must learn to work together to overcome challenges on their long and perilous journey home while also seeking ways to shorten the voyage.
+
+ Like Deep Space Nine, early seasons of Voyager feature more conflict between its crew members than seen in later episodes. Such conflict often arose from friction between "by-the-book" Starfleet crew and rebellious Maquis fugitives forced by circumstance to work together. The starship Voyager, isolated from its home, faced new cultures and dilemmas not possible in shows based in the Alpha Quadrant. Later seasons brought in an influx of characters and cultures from prior shows, such as the Borg, Q, the Ferengi, Romulans, Klingons, Cardassians and cast members of the Next Generation.
+
+ Star Trek: Enterprise, originally titled Enterprise, is a prequel to the original Star Trek series. It aired from September 26, 2001 to May 13, 2005 on UPN.[65] Enterprise is set during the 2150s, ninety years after Zefram Cochrane's first warp flight, and approximately ten years before the creation of the Coalition of Planets which became the United Federation of Planets. The show follows the crew of Earth's first Warp-5 capable starship, Enterprise (NX-01). Cast included:
+
+ Initially, Enterprise featured self-contained episodes, much like the Original Series, Next Generation and Voyager. The third season comprised a single narrative arc. The fourth and final season consisted of several three and four episode arcs, which explored the origins of some elements of previous series, and resolved some continuity errors with The Original Series.
+
+ Ratings for Enterprise started strong but declined rapidly. Although critics received the fourth season well, both fans and the cast reviled the series finale, partly because of the episode's focus on the guest appearance of members of the Next Generation cast.[66][67][68] The cancellation of Enterprise ended an 18-year run of new Star Trek series, which began with the Next Generation in 1987.
+
+ Star Trek: Discovery is a direct prequel to the Original Series, set roughly ten years prior.[69] It premiered September 24, 2017 in the United States and Canada on CBS.[46] The series is CBS All Access exclusive in the United States. Netflix distributes the series worldwide, except for Canada.[70]
+
+ The series' primary protagonist is Lt. Commander Michael Burnham, portrayed by Sonequa Martin-Green. This is a departure from previous Star Trek series, whose lead character is traditionally the "captain of the ship". The series opened with a conflict between the United Federation of Planets and the Klingon T'Kuvma, who is attempting to unite the twenty-four Klingon factions called the Great Houses.[71][72]
+
+ Star Trek: Short Treks is a short film anthology companion series initially exploring settings and characters from Discovery. More recent episodes feature the crew of the Enterprise under the command of Christopher Pike.[73]
+
+ Star Trek: Picard is the ninth series in the Star Trek franchise and centers on the character Jean-Luc Picard at the end of the 24th century, 18 years after the events of Star Trek: Nemesis (2002).
+
+ CBS All Access has two animated and two live-action television series that are currently in development.[74]
+
+ Two seasons have been ordered for Lower Decks, an animated adult comedy series created by the Rick and Morty writer Mike McMahan. The series will follow the support crew of "one of Starfleet's least important ships."[75] Nickelodeon has commissioned an animated children's series, to be produced as a joint-venture with CBS,[76] titled Prodigy and set for a premiere in 2021.[77]
+
+ A series titled Strange New Worlds has been announced, starring Ethan Peck, Anson Mount and Rebecca Romijn reprising their Star Trek: Discovery season 2 roles as Spock, Captain Pike and Number One respectively.[78][2] Michelle Yeoh will reprise her role as the mirror universe's Philippa Georgiou of Section 31 from Discovery in a separate still-untitled series.[79][80]
+
+ Paramount Pictures has produced thirteen Star Trek feature films, the most recent being released in July 2016.[81] The first six films continue the adventures of the cast of the Original Series; the seventh film, Generations, was intended as a transition from the original cast to the cast of the Next Generation; the next three films focused entirely on the Next Generation cast.[g]
+
+ The eleventh film and its sequels occur in an alternate timeline with a new cast portraying the Original Series characters. Leonard Nimoy portrayed an elderly Spock in the films, providing a narrative link to what became known as the Prime Timeline. The alternate reality was christened the Kelvin Timeline by Michael and Denise Okuda, in honor of the starship USS Kelvin which was first seen in the 2009 film.[82]
+
+ An R-rated Star Trek film, to be directed by Quentin Tarantino, was announced as in-development in December 2017. In a December 2019 interview with Consequence of Sound, Tarantino indicated he may not direct the film.[83] He later confirmed he would not direct any future Star Trek film in a January 2020 interview with Deadline, effectively ending development.[84] In November 2019, a new, unrelated film was announced as in-development, to be directed by Noah Hawley.[85]
+
+ Star Trek has an on-going tradition of actors returning to reprise their roles in other spin-off series. In some instances, actors have portrayed potential ancestors, descendants, or relatives of characters they originated. Characters have also been recast for later appearances.
+
+ Below is an incomplete list:
+
+ Many licensed products are based on the Star Trek franchise. Merchandising is very lucrative for both studio and actors; by 1986 Nimoy had earned more than $500,000 from royalties.[86] Products include novels, comic books, video games, and other materials, which are generally considered non-canon. Star Trek merchandise generated $4 billion for Paramount by 2002.[87]
+
+ Since 1967, hundreds of original novels, short stories, and television and movie adaptations have been published. The first original Star Trek novel was Mission to Horatius by Mack Reynolds, which was published in hardcover by Whitman Books in 1968.[57]:131
+
+ Among the most recent is the Star Trek Collection of Little Golden Books. Three titles were published by Random House in 2019, a fourth is scheduled for July 2020.
+
+ The first publisher of Star Trek fiction aimed at adult readers was Bantam Books. James Blish wrote adaptations of episodes of the original series in twelve volumes from 1967 to 1977; in 1970, he wrote the first original Star Trek novel published by Bantam, Spock Must Die!.[57]:xi
+
+ Pocket Books published subsequent Star Trek novels. Prolific Star Trek novelists include Peter David, Diane Carey, Keith DeCandido, J.M. Dillard, Diane Duane, Michael Jan Friedman, and Judith and Garfield Reeves-Stevens. Several actors from the television series have also written or co-written books featuring their respective characters: William Shatner, John de Lancie, Andrew J. Robinson, J. G. Hertzler and Armin Shimerman. Voyager producer Jeri Taylor wrote two novels detailing the personal histories of Voyager characters. Screenplay writers David Gerrold, D. C. Fontana, and Melinda Snodgrass have also penned books.[57]:213
+
+ In a 2014 scholarly work, Newton Lee discussed the actualization of Star Trek's holodeck in the future by making extensive use of artificial intelligence and cyborgs.[88]
+
+ Star Trek-based comics have been issued almost continuously since 1967, published by Marvel, DC, Malibu, Wildstorm, and Gold Key, among others. In 2009, Tokyopop produced an anthology of Next Generation-based stories presented in the style of Japanese manga.[89] In 2006, IDW Publishing secured publishing rights to Star Trek comics and issued a prequel to the 2009 film, Star Trek: Countdown.[90] In 2012, IDW published the first volume of Star Trek – The Newspaper Strip, featuring the work of Thomas Warkentin.[91] As of 2020, IDW continues to produce new titles.[92]
+
+ The Star Trek franchise has numerous games in many formats. Beginning in 1967 with a board game based on the original series and continuing through today with online and DVD games, Star Trek games continue to be popular among fans.
+
+ Video games based on the series include Star Trek: Legacy and Star Trek: Conquest. An MMORPG based on Star Trek called Star Trek Online was developed by Cryptic Studios and published by Perfect World. It is set during the Next Generation era, about 30 years after the events of Star Trek: Nemesis.[93] The most recent video game was set in the alternate timeline from Abrams's Star Trek.
+
+ On June 8, 2010, WizKids announced the development of a Star Trek collectible miniatures game using the HeroClix game system.[94]
+
+ Star Trek has led directly or indirectly to the creation of a number of magazines which focus either on science fiction or specifically on Star Trek. Starlog was a magazine which was founded in the 1970s.[57]:13 Initially, its focus was on Star Trek actors, but then it expanded its scope.[57]:80 Star Trek: The Magazine was a magazine published in the U.S. that ceased publication in 2003. Star Trek Magazine, originally published as Star Trek Monthly by Titan Magazines for the United Kingdom market, began in February 1995. The magazine has since expanded to worldwide distribution.
+
+ Other magazines over the years included professional publications as well as magazines published by fans, known as fanzines.
+
+ The Star Trek media franchise is a multibillion-dollar industry, owned by ViacomCBS.[95] Gene Roddenberry sold Star Trek to NBC as a classic adventure drama; he pitched the show as "Wagon Train to the Stars" and as Horatio Hornblower in Space.[16] The opening line, "to boldly go where no man has gone before," was taken almost verbatim from a U.S. White House booklet on space produced after the Sputnik flight in 1957.[96] The central trio of Kirk, Spock, and McCoy was modeled on classical mythological storytelling.[16]
+
+ Star Trek and its spin-offs have proven highly popular in syndication and have been broadcast worldwide.[97] The franchise's cultural impact goes far beyond its longevity and profitability. Star Trek conventions have become popular among its fans, who call themselves "Trekkies" or "Trekkers".[98] An entire subculture has grown up around the franchise, which was documented in the film Trekkies. Star Trek was ranked the most popular cult show by TV Guide.[99] Many fans and scholars regard the Star Wars franchise as its rival in the science fiction genre.[100][101][102]
+
+ The Star Trek franchise inspired the designers of several technologies, including the Palm PDA and the handheld mobile phone.[103][104] Michael Jones, Chief technologist of Google Earth, has cited the tricorder's mapping capability as one inspiration in the development of Keyhole/Google Earth.[105] The Tricorder X Prize, a contest to build a medical tricorder device, was announced in 2012. Ten finalists were selected in 2014, and the winner was to be selected in January 2016; however, no team managed to meet the required criteria. Star Trek also brought teleportation to popular attention with its depiction of "matter-energy transport", with the famously misquoted phrase "Beam me up, Scotty" entering the vernacular.[106] The Star Trek replicator is credited in the scientific literature with inspiring the field of diatom nanotechnology.[107] In 1976, following a letter-writing campaign, NASA named its prototype space shuttle Enterprise, after the fictional starship.[108] Later, the introductory sequence to Star Trek: Enterprise included footage of this shuttle which, along with images of a naval sailing vessel called Enterprise, depicted the advancement of human transportation technology. Additionally, some contend that the Star Trek society resembles communism.[109][110]
153
+
154
+ Beyond Star Trek's fictional innovations, its contributions to television history included a multicultural and multiracial cast. While more common in subsequent years, in the 1960s it was controversial to feature an Enterprise crew that included a Japanese helmsman, a Russian navigator, a black female communications officer, and a human–Vulcan first officer. Captain Kirk's and Lt. Uhura's kiss, in the episode "Plato's Stepchildren", was also daring, and is often miscited as American television's first scripted interracial kiss, even though several other interracial kisses predated it. Nichelle Nichols, who played the communications officer, said that the day after she told Roddenberry of her plan to leave the series, while she was attending a NAACP dinner party, she was told a big fan wanted to meet her:
155
+
156
+ I thought it was a Trekkie, and so I said, 'Sure.' I looked across the room, and there was Dr. Martin Luther King walking towards me with this big grin on his face. He reached out to me and said, 'Yes, Ms. Nichols, I am your greatest fan.' He said that Star Trek was the only show that he, and his wife Coretta, would allow their three little children to stay up and watch. [She told King about her plans to leave the series.] I never got to tell him why, because he said, 'You can't. You're part of history.'
157
+
158
+ Computer engineer and entrepreneur Steve Wozniak credited watching Star Trek and attending Star Trek conventions in his youth as a source of inspiration for co-founding Apple Inc. in 1976. Apple later became the world's largest information technology company by revenue and the world's third-largest mobile phone manufacturer.[112]
159
+
160
+ Early parodies of Star Trek included a famous sketch on Saturday Night Live titled "The Last Voyage of the Starship Enterprise", with John Belushi as Kirk, Chevy Chase as Spock and Dan Aykroyd as McCoy.[113] In the 1980s, Saturday Night Live did a sketch with William Shatner reprising his Captain Kirk role in The Restaurant Enterprise, preceded by a sketch in which he played himself at a Trek convention angrily telling fans to "Get a Life", a phrase that has become part of Trek folklore.[114] In Living Color continued the tradition in a sketch where Captain Kirk is played by fellow Canadian Jim Carrey.[115]
161
+
162
+ A feature-length film that indirectly parodies Star Trek is Galaxy Quest. This film is based on the premise that aliens monitoring the broadcast of an Earth-based television series called Galaxy Quest, modeled heavily on Star Trek, believe that what they are seeing is real.[116] Many Star Trek actors have been quoted as saying that Galaxy Quest was a brilliant parody.[117][118]
163
+
164
+ Star Trek has been blended with Gilbert and Sullivan at least twice. The North Toronto Players presented a Star Trek adaptation of Gilbert & Sullivan titled H.M.S. Starship Pinafore: The Next Generation in 1991, and an adaptation by Jon Mullich of Gilbert and Sullivan's H.M.S. Pinafore that sets the operetta in the world of Star Trek has played in Los Angeles, where it was attended by series luminaries Nichelle Nichols,[citation needed] D.C. Fontana, and David Gerrold.[119] A similar blend of Gilbert and Sullivan and Star Trek was presented as a benefit concert in San Francisco by the Lamplighters in 2009. The show, titled Star Drek: The Generation After That, presented an original story set to Gilbert and Sullivan melodies.[120]
165
+
166
+ The Simpsons and Futurama television series and others have had many individual episodes parodying Star Trek or containing Trek allusions.[121] Black Mirror's Star Trek parody episode, "USS Callister", won four Emmy Awards, including Outstanding Television Movie and Outstanding Writing for a Limited Series, Movie, or Drama, and was nominated for three more.[122]
167
+
168
+ In August 2010, employees of the Internal Revenue Service created a Star Trek-themed training video for a conference. Revealed to the public in 2013, the spoof, along with parodies of other media franchises, was cited as an example of the misuse of taxpayer funds in a congressional investigation.[123][124]
169
+
170
+ Star Trek has been parodied in several non-English movies, including the German Traumschiff Surprise – Periode 1, which features a gay version of the Original Series bridge crew, and a Turkish film that spoofs that same series' episode "The Man Trap" in one of the series of films based on the character Turist Ömer.[citation needed] An entire series of film and novel parodies titled Star Wreck has been created in Finnish.[125]
171
+
172
+ The Orville is a comedy-drama science fiction television series created by Seth MacFarlane that premiered on September 10, 2017, on Fox. MacFarlane, a longtime fan of the franchise who previously guest-starred in an episode of Enterprise, created the series with a look and feel similar to that of the Star Trek series.[126] MacFarlane has made references to Star Trek on his animated series Family Guy, where the Next Generation cast guest-starred in the episode "Not All Dogs Go to Heaven".
173
+
174
+ Until 2016, Paramount Pictures and CBS permitted fans to produce films and episode-like clips. Several veteran Star Trek actors and writers participated in many of these productions. Several producers turned to crowdfunding platforms, such as Kickstarter, to help with production and other costs.[127]
175
+
176
+ Popular productions include New Voyages (2004–2016) and Star Trek Continues (2013–2017). Additional productions include Of Gods and Men (2008), originally released as a three-part web series, and Prelude to Axanar.[128] Audio dramatizations such as The Continuing Mission (2007–2016) have also been published by fans.
177
+
178
+ In 2016, CBS published guidelines which restricted the scope of fan productions, such as limiting the length of episodes or films to fifteen minutes, limiting production budgets to $50,000, and preventing actors and technicians from previous Star Trek productions from participating.[129] A number of highly publicized productions have since been cancelled or put on hold.[130]
179
+
180
+ Star Trek inspired the popularity of slash fiction, a genre of fan-produced, in-universe fiction in which normally platonic, same-sex characters are portrayed as a romantic couple; the most notable examples are Kirk/Spock stories.[131]:799[132]
181
+
182
+ Over the intervening decades, especially with the advent of the Internet, fan fiction has become its own thriving fandom.[133][131]:798
183
+
184
+ Of the various science fiction awards for drama, only the Hugo Award dates back as far as the original series.[i] In 1968, all five nominees for a Hugo Award were individual episodes of Star Trek, as were three of the five nominees in 1967.[j][26]:231 The only Star Trek series not to receive Hugo Award nominations are the Animated Series and Voyager, though the Original Series and Next Generation never won in any nominated category. No Star Trek feature film has ever won a Hugo Award. In 2008, the fan-made Star Trek: New Voyages episode "World Enough and Time" was nominated for the Hugo Award for Best Short Drama.
185
+
186
+ Star Trek (2009) won the Academy Award for Best Makeup and Hairstyling, the franchise's first Academy Award. In 2016, the franchise was listed in the Guinness World Records as the most successful science fiction television franchise in the world.[134]
187
+
188
+ In 1996, TV Guide published the following as the ten best Star Trek episodes for the franchise's 30th anniversary: [135]
189
+
190
+ At the 50th Anniversary Star Trek Las Vegas (STLV) convention, in 2016, the following were voted by fans as the best episodes: [136]
191
+
192
+ Additionally, fans voted the following as the worst episodes: [137]
193
+
194
+ Star Trek began as a joint production of Norway Productions, owned by Roddenberry, and Desilu, owned by Desi Arnaz. The profit-sharing agreement for the series split proceeds among Norway, Desilu (later Paramount Television), William Shatner's production company, and the broadcast network, NBC. However, Star Trek lost money during its initial broadcast, and neither NBC nor Paramount expected to recoup the losses by selling the series into syndication. With NBC's approval, Paramount offered its share of the series to Roddenberry sometime in 1970. However, Roddenberry could not raise the $150,000 (equivalent to $987,532 in 2019) asked by the studio.[19] Paramount would go on to license the series to television syndicators worldwide. NBC's remaining broadcast and distribution rights eventually returned to Paramount and Roddenberry sometime before 1986, which coincided with the development of what would become The Next Generation.
195
+
196
+ As for Desilu, the studio was acquired by Gulf+Western and reorganized as the television production division of Paramount Pictures, which Gulf+Western had acquired in 1966. Gulf+Western sold its remaining industrial assets in 1989, renaming itself Paramount Communications. Sometime before 1986, Sumner Redstone had acquired a controlling stake in Viacom via his family's theater chain, National Amusements. Viacom had been established in 1952 as a division of CBS responsible for syndicating the network's in-house productions, originally called CBS Films. In 1994, Viacom and Paramount Communications merged.[19] Viacom then merged with its former parent, CBS Corporation, in 1999. National Amusements and the Redstone family increased their stake in the combined company between 1999 and 2005.
197
+
198
+ In 2005, the Redstone family reorganized Viacom, spinning off the conglomerate's assets as two independent groups: the new Viacom and the new CBS Corporation. National Amusements and the Redstone family retained approximately 80% ownership of both CBS and Viacom.[138] Star Trek was split between the two entities. The terms of this split were not known. However, CBS held all copyrights, marks, production assets, and film negatives to all Star Trek television series. CBS also retained the rights to all likenesses, characters, names, settings, and stories, as well as the right to license Star Trek and its spin-offs to merchandisers, publishers, and the like.[139] The rights were exercised via the new CBS Television Studios, which was carved out of the former Paramount Television.
199
+
200
+ Viacom, which housed Paramount Pictures, retained the feature film library, and exclusive rights to produce new feature films for a limited time.[citation needed] Viacom also retained home video distribution rights for all television series produced before 2005.[19][140] However, home video editions of the various television series released after the split, as well as streaming video versions of episodes available worldwide, carried variants of the new CBS Television Studios livery in addition to the original Paramount Television Studios livery. It was unclear who retained the synchronization or streaming rights.[citation needed]
201
+
202
+ Rights and distribution issues, and the fraught relationship between the leadership at CBS, Viacom, and the National Amusements board of directors, resulted in a number of delayed or cancelled Star Trek productions between 2005 and 2019.[141] Additionally, the development and release of the new Star Trek film in 2009 was met with resistance by executives at CBS, as were Into Darkness (2013) and Beyond (2016), which affected merchandising, tie-in media, and promotion for the new films.[142] During this period, both CBS and Viacom continued to list Star Trek as an important asset in their prospectuses to investors and in corporate filings made to the Securities and Exchange Commission.
203
+
204
+ The competitive nature of the entertainment industry led to negotiations between Viacom and CBS on a potential merger, with CBS as the acquiring party, which would realign the stakeholders of the franchise under one corporate umbrella.[143] After several failed attempts at a merger between 2009 and 2014, negotiations restarted between CBS and Viacom in 2019, led by Shari Redstone, chairman of National Amusements, and Joe Ianniello, then CEO of Viacom.[144] On August 13, 2019, CBS and Viacom boards of directors reached an agreement to reunite the conglomerates as a single entity called ViacomCBS.[145] National Amusements' board of directors approved the merger on October 28, 2019, which was finalized on December 4.[146][147][148]
en/5887.html.txt ADDED
@@ -0,0 +1,269 @@
1
+ Coordinates: 39°N 111°W
2
+
3
+ Utah (/ˈjuːtɑː/ YOO-tah, /ˈjuːtɔː/ (listen) YOO-taw) is a state in the western United States. It is bordered by Colorado to the east, Wyoming to the northeast, Idaho to the north, Arizona to the south, and Nevada to the west. It also touches a corner of New Mexico in the southeast. Of the fifty U.S. states, Utah is the 13th-largest by area, and with a population over three million, the 30th-most-populous and 11th-least-densely populated. Urban development is mostly concentrated in two areas: the Wasatch Front in the north-central part of the state, which is home to roughly two-thirds of the population, and Washington County in the south, with more than 170,000 residents.[8] Most of the western half of Utah lies in the Great Basin.
4
+
5
+ The territory of modern Utah has been inhabited by various indigenous groups for thousands of years, including the ancient Puebloans, the Navajo, and the Ute. The Spanish were the first Europeans to arrive in the mid-16th century, though the region's difficult geography and climate made it a peripheral part of New Spain and later Mexico. Even while it was part of Mexico, many of Utah's earliest settlers were American, particularly Mormons fleeing marginalization and persecution from the United States. Following the Mexican–American War, it became part of the Utah Territory, which included what is now Colorado and Nevada. Disputes between the dominant Mormon community and the federal government delayed Utah's admission as a state; only after the outlawing of polygamy was it admitted as the 45th state, in 1896.
6
+
7
+ A little more than half of all Utahns are Mormons, the vast majority of whom are members of the Church of Jesus Christ of Latter-day Saints (LDS Church), which has its world headquarters in Salt Lake City.[9] Utah is the only state where most of the population belongs to a single church.[10] The LDS Church greatly influences Utahn culture, politics, and daily life,[11] though since the 1990s the state has become more religiously diverse as well as secular.
8
+
9
+ The state has a highly diversified economy, with major sectors including transportation, education, information technology and research, government services, and mining, and it is a major tourist destination for outdoor recreation. In 2013, the U.S. Census Bureau estimated that Utah had the second-fastest-growing population of any state.[12] St. George was the fastest-growing metropolitan area in the United States from 2000 to 2005.[13] Utah also has the 14th-highest median income and the least income inequality of any U.S. state. A 2012 Gallup national survey found Utah overall to be the "best state to live in the future" based on 13 forward-looking measurements, including various economic, lifestyle, and health-related outlook metrics.[14]
10
+
11
+ The name Utah is said to derive from the name of the Ute tribe, meaning "people of the mountains".[15] However, no such word actually exists in the Utes' language, and the Utes refer to themselves as Noochee. The meaning of Utes as "the mountain people" is attributed to the neighboring Pueblo Indians,[16] specifically from the Apache word Yuttahih, which means "one that is higher up" or "those that are higher up".[15] In Spanish it was pronounced Yuta; subsequently English-speaking people may have adapted the word as 'Utah'.[17]
12
+
13
+ Thousands of years before the arrival of European explorers, the Ancestral Puebloans and the Fremont people lived in what is now known as Utah, some of whom spoke languages of the Uto-Aztecan group. Ancestral Pueblo peoples built their homes through excavations in mountains, and the Fremont people built houses of straw before disappearing from the region around the 15th century.
14
+
15
+ Another group of Native Americans, the Navajo, settled in the region around the 18th century. In the mid-18th century, other Uto-Aztecan tribes, including the Goshute, the Paiute, the Shoshone, and the Ute people, also settled in the region. These five groups were present when the first European explorers arrived.[18][19]
16
+
17
+ The southern Utah region was explored by the Spanish in 1540, led by Francisco Vázquez de Coronado, while looking for the legendary Cíbola. A group led by two Catholic priests—sometimes called the Domínguez–Escalante expedition—left Santa Fe in 1776, hoping to find a route to the coast of California. The expedition traveled as far north as Utah Lake and encountered the native residents. The Spanish made further explorations in the region but were not interested in colonizing the area because of its desert nature. In 1821, the year Mexico achieved its independence from Spain, the region became known as part of its territory of Alta California.
18
+
19
+ European trappers and fur traders explored some areas of Utah in the early 19th century from Canada and the United States. The city of Provo, Utah was named for one, Étienne Provost, who visited the area in 1825. The city of Ogden, Utah was named after Peter Skene Ogden, a Canadian explorer who traded furs in the Weber Valley.
20
+
21
+ In late 1824, Jim Bridger became the first known English-speaking person to sight the Great Salt Lake. Due to the high salinity of its waters, he thought he had found the Pacific Ocean; he subsequently learned this body of water was a giant salt lake. After the discovery of the lake, hundreds of American and Canadian traders and trappers established trading posts in the region. In the 1830s, thousands of migrants traveling from the Eastern United States to the American West began to make stops in the region of the Great Salt Lake, then known as Lake Youta.[citation needed]
22
+
23
+ Following the death of Joseph Smith in 1844, Brigham Young, as president of the Quorum of the Twelve, became the effective leader of the LDS Church in Nauvoo, Illinois.[20] To address the growing conflicts between his people and their neighbors, Young agreed with Illinois Governor Thomas Ford in October 1845 that the Mormons would leave by the following year.[21]
24
+
25
+ Young and the first band of Mormon pioneers reached the Salt Lake Valley on July 24, 1847. Over the next 22 years, more than 70,000 pioneers crossed the plains and settled in Utah.[22] For the first few years, Brigham Young and the thousands of early settlers of Salt Lake City struggled to survive. The arid desert land was deemed desirable by the Mormons as a place where they could practice their religion without harassment.
26
+
27
+ Settlers buried thirty-six Native Americans in one grave after an outbreak of measles occurred during the winter of 1847.[23]
28
+
29
+ The first group of settlers brought African slaves with them, making Utah the only place in the western United States to have African slavery.[24] Three slaves, Green Flake, Hark Lay, and Oscar Crosby, came west with the first group of settlers in 1847.[25] The settlers also began to purchase Indian slaves in the well-established Indian slave trade,[26] as well as enslaving Indian prisoners of war.[27][28]
30
+
31
+ Utah was Mexican territory when the first pioneers arrived in 1847. Early in the Mexican–American War in late 1846, the United States had taken control of New Mexico and California. The entire Southwest became U.S. territory upon the signing of the Treaty of Guadalupe Hidalgo, February 2, 1848. The treaty was ratified by the United States Senate on March 11. Learning that California and New Mexico were applying for statehood, the settlers of the Utah area (originally having planned to petition for territorial status) applied for statehood with an ambitious plan for a State of Deseret.
32
+
33
+ The Mormon settlements provided pioneers for other settlements in the West. Salt Lake City became the hub of a "far-flung commonwealth"[29] of Mormon settlements. With new church converts coming from the East and around the world, Church leaders often assigned groups of church members as missionaries to establish other settlements throughout the West. They developed irrigation to support fairly large pioneer populations along Utah's Wasatch front (Salt Lake City, Bountiful and Weber Valley, and Provo and Utah Valley).[30] Throughout the remainder of the 19th century, Mormon pioneers established hundreds of other settlements in Utah, Idaho, Nevada, Arizona, Wyoming, California, Canada, and Mexico—including in Las Vegas, Nevada; Franklin, Idaho (the first European settlement in Idaho); San Bernardino, California; Mesa, Arizona; Star Valley, Wyoming; and Carson Valley, Nevada.
34
+
35
+ Prominent settlements in Utah included St. George, Logan, and Manti (where settlers completed the first three temples in Utah, each started after but finished many years before the larger and better known temple built in Salt Lake City was completed in 1893), as well as Parowan, Cedar City, Bluff, Moab, Vernal, Fillmore (which served as the territorial capital between 1850 and 1856), Nephi, Levan, Spanish Fork, Springville, Provo Bench (now Orem), Pleasant Grove, American Fork, Lehi, Sandy, Murray, Jordan, Centerville, Farmington, Huntsville, Kaysville, Grantsville, Tooele, Roy, Brigham City, and many other smaller towns and settlements. Young had an expansionist's view of the territory that he and the Mormon pioneers were settling, calling it Deseret—which according to the Book of Mormon was an ancient word for "honeybee". This is symbolized by the beehive on the Utah flag, and the state's motto, "Industry".[31]
36
+
37
+ The Utah Territory was much smaller than the proposed state of Deseret, but it still contained all of the present states of Nevada and Utah as well as pieces of modern Wyoming and Colorado.[32] It was created with the Compromise of 1850, and Fillmore, named after President Millard Fillmore, was designated the capital. The territory was given the name Utah after the Ute tribe of Native Americans. Salt Lake City replaced Fillmore as the territorial capital in 1856.
38
+
39
+ By 1850, there were around 100 blacks, the majority of whom were slaves.[33] In Salt Lake County, 26 slaves were counted.[34] In 1852, the territorial legislature passed the Act in Relation to Service and the Act for the relief of Indian Slaves and Prisoners formally legalizing slavery in the territory. Slavery was abolished in the territory during the Civil War.
40
+
41
+ In 1850, Salt Lake City sent out a force known as the Nauvoo Legion and engaged the Timpanogos in the Battle at Fort Utah.[35]:71
42
+
43
+ Disputes between the Mormon inhabitants and the U.S. government intensified due to the practice of plural marriage, or polygamy, among members of The Church of Jesus Christ of Latter-day Saints. The Mormons were still pushing for the establishment of a State of Deseret with the new borders of the Utah Territory. Most, if not all, of the members of the U.S. government opposed the polygamous practices of the Mormons.
44
+
45
+ Members of the LDS Church were viewed as un-American and rebellious when news of their polygamous practices spread. In 1857, particularly heinous accusations of abdication of government and general immorality were leveled by former associate justice William W. Drummond, among others. The detailed reports of life in Utah caused the administration of James Buchanan to send a secret military "expedition" to Utah. Once the supposed rebellion was quelled, Alfred Cumming was to take the place of Brigham Young as territorial governor. The resulting conflict is known as the Utah War, nicknamed "Buchanan's Blunder" by the Mormon leaders.
46
+
47
+ In September 1857, about 120 American settlers of the Baker–Fancher wagon train, en route to California from Arkansas, were murdered by Utah Territorial Militia and some Paiute Native Americans in the Mountain Meadows massacre.[36]
48
+
49
+ Before troops led by Albert Sidney Johnston entered the territory, Brigham Young ordered all residents of Salt Lake City to evacuate southward to Utah Valley and sent out the Nauvoo Legion to delay the government's advance. Although wagons and supplies were burned, eventually the troops arrived in 1858, and Young surrendered official control to Cumming, although most subsequent commentators claim that Young retained true power in the territory. A steady stream of governors appointed by the president quit the position, often citing the traditions of their supposed territorial government. By agreement with Young, Johnston established Camp Floyd, 40 miles (60 km) away from Salt Lake City, to the southwest.
50
+
51
+ Salt Lake City was the last link of the First Transcontinental Telegraph, completed in October 1861. Brigham Young was among the first to send a message, along with Abraham Lincoln and other officials.
52
+
53
+ Because of the American Civil War, federal troops were pulled out of Utah Territory in 1861. This was a boon to the local economy as the army sold everything in camp for pennies on the dollar before marching back east to join the war. The territory was then left in LDS hands until Patrick E. Connor arrived with a regiment of California volunteers in 1862. Connor established Fort Douglas just 3 miles (4.8 km) east of Salt Lake City and encouraged his people to discover mineral deposits to bring more non-Mormons into the territory. Minerals were discovered in Tooele County and miners began to flock to the territory.
54
+
55
+ Beginning in 1865, Utah's Black Hawk War developed into the deadliest conflict in the territory's history. Chief Antonga Black Hawk died in 1870, but fights continued to break out until additional federal troops were sent in to suppress the Ghost Dance of 1872. The war is unique among Indian Wars because it was a three-way conflict, with mounted Timpanogos Utes led by Antonga Black Hawk fighting federal and LDS authorities.
56
+
57
+ On May 10, 1869, the First Transcontinental Railroad was completed at Promontory Summit, north of the Great Salt Lake.[37] The railroad brought increasing numbers of people into the territory and several influential businesspeople made fortunes there.
58
+
59
+ During the 1870s and 1880s, laws were passed to punish polygamists, due in part to stories from Utah. Notable among these were the accounts of Ann Eliza Young (the tenth wife to divorce Brigham Young, a women's advocate, national lecturer, and author of Wife No. 19, or My Life of Bondage) and of Mr. and Mrs. Fanny Stenhouse, authors of The Rocky Mountain Saints (T. B. H. Stenhouse, 1873) and Tell It All: My Life in Mormonism (Fanny Stenhouse, 1875). Both Ann Eliza and Fanny testify to the happiness of the very early Church members before polygamy. They independently published their books in 1875. These books and the lectures of Ann Eliza Young have been credited by newspapers throughout the United States with the passage of anti-polygamy laws by the United States Congress, as recorded in "The Ann Eliza Young Vindicator", a pamphlet which detailed Ms. Young's travels and warm reception throughout her lecture tour.
60
+
61
+ T. B. H. Stenhouse, a former Utah Mormon polygamist, Mormon missionary for thirteen years, and Salt Lake City newspaper owner, finally left Utah and wrote The Rocky Mountain Saints. His book gives a firsthand account of life in Utah, both the good and the bad. He left Utah and Mormonism after financial ruin occurred when, according to Stenhouse, Brigham Young sent him to relocate to Ogden, Utah, to take over his thriving pro-Mormon Salt Lake Telegraph newspaper. In addition to these testimonies, The Confessions of John D. Lee, written by John D. Lee (the alleged scapegoat for the Mountain Meadows Massacre), also came out in 1877. The corroborative testimonies coming out of Utah from Mormons and former Mormons influenced Congress and the people of the United States.
62
+
63
+ In the 1890 Manifesto, the LDS Church banned polygamy. When Utah applied for statehood again, it was accepted. One of the conditions for granting Utah statehood was that a ban on polygamy be written into the state constitution. This was a condition required of other western states that were admitted into the Union later. Statehood was officially granted on January 4, 1896.
64
+
65
+ Beginning in the early 20th century, with the establishment of such national parks as Bryce Canyon National Park and Zion National Park, Utah became known for its natural beauty. Southern Utah became a popular filming spot for arid, rugged scenes featured in the popular mid-century western film genre. From such films, most US residents recognize such natural landmarks as Delicate Arch and "the Mittens" of Monument Valley.[38] During the 1950s, 1960s, and 1970s, with the construction of the Interstate highway system, accessibility to the southern scenic areas was made easier.
66
+
67
+ Since the establishment of Alta Ski Area in 1939 and the subsequent development of several ski resorts in the state's mountains, Utah's skiing has become world-renowned. The dry, powdery snow of the Wasatch Range is considered some of the best skiing in the world (the state license plate once claimed "the Greatest Snow on Earth").[39][40] Salt Lake City won the bid for the 2002 Winter Olympic Games, and this served as a great boost to the economy. The ski resorts have increased in popularity, and many of the Olympic venues built along the Wasatch Front continue to be used for sporting events. Preparation for the Olympics spurred the development of the light-rail system in the Salt Lake Valley, known as TRAX, and the re-construction of the freeway system around the city.
68
+
69
+ In 1957, Utah created the Utah State Parks Commission with four parks. Today, Utah State Parks manages 43 parks and several undeveloped areas totaling over 95,000 acres (380 km2) of land and more than 1,000,000 acres (4,000 km2) of water. Utah's state parks are scattered throughout Utah, from Bear Lake State Park at the Utah/Idaho border to Edge of the Cedars State Park Museum deep in the Four Corners region and everywhere in between. Utah State Parks is also home to the state's off-highway vehicle office, state boating office, and trails program.[41]
70
+
71
+ During the late 20th century, the state grew quickly. In the 1970s growth was phenomenal in the suburbs of the Wasatch Front. Sandy was one of the fastest-growing cities in the country at that time. Today, many areas of Utah continue to see boom-time growth. Northern Davis, southern and western Salt Lake, Summit, eastern Tooele, Utah, Wasatch, and Washington counties are all growing very quickly. Management of transportation and urbanization are major issues in politics, as development consumes agricultural land and wilderness areas and transportation is a major reason for poor air quality in Utah.
72
+
73
+ Utah is known for its natural diversity and is home to features ranging from arid deserts with sand dunes to thriving pine forests in mountain valleys. It is a rugged and geographically diverse state at the convergence of three distinct geological regions: the Rocky Mountains, the Great Basin, and the Colorado Plateau.
74
+
75
+ Utah covers an area of 84,899 sq mi (219,890 km2). It is one of the Four Corners states and is bordered by Idaho in the north; Wyoming in the north and east; Colorado in the east; New Mexico at a single point in the southeast; Arizona in the south; and Nevada in the west. Only three U.S. states (Utah, Colorado, and Wyoming) have exclusively latitude and longitude lines as boundaries.
76
+
77
+ One of Utah's defining characteristics is the variety of its terrain. Running down the middle of the state's northern third is the Wasatch Range, which rises to heights of almost 12,000 ft (3,700 m) above sea level. Utah is home to world-renowned ski resorts made popular by light, fluffy snow and winter storms that regularly dump up to three feet of it overnight. In the state's northeastern section, running east to west, are the Uinta Mountains, which rise to heights of over 13,000 feet (4,000 m). The highest point in the state, Kings Peak, at 13,528 feet (4,123 m),[42] lies within the Uinta Mountains.
78
+
79
+ At the western base of the Wasatch Range is the Wasatch Front, a series of valleys and basins that are home to the most populous parts of the state. It stretches approximately from Brigham City at the north end to Nephi at the south end. Approximately 75 percent of the state's population lives in this corridor, and population growth is rapid.
80
+
81
+ Western Utah is mostly arid desert with a basin and range topography. Small mountain ranges and rugged terrain punctuate the landscape. The Bonneville Salt Flats are an exception, being comparatively flat as a result of once forming the bed of ancient Lake Bonneville. Great Salt Lake, Utah Lake, Sevier Lake, and Rush Lake are all remnants of this ancient freshwater lake,[43] which once covered most of the eastern Great Basin. West of the Great Salt Lake, stretching to the Nevada border, lies the arid Great Salt Lake Desert. One exception to this aridity is Snake Valley, which is (relatively) lush due to large springs and wetlands fed from groundwater derived from snow melt in the Snake Range, Deep Creek Range, and other tall mountains to the west of Snake Valley. Great Basin National Park is just over the Nevada state line in the southern Snake Range. One of western Utah's most impressive, but least visited attractions is Notch Peak, the tallest limestone cliff in North America, located west of Delta.
82
+
83
+ Much of the scenic southern and southeastern landscape (specifically the Colorado Plateau region) is sandstone, specifically Kayenta sandstone and Navajo sandstone. The Colorado River and its tributaries wind their way through the sandstone, creating some of the world's most striking and wild terrain (the area around the confluence of the Colorado and Green Rivers was the last to be mapped in the lower 48 United States). Wind and rain have also sculpted the soft sandstone over millions of years. Canyons, gullies, arches, pinnacles, buttes, bluffs, and mesas are a common sight throughout south-central and southeast Utah.
84
+
85
+ This terrain is the central feature of protected state and federal parks such as Arches, Bryce Canyon, Canyonlands, Capitol Reef, and Zion national parks, Cedar Breaks, Grand Staircase-Escalante, Hovenweep, and Natural Bridges national monuments, Glen Canyon National Recreation Area (site of the popular tourist destination, Lake Powell), Dead Horse Point and Goblin Valley state parks, and Monument Valley. The Navajo Nation also extends into southeastern Utah. Southeastern Utah is also punctuated by the remote, but lofty La Sal, Abajo, and Henry mountain ranges.
86
+
87
+ Eastern Utah (its northern quarter) is a high-elevation area covered mostly by plateaus and basins, particularly the Tavaputs Plateau and San Rafael Swell, which remain mostly inaccessible, and the Uinta Basin, where the majority of eastern Utah's population lives. Local economies are dominated by mining, oil shale, oil and natural gas drilling, ranching, and recreation. Much of eastern Utah is part of the Uintah and Ouray Indian Reservation. The most popular destination within northeastern Utah is Dinosaur National Monument near Vernal.
88
+
89
+ Southwestern Utah is the lowest and hottest spot in Utah. It is known as Utah's Dixie because early settlers were able to grow some cotton there. Beaverdam Wash in far southwestern Utah is the lowest point in the state, at 2,000 feet (610 m).[42] The northernmost portion of the Mojave Desert is also located in this area. Dixie is quickly becoming a popular recreational and retirement destination, and the population is growing rapidly. Although the Wasatch Mountains end at Mount Nebo near Nephi, a complex series of mountain ranges extends south from the southern end of the range down the spine of Utah. Just north of Dixie and east of Cedar City is the state's highest ski resort, Brian Head.
90
+
91
+ Like most of the western and southwestern states, the federal government owns much of the land in Utah. Over 70 percent of the land is either BLM land, Utah State Trustland, or U.S. National Forest, U.S. National Park, U.S. National Monument, National Recreation Area or U.S. Wilderness Area.[44] Utah is the only state where every county contains some national forest.[45]
92
+
93
+ Utah features a dry, semi-arid to desert climate,[citation needed] although its many mountains feature a large variety of climates, with the highest points in the Uinta Mountains being above the timberline. The dry weather is a result of the state's location in the rain shadow of the Sierra Nevada in California. The eastern half of the state lies in the rain shadow of the Wasatch Mountains. The primary source of precipitation for the state is the Pacific Ocean, with the state usually lying in the path of large Pacific storms from October to May. In summer, the state, especially southern and eastern Utah, lies in the path of monsoon moisture from the Gulf of California.
94
+
95
+ Most of the lowland areas receive less than 12 inches (305 mm) of precipitation annually, although the I-15 corridor, including the densely populated Wasatch Front, receives approximately 15 inches (381 mm). The Great Salt Lake Desert is the driest area of the state, with less than 5 inches (127 mm). Snowfall is common in all but the far southern valleys. Although St. George receives only about 3 inches (76 mm) per year, Salt Lake City sees about 60 inches (1,524 mm), enhanced by the lake-effect snow from the Great Salt Lake, which increases snowfall totals to the south, southeast, and east of the lake.
96
+
97
+ Some areas of the Wasatch Range in the path of the lake effect receive up to 500 inches (12,700 mm) per year. This microclimate of enhanced snowfall from the Great Salt Lake spans the entire proximity of the lake. The Cottonwood Canyons adjacent to Salt Lake City are located in the right position to receive more precipitation from the lake.[46] The consistently deep powder snow led Utah's ski industry to adopt the slogan "the Greatest Snow on Earth" in the 1980s. In the winter, temperature inversions are a common phenomenon across Utah's low basins and valleys, leading to thick haze and fog that can last for weeks at a time, especially in the Uintah Basin. Although at other times of year its air quality is good, winter inversions give Salt Lake City some of the worst wintertime pollution in the country.
98
+
99
+ Previous studies have indicated a widespread decline in snowpack over Utah, accompanied by a decline in the snow–precipitation ratio, while anecdotal claims have been put forward that measured changes in Utah's snowpack are spurious and do not reflect actual change. A 2012 study[47] found that the proportion of winter (January–March) precipitation falling as snow decreased by nine percent during the last half century, the combined result of a significant increase in rainfall and a minor decrease in snowfall. Meanwhile, observed snow depth across Utah has decreased and is accompanied by consistent decreases in snow cover and surface albedo. Weather systems with the potential to produce precipitation in Utah have decreased in number, with those producing snowfall decreasing at a considerably greater rate.[48]
100
+
101
+ Utah's temperatures are extreme, with cold temperatures in winter due to its elevation, and very hot summers statewide (with the exception of mountain areas and high mountain valleys). Utah is usually protected from major blasts of cold air by mountains lying north and east of the state, although major Arctic blasts can occasionally reach the state. Average January high temperatures range from around 30 °F (−1 °C) in some northern valleys to almost 55 °F (13 °C) in St. George.
102
+
103
+ Temperatures dropping below 0 °F (−18 °C) should be expected on occasion in most areas of the state in most years, although some areas see them often (for example, the town of Randolph averages about fifty days per year with temperatures that low). In July, average highs range from about 85 to 100 °F (29 to 38 °C). However, the low humidity and high elevation typically lead to large temperature variations, resulting in cool nights on most summer days. The record high temperature in Utah was 118 °F (48 °C), recorded south of St. George on July 4, 2007,[49] and the record low was −69 °F (−56 °C), recorded at Peter Sinks in the Bear River Mountains of northern Utah on February 1, 1985.[50] However, the record low for an inhabited location is −49 °F (−45 °C), at Woodruff on December 12, 1932.[51]
104
+
105
+ Utah, like most of the western United States, has few days of thunderstorms. On average there are fewer than 40 days of thunderstorm activity during the year, although these storms can be briefly intense when they do occur. They are most likely to occur during monsoon season from about mid-July through mid-September, especially in southern and eastern Utah. Dry lightning strikes and the general dry weather often spark wildfires in summer, while intense thunderstorms can lead to flash flooding, especially in the rugged terrain of southern Utah. Although spring is the wettest season in northern Utah, late summer is the wettest period for much of the south and east of the state. Tornadoes are uncommon in Utah, with an average of two striking the state yearly, rarely higher than EF1 intensity.[52]
106
+
107
+ One exception of note, however, was the unprecedented F2 Salt Lake City Tornado which moved directly across downtown Salt Lake City on August 11, 1999, killing one person, injuring sixty others, and causing approximately $170 million in damage.[53] The only other reported tornado fatality in Utah's history was a 7-year-old girl who was killed while camping in Summit County on July 6, 1884. The last tornado of above (E)F0 intensity occurred on September 8, 2002, when an F2 tornado hit Manti. On August 11, 1993, an F3 tornado hit the Uinta Mountains north of Duchesne at an elevation of 10,500 feet (3,200 m), causing some damage to a Boy Scouts campsite. This is the strongest tornado ever recorded in Utah.[citation needed]
108
+
109
+ Utah is home to more than 600 vertebrate animals[54] as well as numerous invertebrates and insects.[55]
110
+
111
+ Mammals are found in every area of Utah. Non-predatory larger mammals include the wood bison, elk, moose, mountain goat, mule deer, pronghorn, and multiple types of bighorn sheep. Non-predatory small mammals include the muskrat and nutria. Predatory mammals include the brown and black bear, cougar, Canada lynx, bobcat, fox (gray, red, and kit), coyote, badger, gray wolf, black-footed ferret, mink, stoat, long-tailed weasel, raccoon, and otter.
112
+
113
+ There are many different insects found in Utah. One of the rarest is the Coral Pink Sand Dunes tiger beetle, found only in Coral Pink Sand Dunes State Park, near Kanab.[56] It was proposed in 2012 for listing as a threatened species,[57] but the proposal was not accepted.[58]
114
+
115
+ In February 2009, Africanized honeybees were found in southern Utah.[59][60] The bees had spread into eight counties in Utah, as far north as Grand and Emery counties by May 2017.[61]
116
+
117
+ The white-lined sphinx moth is common to most of the United States, but there have been reported outbreaks of large groups of their larvae damaging tomato, grape and garden crops in Utah.[62]
118
+
119
+ Several thousand plants are native to Utah.[63]
120
+
121
+ The United States Census Bureau estimates that the population of Utah was 3,205,958 on July 1, 2019, a 16.00% increase since the 2010 United States Census.[5] The center of population of Utah is located in Utah County in the city of Lehi.[64] Much of the population lives in cities and towns along the Wasatch Front, a metropolitan region that runs north–south with the Wasatch Mountains rising on the eastern side. Growth outside the Wasatch Front is also increasing. The St. George metropolitan area is currently the second fastest-growing in the country after the Las Vegas metropolitan area, while the Heber micropolitan area is the second fastest-growing micropolitan area in the country (behind Palm Coast, Florida).[65]
122
+
123
+ Utah contains five metropolitan areas (Logan, Ogden-Clearfield, Salt Lake City, Provo-Orem, and St. George), and six micropolitan areas (Brigham City, Heber, Vernal, Price, Richfield, and Cedar City).
124
+
125
+ Utah ranks among the highest in total fertility rate, 47th in teenage pregnancy, lowest in percentage of births out of wedlock, lowest in number of abortions per capita, and lowest in percentage of teen pregnancies terminated in abortion. However, statistics relating to pregnancies and abortions may be artificially low because parental notification requirements lead some teenagers to go out of state for abortions.[66][67] Utah has the lowest child poverty rate in the country, despite its young demographics.[68] According to the Gallup-Healthways Global Well-Being Index, as of 2012[update] Utahns ranked fourth in overall well-being in the United States.[69] A 2002 national prescription drug study determined that antidepressant drugs were "prescribed in Utah more often than in any other state, at a rate nearly twice the national average".[70] The data shows that depression rates in Utah are no higher than the national average.[71]
126
+
127
+ At the 2010 Census, 86.1% of the population was non-Hispanic White,[72] down from 93.8% in 1990,[73] 1% non-Hispanic Black or African American, 1.2% non-Hispanic Native American and Alaska Native, 2% non-Hispanic Asian, 0.9% non-Hispanic Native Hawaiian and Other Pacific Islander, 0.1% from some other race (non-Hispanic) and 1.8% of two or more races (non-Hispanic). 13.0% of Utah's population was of Hispanic, Latino, or Spanish origin (of any race).
128
+
129
+ The largest ancestry groups in the state are:
130
+
131
+ Most Utahns are of Northern European descent.[76] In 2011, one-third of Utah's workforce was reported to be bilingual, a skill developed through a program of second-language acquisition beginning in elementary school and related to Mormonism's missionary goals for its young people.[77]
132
+
133
+ In 2011, 28.6% of Utah's population younger than the age of one were ethnic minorities, meaning they had at least one parent who was of a race other than non-Hispanic white.[78]
134
+
135
+ As of 2017, 62.8% of Utahns are counted as members of the LDS Church.[80][81] This declined to 61.2% in 2018[82] and to 60.7% in 2019.[83] Members of the LDS Church currently make up between 34% and 41% of the population within Salt Lake City. However, many of the other major population centers, such as Provo, Logan, Tooele, and St. George, tend to be predominantly LDS, along with many suburban and rural areas. The LDS Church has the largest number of congregations, numbering 4,815 wards.[84]
136
+
137
+ Though the LDS Church officially maintains a policy of neutrality in regard to political parties,[85] the church's doctrine has a strong regional influence on politics.[86] Another effect of its doctrine can be seen in Utah's high birth rate (25 percent higher than the national average; the highest for a state in the U.S.).[87] The Mormons in Utah tend to have conservative views on most political issues, and the majority of voter-age Utahns are unaffiliated voters (60%) who vote overwhelmingly Republican.[88] Mitt Romney received 72.8% of the Utahn vote in 2012, while John McCain polled 62.5% in the 2008 United States presidential election and George W. Bush received 70.9% in 2004. In 2010 the Association of Religion Data Archives (ARDA) reported that the three largest denominational groups in Utah were the LDS Church with 1,910,504 adherents, the Catholic Church with 160,125 adherents, and the Southern Baptist Convention with 12,593 adherents.[89] There is a small but growing Jewish presence in the state.[90][91]
138
+
139
+ According to results from the 2010 United States Census, combined with official LDS Church membership statistics, church members represented 62.1% of Utah's total population. The Utah county with the lowest percentage of church members was Grand County, at 26.5%, while the county with the highest percentage was Morgan County, at 86.1%. In addition, the result for the most populated county, Salt Lake County, was 51.4%.[11]
140
+
141
+ According to a Gallup poll, Utah had the third-highest number of people reporting as "Very Religious" in 2015, at 55% (trailing only Mississippi and Alabama). However, it was near the national average of people reporting as "Nonreligious" (31%), and featured the smallest percentage of people reporting as "Moderately Religious" (15%) of any state, being eight points lower than second-lowest state Vermont.[92] In addition, it had the highest average weekly church attendance of any state, at 51%.[93]
142
+
143
+ The official language in the state of Utah is English. Utah English is primarily a merger of Northern and Midland American dialects carried west by LDS Church members, whose original New York dialect later incorporated features from southern Ohio and central Illinois. Conspicuous in the speech of some in the central valley, although less frequent now in Salt Lake City, is a reversal of vowels, so that farm and barn sound like form and born and, conversely, form and born sound like farm and barn.[citation needed]
144
+
145
+ In 2000, 87.5% of all state residents five years of age or older spoke only English at home, a decrease from 92.2% in 1990.
146
+
147
+ Utah has the highest total birth rate[87] and, accordingly, the youngest population of any U.S. state. In 2010, the state's population was 50.2% male and 49.8% female. The life expectancy is 79.3 years.
148
+
149
+ According to the Bureau of Economic Analysis, the gross state product of Utah in 2012 was US$130.5 billion, or 0.87% of the total United States GDP of US$14.991 trillion for the same year.[96] The per capita personal income was $45,700 in 2012. Major industries of Utah include mining, cattle ranching, salt production, and government services.
150
+
151
+ According to the 2007 State New Economy Index, Utah is ranked the top state in the nation for Economic Dynamism, determined by "the degree to which state economies are knowledge-based, globalized, entrepreneurial, information technology-driven and innovation-based". In 2014, Utah was ranked number one in Forbes' list of "Best States For Business".[97] A November 2010 article in Newsweek magazine highlighted Utah and particularly the Salt Lake City area's economic outlook, calling it "the new economic Zion", and examined how the area has been able to bring in high-paying jobs and attract high-tech corporations to the area during a recession.[98] As of September 2014[update], the state's unemployment rate was 3.5%.[99] In terms of "small business friendliness", in 2014 Utah emerged as number one, based on a study drawing upon data from more than 12,000 small business owners.[100]
152
+
153
+ In eastern Utah petroleum production is a major industry.[101] Near Salt Lake City, petroleum refining is done by a number of oil companies. In central Utah, coal production accounts for much of the mining activity.
154
+
155
+ According to Internal Revenue Service tax returns, Utahns rank first among all U.S. states in the proportion of income given to charity by the wealthy. This is due to the standard ten percent of all earnings that Mormons give to the LDS Church.[68] According to the Corporation for National and Community Service, Utah had an average of 884,000 volunteers between 2008 and 2010, each of whom contributed an average of 89.2 hours. This figure equates to $3.8 billion of service contributed, ranking Utah number one in the nation for volunteerism.[102]
156
+
157
+ Utah collects personal income tax; since 2008 the tax has been a flat five percent for all taxpayers.[103] The state sales tax has a base rate of 6.45 percent,[104] with cities and counties levying additional local sales taxes that vary among the municipalities. Property taxes are assessed and collected locally. Utah does not charge intangible property taxes and does not impose an inheritance tax.
158
+
159
+ Tourism is a major industry in Utah. With five national parks (Arches, Bryce Canyon, Canyonlands, Capitol Reef, and Zion), Utah has the third most national parks of any state after Alaska and California. In addition, Utah features eight national monuments (Cedar Breaks, Dinosaur, Grand Staircase-Escalante, Hovenweep, Natural Bridges, Bears Ears, Rainbow Bridge, and Timpanogos Cave), two national recreation areas (Flaming Gorge and Glen Canyon), seven national forests (Ashley, Caribou-Targhee, Dixie, Fishlake, Manti-La Sal, Sawtooth, and Uinta-Wasatch-Cache), and numerous state parks and monuments.
160
+
161
+ The Moab area, in the southeastern part of the state, is known for its challenging mountain biking trails, including Slickrock. Moab also hosts the famous Moab Jeep Safari semiannually.
162
+
163
+ Utah has seen an increase in tourism since the 2002 Winter Olympics. Park City is home to the United States Ski Team. Utah's ski resorts are primarily located in northern Utah near Salt Lake City, Park City, Ogden, and Provo. Between 2007 and 2011, Deer Valley in Park City was ranked the top ski resort in North America in a survey organized by Ski Magazine.[105]
164
+
165
+ Utah has many significant ski resorts. The 2009 Ski Magazine reader survey concluded that six of the top ten resorts deemed most "accessible", and six of the top ten with the best snow conditions, were located in Utah.[106] In Southern Utah, Brian Head Ski Resort is located in the mountains near Cedar City. Former Olympic venues, including Utah Olympic Park and Utah Olympic Oval, are still in operation for training and competition and allow the public to participate in numerous activities including ski jumping, bobsleigh, and speed skating.
166
+
167
+ Utah features many cultural attractions such as Temple Square, the Sundance Film Festival, the Red Rock Film Festival, the DOCUTAH Film Festival, the Utah Data Center, and the Utah Shakespearean Festival. Temple Square is ranked as the 16th most visited tourist attraction in the United States by Forbes magazine, with more than five million annual visitors.[107]
168
+
169
+ Other attractions include Monument Valley, the Great Salt Lake, the Bonneville Salt Flats, and Lake Powell.
170
+
171
+ The state of Utah relies heavily on income from tourists and travelers visiting the state's parks and ski resorts, and thus the need to "brand" Utah and create an impression of the state throughout the world has led to several state slogans, the most famous of which is "The Greatest Snow on Earth", which has been in official use in Utah since 1975 (although the slogan was in unofficial use as early as 1962) and now adorns nearly 50 percent of the state's license plates. In 2001, Utah Governor Mike Leavitt approved a new state slogan, "Utah! Where Ideas Connect", which lasted until March 10, 2006, when the Utah Travel Council and the office of Governor Jon Huntsman announced that "Life Elevated" would be the new state slogan.[108]
172
+
173
+ Beginning in the late 19th century with the state's mining boom (including the Bingham Canyon Mine, among the world's largest open pit mines), companies attracted large numbers of immigrants with job opportunities. Since the days of the Utah Territory mining has played a major role in Utah's economy. Historical mining towns include Mercur in Tooele County, Silver Reef in Washington County, Eureka in Juab County, Park City in Summit County and numerous coal mining camps throughout Carbon County such as Castle Gate, Spring Canyon, and Hiawatha.[109]
174
+
175
+ These settlements were characteristic of the boom and bust cycle that dominated mining towns of the American West. Park City, Utah, and Alta, Utah were boom towns in the early twentieth century. Rich silver mines in the mountains adjacent to the towns led many people to flock to them in search of wealth. During the early part of the Cold War era, uranium was mined in eastern Utah. Today mining activity still plays a major role in the state's economy. Minerals mined in Utah include copper, gold, silver, molybdenum, zinc, lead, and beryllium. Fossil fuels including coal, petroleum, and natural gas continue to play a large role in Utah's economy, especially in the eastern part of the state in counties such as Carbon, Emery, Grand, and Uintah.[109]
176
+
177
+ In 2007, nine people were killed in the Crandall Canyon Mine collapse.
178
+
179
+ On March 22, 2013, one miner died and another was injured after they became trapped in a cave-in at a part of the Castle Valley Mining Complex, about 16 kilometres (9.9 mi) west of the small mining town of Huntington in Emery County.[110]
180
+
181
+
182
+
183
+ Utah has the potential to generate 31.6 TWh/year from 13.1 GW of wind power, and 10,290 TWh/year from solar power using 4,048 GW of photovoltaic (PV), including 5.6 GW of rooftop photovoltaic, and 1,638 GW of concentrated solar power.[116]
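+
+ As a rough back-of-envelope reading of these figures (a sketch only, assuming the quoted annual outputs and nameplate capacities describe the same resource base), the implied average capacity factor is the annual energy divided by what the installed capacity would produce if it ran continuously all year:

```python
# Implied capacity factor = annual energy / (nameplate capacity * hours in a year).
# The inputs below are the quoted Utah potentials; the helper itself is generic.
HOURS_PER_YEAR = 8760

def capacity_factor(annual_twh: float, capacity_gw: float) -> float:
    """Average fraction of nameplate capacity actually produced over a year."""
    annual_gwh = annual_twh * 1000          # 1 TWh = 1,000 GWh
    max_gwh = capacity_gw * HOURS_PER_YEAR  # output if running flat-out all year
    return annual_gwh / max_gwh

print(f"wind:  {capacity_factor(31.6, 13.1):.1%}")          # ~27.5%
print(f"solar: {capacity_factor(10290, 4048 + 1638):.1%}")  # ~20.7%, PV and CSP combined
```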
184
+
185
+ I-15 and I-80 are the main interstate highways in the state, where they intersect and briefly merge near downtown Salt Lake City. I-15 traverses the state north-to-south, entering from Arizona near St. George, paralleling the Wasatch Front, and crossing into Idaho near Portage. I-80 spans northern Utah east-to-west, entering from Nevada at Wendover, crossing the Wasatch Mountains east of Salt Lake City, and entering Wyoming near Evanston. I-84 West enters from Idaho near Snowville (from Boise) and merges with I-15 from Tremonton to Ogden, then heads southeast through the Wasatch Mountains before terminating at I-80 near Echo Junction.
186
+
187
+ I-70 splits from I-15 at Cove Fort in central Utah and heads east through mountains and rugged desert terrain, providing quick access to the many national parks and national monuments of southern Utah, and has been noted for its beauty. The 103 mi (166 km) stretch from Salina to Green River is the country's longest stretch of interstate without services and, when completed in 1970, was the longest stretch of entirely new highway constructed in the U.S. since the Alaska Highway was completed in 1943.
188
+
189
+ TRAX, a light rail system in the Salt Lake Valley, consists of three lines. The Blue Line (formerly Salt Lake/Sandy Line) begins in the suburb of Draper and ends in Downtown Salt Lake City. The Red Line (Mid-Jordan/University Line) begins in the Daybreak Community of South Jordan, a southwestern valley suburb, and ends at the University of Utah. The Green Line begins in West Valley City, passes through downtown Salt Lake City, and ends at Salt Lake City International Airport.
190
+
191
+ The Utah Transit Authority (UTA), which operates TRAX, also operates a bus system that stretches across the Wasatch Front, west into Grantsville, and east into Park City. In addition, UTA provides winter service to the ski resorts east of Salt Lake City, Ogden, and Provo. Several bus companies also provide access to the ski resorts in winter, and local bus companies also serve the cities of Cedar City, Logan, Park City, and St. George. A commuter rail line known as FrontRunner, also operated by UTA, runs between Ogden and Provo via Salt Lake City. Amtrak's California Zephyr, with one train in each direction daily, runs east–west through Utah with stops in Green River, Helper, Provo, and Salt Lake City.
192
+
193
+ Salt Lake City International Airport is the only international airport in the state and serves as one of the hubs for Delta Air Lines. The airport has consistently ranked first in on-time departures and had the fewest cancellations among U.S. airports.[117] The airport has non-stop service to more than a hundred destinations throughout the United States, Canada, and Mexico, as well as to Amsterdam, London and Paris. Canyonlands Field (near Moab), Cedar City Regional Airport, Ogden-Hinckley Airport, Provo Municipal Airport, St. George Regional Airport, and Vernal Regional Airport all provide limited commercial air service. A new regional airport at St. George opened on January 12, 2011. SkyWest Airlines is also headquartered in St. George and maintains a hub at Salt Lake City.
194
+
195
+ Utah government is divided into three branches: executive, legislative, and judicial. The current governor of Utah is Gary Herbert,[118] who was sworn in on August 11, 2009. The governor is elected for a four-year term. The Utah State Legislature consists of a Senate and a House of Representatives. State senators serve four-year terms and representatives two-year terms. The Utah Legislature meets each year in January for an annual 45-day session.
196
+
197
+ The Utah Supreme Court is the court of last resort in Utah. It consists of five justices, who are appointed by the governor, and then subject to retention election. The Utah Court of Appeals handles cases from the trial courts.[119] Trial level courts are the district courts and justice courts. All justices and judges, like those on the Utah Supreme Court, are subject to retention election after appointment.
198
+
199
+ Utah is divided into political jurisdictions designated as counties. Since 1918 there have been 29 counties in the state, ranging from 298 to 7,819 square miles (772 to 20,300 km2).
200
+
201
+ Utah granted full voting rights to women in 1870, 26 years before becoming a state. Among all U.S. states, only Wyoming granted suffrage to women earlier.[121] However, in 1887 the Edmunds-Tucker Act was passed by Congress in an effort to curtail Mormon influence in the territorial government. One of the provisions of the Act was the repeal of women's suffrage; full suffrage was not returned until Utah was admitted to the Union in 1896.
202
+
203
+ Utah is one of the 15 states that have not ratified the U.S. Equal Rights Amendment.[122]
204
+
205
+ In March 2018, Utah passed America's first "free-range parenting" bill. The bill was signed into law by Republican Governor Gary Herbert and states that parents who allow their children to engage in certain activities without supervision are not considered neglectful.[123][124]
206
+
207
+ The constitution of Utah was enacted in 1895. Notably, the constitution outlawed polygamy, as requested by Congress when Utah had applied for statehood, and reestablished the territorial practice of women's suffrage. Utah's Constitution has been amended many times since its inception.[125]
208
+
209
+ Utah's laws in regard to alcohol, tobacco, and gambling are strict. Utah is an alcoholic beverage control state. The Utah Department of Alcoholic Beverage Control regulates the sale of alcohol; wine and spirituous liquors may be purchased only at state liquor stores, and local laws may prohibit the sale of beer and other alcoholic beverages on Sundays. The state bans the sale of fruity alcoholic drinks at grocery stores and convenience stores; such drinks must carry state-approved labels on the front of the product, in bold capitalized lettering, telling consumers that the drinks contain alcohol and at what percentage. The Utah Indoor Clean Air Act is a statewide ban that prohibits smoking in many public places.[126] Utah and Hawaii are the only two states in the United States to outlaw all forms of gambling.
210
+
211
+ Same-sex marriage became legal in Utah on December 20, 2013, when Judge Robert J. Shelby of the United States District Court for the District of Utah issued a ruling in Kitchen v. Herbert.[127][128] As of close of business December 26, more than 1,225 marriage licenses had been issued, with at least 74 percent, or 905 licenses, issued to gay and lesbian couples.[129] The state Attorney General's office was granted a stay of the ruling by the United States Supreme Court on January 6, 2014, while the Tenth Circuit Court of Appeals considered the case.[130] On Monday, October 6, 2014, the Supreme Court of the United States declined a writ of certiorari, and the 10th Circuit Court issued its mandate later that day, lifting the stay. Same-sex marriages commenced again in Utah that day.[131]
212
+
213
+ In the late 19th century, the federal government took issue with polygamy in the LDS Church. The LDS Church discontinued plural marriage in 1890, and in 1896 Utah gained admission to the Union. Many new settlers arrived in the area soon after the Mormon pioneers. Relations have often been strained between the LDS population and the non-LDS population.[132] These tensions have played a large part in Utah's history (Liberal Party vs. People's Party).
214
+
215
+ Utah votes predominantly Republican. Self-identified Latter-day Saints are more likely to vote for the Republican ticket than non-Mormons. Utah is one of the most Republican states in the nation.[133][134] Utah was the single most Republican-leaning state in the country in every presidential election from 1976 to 2004, measured by the percentage point margin between the Republican and Democratic candidates. In 2008 Utah was only the third-most Republican state (after Wyoming and Oklahoma), but in 2012, with Mormon Mitt Romney atop the Republican ticket, Utah returned to its position as the most Republican state. In 2016, however, Republican Donald Trump carried the state (the thirteenth consecutive win by a Republican presidential candidate) with only a plurality, the first time that had happened since 1992.
216
+
217
+ Both of Utah's U.S. senators, Mitt Romney and Mike Lee, are Republicans. Three more Republicans—Rob Bishop, Chris Stewart, and John Curtis—represent Utah in the United States House of Representatives. Ben McAdams, the sole Democratic member of the Utah delegation, represents the 4th congressional district. After Jon Huntsman Jr. resigned to serve as U.S. Ambassador to China, Gary Herbert was sworn in as governor on August 11, 2009. Herbert was elected to serve out the remainder of the term in a special election in 2010, defeating the Democratic nominee, Salt Lake County Mayor Peter Corroon, with 64% of the vote. He won election to a full four-year term in 2012, defeating the Democrat Peter Cooke with 68% of the vote.
218
+
219
+ The LDS Church maintains an official policy of neutrality with regard to political parties and candidates.[85]
220
+
221
+ In the 1970s, then-Apostle Ezra Taft Benson was quoted by the Associated Press as saying that it would be difficult for a faithful Latter-day Saint to be a liberal Democrat.[135] Although the LDS Church has officially repudiated such statements on many occasions, Democratic candidates—including LDS Democrats—believe Republicans capitalize on the perception that the Republican Party is doctrinally superior.[136] Political scientist and pollster Dan Jones explains this disparity by noting that the national Democratic Party is associated with liberal positions on gay marriage and abortion, both of which the LDS Church opposes.[137] The Republican Party in heavily Mormon Utah County presents itself as the superior choice for Latter-day Saints. Even though Utah Democratic candidates are predominantly LDS, socially conservative, and pro-life, no Democrat has won in Utah County since 1994.[138]
222
+
223
+ David Magleby, dean of Social and Behavioral Sciences at Brigham Young University, a lifelong Democrat and a political analyst, asserts that the Republican Party actually has more conservative positions than the LDS Church. Magleby argues that the locally conservative Democrats are in better accord with LDS doctrine.[139] For example, the Republican Party of Utah opposes almost all abortions while Utah Democrats take a more liberal approach, although more conservative than their national counterparts. On Second Amendment issues, the state GOP has been at odds with the LDS Church position opposing concealed firearms in places of worship and in public spaces.
224
+
225
+ In 1998 the church expressed concern that Utahns perceived the Republican Party as an LDS institution and authorized lifelong Democrat and Seventy Marlin Jensen to promote LDS bipartisanship.[135]
226
+
227
+ Utah is much more conservative than the United States as a whole, particularly on social issues. Compared to other Republican-dominated states in the Mountain West such as Wyoming, Utah politics have a more moralistic and less libertarian character, according to David Magleby.[140]
228
+
229
+ About 80% of Utah's Legislature are members of The Church of Jesus Christ of Latter-day Saints,[141] while members account for 61 percent of the population.[142] Since becoming a state in 1896, Utah has had only two non-Mormon governors.[143]
230
+
231
+ In 2006, the legislature passed a measure aimed at banning joint custody for a non-biological parent of a child; it was vetoed by the governor, a reciprocal benefits supporter.
232
+
233
+ Carbon County's Democrats are generally made up of members of the large Greek, Italian, and Southeastern European communities, whose ancestors migrated in the early 20th century to work in the extensive mining industry. The views common amongst this group are heavily influenced by labor politics, particularly of the New Deal Era.[144]
234
+
235
+ The state's most Republican areas tend to be Utah County, home to Brigham Young University in the city of Provo, and nearly all the rural counties.[145][146] These areas generally hold socially conservative views in line with those of the national Religious Right. The most Democratic areas of the state currently lie in and around Salt Lake City proper.
236
+
237
+ The state has not voted for a Democrat for president since 1964. Historically, Republican presidential nominees have scored some of their best margins of victory here. Utah was the Republicans' best state in the 1976,[147] 1980,[148] 1984,[149] 1988,[150] 1996,[151] 2000,[152] and 2004[153] elections. In 1992, Utah was the only state in the nation where Democratic candidate Bill Clinton finished behind both Republican candidate George H. W. Bush and independent candidate Ross Perot.[154] In 2004, Republican George W. Bush won every county in the state, and Utah gave him his largest margin of victory of any state. He won the state's five electoral votes by a margin of 46 percentage points with 71.5% of the vote. In the 1996 presidential election, the Republican candidate received a smaller 54% of the vote, while the Democrat earned 34%.[155]
238
+
239
+ Utah's population is concentrated in two areas, the Wasatch Front in the north-central part of the state, with over two million; and Washington County, in southwestern Utah, locally known as "Dixie", with more than 150,000 residents in the metropolitan area.
240
+
241
+ According to the 2010 Census, Utah was the second fastest-growing state (at 23.8 percent) in the United States between 2000 and 2010 (behind Nevada). St. George, in the southwest, is the second fastest-growing metropolitan area in the United States, trailing Greeley, Colorado.
242
+
243
+ The three fastest-growing counties from 2000 to 2010 were Wasatch County (54.7%), Washington County (52.9%), and Tooele County (42.9%). However, Utah County added the most people (148,028). Between 2000 and 2010, Saratoga Springs (1,673%), Herriman (1,330%), Eagle Mountain (893%), Cedar Hills (217%), South Willard (168%), Nibley (166%), Syracuse (159%), West Haven (158%), Lehi (149%), Washington (129%), and Stansbury Park (116%) all at least doubled in population. West Jordan (35,376), Lehi (28,379), St. George (23,234), South Jordan (20,981), West Valley City (20,584), and Herriman (20,262) all added at least 20,000 people.[156]
244
+
245
+ Utah is the second-least populous U.S. state to have a major professional sports league franchise, after the Vegas Golden Knights joined the National Hockey League in 2017. The Utah Jazz of the National Basketball Association play at Vivint Smart Home Arena[157] in Salt Lake City. The team moved to the city from New Orleans in 1979 and has been one of the most consistently successful teams in the league (although they have yet to win a championship). Salt Lake City was previously host to the Utah Stars, who competed in the ABA from 1970 to 1976 and won one championship, and to the Utah Starzz of the WNBA from 1997 to 2003.
246
+
247
+ Real Salt Lake of Major League Soccer was founded in 2005 and plays its home matches at Rio Tinto Stadium in Sandy. RSL remains the only Utah major league sports team to have won a national championship, having won the MLS Cup in 2009.[158] RSL currently operates three adult teams in addition to the MLS side. Real Monarchs, competing in the second-level USL Championship, is the official reserve side for RSL. The team began play in the 2015 season at Rio Tinto Stadium,[159] remaining there until moving to Zions Bank Stadium, located at RSL's training center in Herriman, for the 2018 season and beyond.[160] Utah Royals FC, which shares ownership with RSL and also plays at Rio Tinto Stadium, has played in the National Women's Soccer League, the top level of U.S. women's soccer, since 2018.[161] Before the creation of the Royals, RSL's main women's side had been Real Salt Lake Women, which began play in the Women's Premier Soccer League in 2008 and moved to United Women's Soccer in 2016. RSL Women currently play at Utah Valley University in Orem.
248
+
249
+ The Utah Blaze began play in the original version of the Arena Football League in 2006, and remained in the league until it folded in 2009. The Blaze returned to the league at its relaunch in 2010, playing until the team's demise in 2013. They competed originally at the Maverik Center in West Valley City, and later at Vivint Smart Home Arena when it was known as EnergySolutions Arena.
250
+
251
+ Utah's highest level minor league baseball team is the Salt Lake Bees, who play at Smith's Ballpark in Salt Lake City and are part of the AAA level Pacific Coast League. Utah also has one minor league hockey team, the Utah Grizzlies, who play at the Maverik Center and compete in the ECHL.
252
+
253
+ Utah has six universities that compete in Division I of the NCAA, with a seventh set to move to Division I in 2020. Three of the schools have football programs that participate in the top-level Football Bowl Subdivision: Utah in the Pac-12 Conference, Utah State in the Mountain West Conference, and BYU as an independent (although BYU competes in the non-football West Coast Conference for most other sports). In addition, Weber State and Southern Utah (SUU) compete in the Big Sky Conference of the FCS. Utah Valley, which has no football program, is a full member of the Western Athletic Conference (WAC). Dixie State, currently a member of NCAA Division II, will begin a four-year transition to Division I in 2020 as a member of the WAC. Since the WAC has been a non-football conference since 2013, Dixie State football will play as an FCS independent.
254
+
255
+ Salt Lake City hosted the 2002 Winter Olympics. After early financial struggles and scandal, the 2002 Games eventually ranked among the most successful Winter Olympics in history from a marketing and financial standpoint.[citation needed] Watched by more than two billion viewers, the Games ended with a profit of $100 million.[162]
256
+
257
+ Utah has hosted professional golf tournaments such as the Uniting Fore Care Classic and currently the Utah Championship.
258
+
259
+ Rugby has been growing quickly in the state of Utah, from 17 teams in 2009 to 70 as of 2013, with more than 3,000 players and more than 55 high school varsity teams.[163][164] The growth has been inspired in part by the 2008 movie Forever Strong.[164] Utah fields two of the most competitive teams in the nation in college rugby—BYU and Utah.[163] BYU won the National Championship in 2009, 2012, 2013, 2014, and 2015. Formed in 2017, Utah Warriors is a Major League Rugby team based in Salt Lake City.[165]
260
+
261
+ Utah is the setting of or the filming location for many books, films,[166] television series,[166] music videos, and video games.
262
+
263
+ Utah's capital, Salt Lake City, is the final location in the video game The Last of Us.[167]
264
+
265
+ Monument Valley, in southeastern Utah, was used to film many Hollywood Westerns.
266
+
267
+ The otherworldly look of the Bonneville Salt Flats has been used in many movies and commercials.
268
+
269
+ This article incorporates public domain material from the website of the Division of Utah State Parks and Recreation.
en/5888.html.txt ADDED
@@ -0,0 +1 @@
 
 
1
+ You can also find me on Wikipedia in French, Vikidia in French, and Vikidia in English. --CRH (talk) 05:51, 1 May 2015 (UTC)
en/5889.html.txt ADDED
@@ -0,0 +1 @@
 
 
1
+ Other reasons this message may be displayed:
en/589.html.txt ADDED
@@ -0,0 +1,71 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+ A boat is a watercraft of a large range of types and sizes, but generally smaller than a ship, which is distinguished by its larger size, shape, cargo or passenger capacity, or its ability to carry boats.
4
+
5
+ Small boats are typically found on inland waterways such as rivers and lakes, or in protected coastal areas. However, some boats, such as the whaleboat, were intended for use in an offshore environment. In modern naval terms, a boat is a vessel small enough to be carried aboard a ship. Anomalous definitions exist, as lake freighters 1,000 feet (300 m) long on the Great Lakes are called "boats".
6
+
7
+ Boats vary in proportion and construction methods with their intended purpose, available materials, or local traditions. Canoes have been used since prehistoric times and remain in use throughout the world for transportation, fishing, and sport. Fishing boats vary widely in style partly to match local conditions. Pleasure craft used in recreational boating include ski boats, pontoon boats, and sailboats. House boats may be used for vacationing or long-term residence. Lighters are used to convey cargo to and from large ships unable to get close to shore. Lifeboats have rescue and safety functions.
8
+
9
+ Boats can be propelled by manpower (e.g. rowboats and paddle boats), wind (e.g. sailboats), and motor (including gasoline, diesel, and electric).
10
+
11
+ Boats have served as transportation since the earliest times.[1] Circumstantial evidence, such as the early settlement of Australia over 40,000 years ago, findings in Crete dated 130,000 years ago,[2] and findings in Flores dated to 900,000 years ago,[3] suggests that boats have been used since prehistoric times. The earliest boats are thought to have been dugouts,[4] and the oldest boats found by archaeological excavation date from around 7,000–10,000 years ago. The oldest recovered boat in the world, the Pesse canoe, found in the Netherlands, is a dugout made from the hollowed tree trunk of a Pinus sylvestris that was constructed somewhere between 8200 and 7600 BC. This canoe is exhibited in the Drents Museum in Assen, Netherlands.[5][6] Other very old dugout boats have also been recovered.[7][8][9]
12
+ Rafts have operated for at least 8,000 years.[10]
13
+ A 7,000-year-old seagoing reed boat has been found in Kuwait.[11]
14
+ Boats were used between 4000 and 3000 BC in Sumer,[1] ancient Egypt[12] and in the Indian Ocean.[1]
15
+
16
+ Boats played an important role in the commerce between the Indus Valley Civilization and Mesopotamia.[13] Evidence of varying models of boats has also been discovered at various Indus Valley archaeological sites.[14][15]
17
+ Uru craft originate in Beypore, a village in south Calicut, Kerala, in southwestern India. This type of mammoth wooden ship was constructed[when?] solely of teak, with a transport capacity of 400 tonnes. The ancient Arabs and Greeks used such boats as trading vessels.[16]
18
+
19
+ The historians Herodotus, Pliny the Elder and Strabo record the use of boats for commerce, travel, and military purposes.[14]
20
+
21
+ Boats can be categorized into three main types: unpowered or human-powered boats (such as rafts, canoes, and rowing boats), sailboats (propelled mainly by sail), and motorboats (propelled by mechanical means such as engines).
22
+
23
+ The hull is the main, and in some cases only, structural component of a boat. It provides both capacity and buoyancy. The keel is a boat's "backbone", a lengthwise structural member to which the perpendicular frames are fixed. On most boats a deck covers the hull, in part or whole. While a ship often has several decks, a boat is unlikely to have more than one. Above the deck are often lifelines connected to stanchions, bulwarks perhaps topped by gunnels, or some combination of the two. A cabin may protrude above the deck forward, aft, along the centerline, or covering much of the length of the boat. Vertical structures dividing the internal spaces are known as bulkheads.
24
+
25
+ The forward end of a boat is called the bow, the aft end the stern. Facing forward the right side is referred to as starboard and the left side as port.
26
+
27
+ Until the mid-19th century most boats were made of natural materials, primarily wood, although reed, bark and animal skins were also used. Early boats include the bound-reed style of boat seen in Ancient Egypt, the birch bark canoe, the animal hide-covered kayak[17] and coracle and the dugout canoe made from a single log.
28
+
29
+ By the mid-19th century, many boats had been built with iron or steel frames but still planked in wood. In 1855 ferro-cement boat construction was patented by the French, who coined the name "ferciment". This is a system by which a steel or iron wire framework is built in the shape of a boat's hull and covered over with cement. Reinforced with bulkheads and other internal structure it is strong but heavy, easily repaired, and, if sealed properly, will not leak or corrode.[18]
30
+
31
+ As the forests of Britain and Europe continued to be over-harvested to supply the keels of larger wooden boats, and the Bessemer process (patented in 1855) cheapened the cost of steel, steel ships and boats began to be more common. By the 1930s boats built entirely of steel from frames to plating were seen replacing wooden boats in many industrial uses and fishing fleets. Private recreational boats of steel remain uncommon. In 1895 WH Mullins produced steel boats of galvanized iron and by 1930 became the world's largest producer of pleasure boats.
32
+
33
+ Mullins also offered boats in aluminum from 1895 through 1899 and once again in the 1920s,[19][1] but it wasn't until the mid-20th century that aluminum gained widespread popularity. Though much more expensive than steel, aluminum alloys exist that do not corrode in salt water, allowing a similar load-carrying capacity to steel at much less weight.
34
+
35
+ Around the mid-1960s, boats made of fiberglass (aka "glassfibre") became popular, especially for recreational boats. Fiberglass is also known as "GRP" (glass-reinforced plastic) in the UK, and "FRP" (for fiber-reinforced plastic) in the US. Fiberglass boats are strong, and do not rust, corrode, or rot. Instead, they are susceptible to structural degradation from sunlight and extremes in temperature over their lifespan. Fiberglass structures can be made stiffer with sandwich panels, where the fiberglass encloses a lightweight core such as balsa[20] or foam.
36
+
37
+ Cold moulding is a modern construction method, using wood as the structural component. In cold moulding very thin strips of wood are layered over a form. Each layer is coated with resin, followed by another directionally alternating layer laid on top. Subsequent layers may be stapled or otherwise mechanically fastened to the previous, or weighted or vacuum bagged to provide compression and stabilization until the resin sets.
38
+
39
+ The most common means of boat propulsion are motors (inboard or outboard), sails, and human power (oars, paddles, or poles).
40
+
41
+ A boat displaces its weight in water, regardless whether it is made of wood, steel, fiberglass, or even concrete. If weight is added to the boat, the volume of the hull drawn below the waterline will increase to keep the balance above and below the surface equal. Boats have a natural or designed level of buoyancy. Exceeding it will cause the boat first to ride lower in the water, second to take on water more readily than when properly loaded, and ultimately, if overloaded by any combination of structure, cargo, and water, sink.
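+
+ To make the displacement relation concrete, Archimedes' principle gives a direct estimate of how much deeper a hull rides when cargo is loaded: the added weight must equal the weight of the extra water displaced. The sketch below assumes a simple box-shaped hull with hypothetical dimensions, as an illustration rather than a statement about any particular design:

```python
# Archimedes' principle: a floating boat displaces water equal to its own weight.
FRESHWATER_DENSITY = 1000.0  # kg per cubic metre

def extra_draft_m(added_load_kg: float, waterline_length_m: float, beam_m: float,
                  water_density: float = FRESHWATER_DENSITY) -> float:
    """Extra depth (m) a roughly box-shaped hull sinks under an added load."""
    # added load = density * (waterplane area * extra draft), solved for extra draft
    waterplane_area = waterline_length_m * beam_m  # m^2, box-hull simplification
    return added_load_kg / (water_density * waterplane_area)

# Hypothetical example: 300 kg of cargo aboard a 5 m x 2 m flat-bottomed boat.
print(f"{extra_draft_m(300, 5.0, 2.0) * 100:.1f} cm deeper")  # -> 3.0 cm deeper
```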
42
+
43
+ As commercial vessels must be correctly loaded to be safe, and as the sea becomes less buoyant in brackish areas such as the Baltic, the Plimsoll line was introduced to prevent overloading.
44
+
45
+ Since 1998 all new leisure boats and barges built in Europe between 2.5 m and 24 m must comply with the EU's Recreational Craft Directive (RCD). The Directive establishes four categories that define the allowable wind and wave conditions for vessels in each class:[21] Category A (ocean), for winds that may exceed Beaufort force 8 and significant wave heights above 4 m; Category B (offshore), for winds up to force 8 and waves up to 4 m; Category C (inshore), for winds up to force 6 and waves up to 2 m; and Category D (sheltered waters), for winds up to force 4 and waves up to about 0.3 m.
46
+
47
+ A boat on the Ganges River
48
+
49
+ Babur crossing river Son; folio from an illustrated manuscript of ‘Babur-Namah’, Mughal, Akbar Period, AD 1598
50
+
51
+ A tugboat is used for towing or pushing a larger ship
52
+
53
+ A ship's derelict lifeboat, built of steel, rusting away in the wetlands of Folly Island, South Carolina, United States
54
+
55
+ A boat in an Egyptian tomb, painted around 1450 BC
56
+
57
+ Dugout boats in the courtyard of the Old Military Hospital in the Historic Center of Quito
58
+
59
+ Ming Dynasty Chinese painting of the Wanli Emperor enjoying a boat ride on a river with an entourage of guards and courtiers
60
+
61
+ World's longest dragon boat on display in Phnom Penh, Cambodia
62
+
63
+ At 17 metres long, the Severn-class lifeboats are the largest operational lifeboats in the UK
64
+
65
+ Aluminum flat-bottomed boats ashore for storage
66
+
67
+ A boat shaped like a sauce bottle that was sailed across the Atlantic Ocean by Tom McClean
68
+
69
+ Anchored boats in Portovenere, Italy
70
+
71
+ A boat in Utrecht, Netherlands
en/5890.html.txt ADDED
@@ -0,0 +1 @@
 
 
1
+ Other reasons this message may be displayed:
en/5891.html.txt ADDED
@@ -0,0 +1,124 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+ Utrecht (/ˈjuːtrɛkt/ YOO-trekt, also UK: /juːˈtrɛxt/ yoo-TREKHT,[6][7] Dutch: [ˈytrɛxt] (listen)) is the fourth-largest city and a municipality of the Netherlands, capital and most populous city of the province of Utrecht. It is located in the eastern corner of the Randstad conurbation, in the very centre of mainland Netherlands; it had a population of 357,179 as of 2019.[8]
4
+
5
+ Utrecht's ancient city centre features many buildings and structures, several dating as far back as the High Middle Ages. It has been the religious centre of the Netherlands since the 8th century. It lost its status as a prince-bishopric but remains the main religious centre in the country. Utrecht was the most important city in the Netherlands until the Dutch Golden Age, when it was surpassed by Amsterdam as the country's cultural centre and most populous city.
6
+
7
+ Utrecht is host to Utrecht University, the largest university in the Netherlands, as well as several other institutions of higher education. Due to its central position within the country, it is an important transport hub for both rail and road transport; the busiest train station in the Netherlands, Utrecht Centraal, is in the city of Utrecht. It has the second highest number of cultural events in the Netherlands, after Amsterdam.[9]
8
+ In 2012, Lonely Planet included Utrecht in the top 10 of the world's unsung places.[10]
9
+
10
+ Although there is some evidence of earlier inhabitation in the region of Utrecht, dating back to the Stone Age (approx. 2200 BCE) and settling in the Bronze Age (approx. 1800–800 BCE),[11] the founding date of the city is usually related to the construction of a Roman fortification (castellum), probably built in around 50 CE.
11
+ A series of such fortresses was built after the Roman emperor Claudius decided the empire should not expand further north. To consolidate the border, the Limes Germanicus defense line was constructed[12] along the main branch of the river Rhine, which at that time flowed through a more northern bed compared to today (what is now the Kromme Rijn). These fortresses were designed to house a cohort of about 500 Roman soldiers. Near the fort, settlements would grow housing artisans, traders and soldiers' wives and children.
12
+
13
+ In Roman times, the name of the Utrecht fortress was simply Traiectum, denoting its location at a possible Rhine crossing. Traiectum became Dutch Trecht, with the U from Old Dutch "uut" (downriver) added to distinguish U-trecht from Maas-tricht.[13][14] In 11th-century official documents, it was Latinized as Ultra Traiectum.
14
+ Around the year 200, the wooden walls of the fortification were replaced by sturdier tuff stone walls,[15] remnants of which are still to be found below the buildings around Dom Square.
15
+
16
+ From the middle of the 3rd century, Germanic tribes regularly invaded the Roman territories. Around 275 the Romans could no longer maintain the northern border and Utrecht was abandoned.[12] Little is known about the following period, 270–650. Utrecht is first spoken of again several centuries after the Romans left. Under the influence of the growing realms of the Franks, during Dagobert I's reign in the 7th century, a church was built within the walls of the Roman fortress.[12] In ongoing border conflicts with the Frisians, this first church was destroyed.
17
+
18
+ By the mid-7th century, English and Irish missionaries set out to convert the Frisians. Pope Sergius I appointed their leader, Saint Willibrordus, as bishop of the Frisians. The tenure of Willibrordus is generally considered to be the beginning of the Bishopric of Utrecht.[12] In 723, the Frankish leader Charles Martel bestowed the fortress in Utrecht and the surrounding lands as the base of the bishops. From then on Utrecht became one of the most influential seats of power for the Roman Catholic Church in the Netherlands.
19
+ The archbishops of Utrecht were based at the uneasy northern border of the Carolingian Empire. In addition, the city of Utrecht had competition from the nearby trading centre Dorestad.[12] After the fall of Dorestad around 850, Utrecht became one of the most important cities in the Netherlands.[16] The importance of Utrecht as a centre of Christianity is illustrated by the election of the Utrecht-born Adriaan Florenszoon Boeyens as pope in 1522 (the last non-Italian pope before John Paul II).
20
+
21
+ When the Frankish rulers established the system of feudalism, the Bishops of Utrecht came to exercise worldly power as prince-bishops.[12] The territory of the bishopric not only included the modern province of Utrecht (Nedersticht, 'lower Sticht'), but also extended to the northeast. The feudal conflict of the Middle Ages heavily affected Utrecht. The prince-bishopric was involved in almost continuous conflicts with the Counts of Holland and the Dukes of Guelders.[17] The Veluwe region was seized by Guelders, but large areas in the modern province of Overijssel remained as the Oversticht.
22
+
23
+ Several churches and monasteries were built inside, or close to, the city of Utrecht. The most dominant of these was the Cathedral of Saint Martin, inside the old Roman fortress. The construction of the present Gothic building was begun in 1254 after an earlier romanesque construction had been badly damaged by fire. The choir and transept were finished from 1320 and were followed then by the ambitious Dom tower.[12] The last part to be constructed was the central nave, from 1420. By that time, however, the age of the great cathedrals had come to an end and declining finances prevented the ambitious project from being finished, the construction of the central nave being suspended before the planned flying buttresses could be finished.[12]
24
+ Besides the cathedral there were four collegiate churches in Utrecht: St. Salvator's Church (demolished in the 16th century), on the Dom square, dating back to the early 8th century.[18] Saint John (Janskerk), originating in 1040;[19] Saint Peter, building started in 1039[20] and Saint Mary's church building started around 1090 (demolished in the early 19th century, cloister survives).[21]
25
+ Besides these churches, the city housed St. Paul's Abbey,[22] the 15th-century beguinage of St. Nicholas, and a 14th-century chapter house of the Teutonic Knights.[23]
26
+
27
+ Besides these buildings which belonged to the bishopric, an additional four parish churches were constructed in the city: the Jacobikerk (dedicated to Saint James), founded in the 11th century, with the current Gothic church dating back to the 14th century;[24] the Buurkerk (Neighbourhood-church) of the 11th-century parish in the centre of the city; Nicolaichurch (dedicated to Saint Nicholas), from the 12th century[25] and the 13th-century Geertekerk (dedicated to Saint Gertrude of Nivelles).[26]
28
+
29
+ Its location on the banks of the river Rhine allowed Utrecht to become an important trade centre in the Northern Netherlands. The growing town was granted city rights by Henry V in 1122.
30
+ When the main flow of the Rhine moved south, the old bed which still flowed through the heart of the town became ever more canalized; and the wharf system was built as an inner city harbour system.[27] On the wharfs, storage facilities (werfkelders) were built, on top of which the main street, including houses, was constructed. The wharfs and the cellars are accessible from a platform at water level with stairs descending from the street level to form a unique structure.[nb 2][28] The relations between the bishop, who controlled many lands outside of the city, and the citizens of Utrecht was not always easy.[12] The bishop, for example dammed the Kromme Rijn at Wijk bij Duurstede to protect his estates from flooding. This threatened shipping for the city and led the city of Utrecht to commission a canal to ensure access to the town for shipping trade: the Vaartse Rijn, connecting Utrecht to the Hollandse IJssel at IJsselstein.
31
+
32
+ In 1528 the bishop lost secular power over both Neder- and Oversticht – which included the city of Utrecht – to Charles V, Holy Roman Emperor. Charles V combined the Seventeen Provinces (the current Benelux and the northern parts of France) as a personal union. This ended the prince-bishopric of Utrecht, as the secular rule was now the lordship of Utrecht, with the religious power remaining with the bishop, although Charles V had gained the right to appoint new bishops. In 1559 the bishopric of Utrecht was raised to archbishopric to make it the religious centre of the Northern ecclesiastical province in the Seventeen Provinces.
33
+
34
+ The transition from independence to a relatively minor part of a larger union was not easily accepted. To quell uprisings, Charles V struggled to exert his power over the city's citizens, who had fought to gain a certain level of independence from the bishops and were not willing to cede it to their new lord. The heavily fortified castle Vredenburg was built to house a large garrison whose main task was to maintain control over the city. The castle would last less than 50 years before it was demolished in an uprising in the early stages of the Dutch Revolt.
35
+
36
+ In 1579 the northern seven provinces signed the Union of Utrecht, in which they decided to join forces against Spanish rule. The Union of Utrecht is seen as the beginning of the Dutch Republic. In 1580, the new and predominantly Protestant state abolished the bishoprics, including the archbishopric of Utrecht. The stadtholders disapproved of the independent course of the Utrecht bourgeoisie and brought the city under much more direct control of the republic, shifting the power towards its dominant province Holland. This was the start of a long period of stagnation of trade and development in Utrecht. Utrecht remained an atypical city in the new republic, being about 40% Catholic in the mid-17th century, and even more so among the elite groups, who included many rural nobility and gentry with town houses there.[29]
37
+
38
+ The fortified city temporarily fell to the French invasion in 1672 (the Disaster Year); the French advance was stopped just west of Utrecht at the Old Hollandic Waterline. In 1674, only two years after the French left, the centre of Utrecht was struck by a tornado. The 15th-century halt in construction, before the planned flying buttresses could be built, now proved to be the undoing of the central section of the cathedral of St Martin, which collapsed, creating the current Dom square between the tower and choir. In 1713, Utrecht hosted one of the first international peace negotiations when the Treaty of Utrecht settled the War of the Spanish Succession. Beginning in 1723, Utrecht became the centre of the non-Roman Old Catholic Churches in the world.
39
+
40
+ In the early 19th century, the role of Utrecht as a fortified town had become obsolete. The fortifications of the Nieuwe Hollandse Waterlinie were moved east of Utrecht. The town walls could now be demolished to allow for expansion. The moats remained intact and formed an important feature of the Zocher plantsoen, an English style landscape park that remains largely intact today. Growth of the city increased when, in 1843, a railway connecting Utrecht to Amsterdam was opened. After that, Utrecht gradually became the main hub of the Dutch railway network. With the industrial revolution finally gathering speed in the Netherlands and the ramparts taken down, Utrecht began to grow far beyond its medieval centre. When the Dutch government allowed the bishopric of Utrecht to be reinstated by Rome in 1853, Utrecht became the centre of Dutch Catholicism once more. From the 1880s onward, neighbourhoods such as Oudwijk, Wittevrouwen, Vogelenbuurt to the East, and Lombok to the West were developed. New middle-class residential areas, such as Tuindorp and Oog in Al, were built in the 1920s and 1930s. During this period, several Jugendstil houses and office buildings were built, followed by Rietveld who built the Rietveld Schröder House (1924), and Dudok's construction of the city theater (1941).
41
+
42
+ During World War II, Utrecht was held by the Germans until the general German surrender of the Netherlands on 5 May 1945. British and Canadian troops that had surrounded the city entered it after that surrender, on 7 May 1945. Following the end of World War II, the city grew considerably when new neighbourhoods such as Overvecht, Kanaleneiland, Hoograven [nl] and Lunetten were built. Around 2000, the Leidsche Rijn housing area was developed as an extension of the city to the west.
43
+
44
+ The area surrounding Utrecht Centraal railway station and the station itself were developed following modernist ideas of the 1960s, in a brutalist style. This development led to the construction of the shopping mall Hoog Catharijne [nl], the music centre Vredenburg (Hertzberger, 1979), and conversion of part of the ancient canal structure into a highway (Catherijnebaan). Protest against further modernisation of the city centre followed even before the last buildings were finalised. In the early 21st century, the whole area is undergoing change again. The redeveloped music centre TivoliVredenburg opened in 2014 with the original Vredenburg and Tivoli concert and rock and jazz halls brought together in a single building.
45
+
46
+ Utrecht experiences a temperate oceanic climate (Köppen: Cfb) similar to all of the Netherlands.
47
+
48
+ Utrecht city had a population of 296,305 in 2007. It is a growing municipality and projections are that the population will surpass 392,000 by 2025.[32] As of November 2019, the city of Utrecht has a population of 357,179.[8]
49
+
50
+ Utrecht has a young population, with many inhabitants in the age category from 20 and 30 years, due to the presence of a large university. About 52% of the population is female, 48% is male. The majority of households (52.5%) in Utrecht are single-person households. About 29% of people living in Utrecht are either married, or have another legal partnership. About 3% of the population of Utrecht is divorced.[32]
51
+
52
+ For 69% of the population of Utrecht both parents were born in the Netherlands. Approximately 10% of the population consists of people with a recent migration background from Western countries, while 21% of the population has at least one parent of 'non-Western origin' (9% from Morocco, 5% Turkey, 3% Surinam and Dutch Caribbean and 5% of other countries).[32] Some of the city's boroughs have a relatively high percentage of people with a migration background – i.e. Kanaleneiland (83%) and Overvecht (57%). Like Rotterdam, Amsterdam, The Hague and other large Dutch cities, Utrecht faces some socio-economic problems. About 38% of its population either earns a minimum income or is dependent on social welfare (17% of all households). Boroughs such as Kanaleneiland, Overvecht and Hoograven consist primarily of high-rise housing developments, and are known for relatively high poverty and crime rates.[citation needed]
53
+
54
+ Utrecht has been the religious centre of the Netherlands since the 8th century. Currently it is the see of the Metropolitan Archbishop of Utrecht, the most senior Dutch Roman Catholic leader.[34][35] His ecclesiastical province covers the whole kingdom.
55
+
56
+ Utrecht is also the see of the archbishop of the Old Catholic church, titular head of the Union of Utrecht, and the location of the offices of the Protestant Church in the Netherlands, the main Dutch Protestant church.
57
+
58
+ As of 2013, the largest religion is Christianity with 28% of the population being Christian, followed by Islam with 9.5% and Hinduism with 0.8%.
59
+
60
+ Religions in Utrecht (2013)[36]
61
+
62
+ The city of Utrecht is subdivided into 10 city quarters, all of which have their own neighbourhood council and service centre for civil affairs.
63
+
64
+ Utrecht is the centre of a densely populated area, a fact which makes concise definitions of its agglomeration difficult, and somewhat arbitrary. The smaller Utrecht agglomeration of continuously built-up areas counts some 420,000 inhabitants and includes Nieuwegein, IJsselstein and Maarssen. It is sometimes argued that the nearby municipalities De Bilt, Zeist, Houten, Vianen, Driebergen-Rijsenburg (Utrechtse Heuvelrug), and Bunnik should also be counted towards the Utrecht agglomeration, bringing the total to 640,000 inhabitants. The larger region, including slightly more remote towns such as Woerden and Amersfoort, counts up to 820,000 inhabitants.[37]
65
+
66
+ Utrecht's cityscape is dominated by the Dom Tower, the tallest belfry in the Netherlands and originally part of the Cathedral of Saint Martin.[38] An ongoing debate is over whether any building in or near the centre of town should surpass the Dom Tower in height (112 m (367 ft)). Nevertheless, some tall buildings are now being constructed that will become part of the skyline of Utrecht. The second tallest building of the city, the Rabobank-tower, was completed in 2010 and stands 105 metres (344 feet) tall.[39] Two antennas will increase that height to 120 metres (394 feet). Two other buildings were constructed around the Nieuw Galgenwaard stadium (2007). These buildings, the 'Kantoortoren Galghenwert' and 'Apollo Residence', stand 85.5 metres (280.5 feet) and 64.5 metres (211.6 feet) high respectively.
67
+
68
+ Another landmark is the old centre and the canal structure in the inner city. The Oudegracht is a curved canal, partly following the ancient main branch of the Rhine. It is lined with the unique wharf-basement structures that create a two-level street along the canals.[40] The inner city has largely retained its medieval structure,[41] and the moat ringing the old town is largely intact.[42] Because of the role of Utrecht as a fortified city, construction outside the medieval centre and its city walls was restricted until the 19th century. Surrounding the medieval core there is a ring of late 19th- and early 20th-century neighbourhoods, with newer neighbourhoods positioned farther out.[43] The eastern part of Utrecht remains fairly open. The Dutch Water Line, moved east of the city in the early 19th century, required open lines of fire, thus prohibiting all permanent constructions until the middle of the 20th century on the east side of the city.[44]
69
+
70
+ Due to the past importance of Utrecht as a religious centre, several monumental churches were erected, many of which have survived.[45] Most prominent is the Dom Church. Other notable churches include the romanesque St Peter's and St John's churches; the gothic churches of St James and St Nicholas; and the Buurkerk, now converted into a museum for automatically playing musical instruments.
71
+
72
+ Because of its central location, Utrecht is well connected to the rest of the Netherlands and has a well-developed public transport network.
73
+
74
+ Utrecht Centraal is the main railway station of Utrecht and the largest in the country. There are regular intercity services to all major Dutch cities and direct services to Schiphol Airport. Utrecht Centraal is a station on the night network, providing an all-night service seven days a week to (among others) Schiphol Airport, Amsterdam and Rotterdam. International InterCityExpress (ICE) services to Germany (and beyond) through Arnhem call at Utrecht Centraal. Regular local trains to all areas surrounding Utrecht also depart from Utrecht Centraal and serve several smaller stations: Utrecht Lunetten, Utrecht Vaartsche Rijn, Utrecht Overvecht, Utrecht Leidsche Rijn, Utrecht Terwijde, Utrecht Zuilen and Vleuten. A former station, Utrecht Maliebaan, closed in 1939 and has since been converted into the Dutch Railway Museum.
75
+
76
+ The Utrecht sneltram is a light rail scheme running southwards from Utrecht Centraal to the suburbs of IJsselstein, Kanaleneiland, Lombok and Nieuwegein. The sneltram began operations in 1983 and is currently operated by the private transport company Qbuzz. On 16 December 2019 the new tram line to the Uithof started operating, creating a direct mass transit connection from the central station to the main Utrecht university campus.[46]
77
+
78
+ Utrecht is the location of the headquarters of Nederlandse Spoorwegen (English: Dutch Railways) – the largest rail operator in the Netherlands – and ProRail – the state-owned company responsible for the construction and maintenance of the country's rail infrastructure.
79
+
80
+ The main local and regional bus station of Utrecht is located adjacent to Utrecht Centraal railway station, at the East and West entrances. Due to large-scale renovation and construction works at the railway station, the station's bus stops change frequently. As a general rule, westbound buses depart from the bus station at the west entrance and other buses from the east side. Local buses in Utrecht are operated by Qbuzz – its services include a high-frequency service to the Uithof university district. The local bus fleet is one of Europe's cleanest, using only buses compliant with the Euro-VI standard as well as electric buses for inner-city transport. Regional buses from the city are operated by Arriva and Connexxion.
81
+
82
+ Utrecht Centraal railway station is also served by the pan-European services of Eurolines. Furthermore, it acts as the departure and arrival point for many coach companies serving holiday resorts in Spain and France – and, during winter, in Austria and Switzerland.
83
+
84
+ Like most Dutch cities, Utrecht has an extensive network of cycle paths, making cycling safe and popular. 33% of journeys within the city are by bicycle, more than any other mode of transport.[47] (Cars, for example, account for 30% of trips.) Bicycles are used by young and old people, and by individuals and families. They are mostly traditional, upright, steel-framed bicycles, with few gears. There are also barrow bikes, for carrying shopping or small children. In 2014, the City Council decided to build the world's largest bicycle parking station near the Central Railway Station. This three-floor structure, which cost an estimated 48 million euros, holds 12,500 bicycles and opened on August 19, 2019.[48]
85
+
86
+ Utrecht is well-connected to the Dutch road network. Two of the most important major roads serve the city of Utrecht: the A12 and A2 motorways connect Amsterdam, Arnhem, The Hague and Maastricht, as well as Belgium and Germany. Other major motorways in the area are the Almere–Breda A27 and the Utrecht–Groningen A28.[49] Due to the increasing traffic and the ancient city plan, traffic congestion is a common phenomenon in and around Utrecht, causing elevated levels of air pollutants. This has led to a passionate debate in the city about the best way to improve the city's air quality.
87
+
88
+ Utrecht has an industrial port located on the Amsterdam-Rijnkanaal.[50] The container terminal has a capacity of 80,000 containers a year. In 2003, the port facilitated the transport of four million tons of cargo; mostly sand, gravel, fertiliser and fodder.[51] Additionally, some tourist boat trips are organised from various places on the Oudegracht; and the city is connected to touristic shipping routes through sluices.[52][53][54]
89
+
90
+ Production industry constitutes a small part of the economy of Utrecht.
91
+ The economy of Utrecht depends for a large part on the several large institutions located in the city. It is the centre of the Dutch railroad network and the location of the head office of Nederlandse Spoorwegen. ProRail is headquartered in De Inktpot [nl] (The Inkwell) – the largest brick building in the Netherlands[55] (the "UFO" featured on its façade stems from an art program in 2000). Rabobank, a large bank, has its headquarters in Utrecht.[56]
92
+
93
+ Utrecht hosts several large institutions of higher education. The most prominent of these is Utrecht University (est. 1636), the largest university of the Netherlands with 30,449 students (as of 2012). The university is partially based in the inner city as well as in the Uithof campus area, to the east of the city. According to Shanghai Jiaotong University's university ranking in 2014, it is the 57th best university in the world.[57] Utrecht also houses the much smaller University of Humanistic Studies, which has about 400 students.[58]
94
+
95
+ Utrecht is home of one of the locations of TIAS School for Business and Society, focused on post-experience management education and the largest management school of its kind in the Netherlands. In 2008, its executive MBA program was rated the 24th best program in the world by the Financial Times.[59]
96
+
97
+ Utrecht is also home to two other large institutions of higher education: the vocational university Hogeschool Utrecht (37,000 students),[60] with locations in the city and the Uithof campus; and the HKU Utrecht School of the Arts (3,000 students).
98
+
99
+ There are many schools for primary and secondary education, allowing parents to select from different philosophies and religions in the school as is inherent in the Dutch school system.
100
+
101
+ Utrecht city has an active cultural life, and in the Netherlands is second only to Amsterdam.[9] There are several theatres and theatre companies. The 1941 main city theatre was built by Dudok. In addition to theatres, there is a large number of cinemas including three arthouse cinemas. Utrecht is host to the international Early Music Festival (Festival Oude Muziek, for music before 1800) and the Netherlands Film Festival. The city has an important classical music hall Vredenburg (1979 by Herman Hertzberger). Its acoustics are considered among the best of the 20th-century original music halls.[citation needed] The original Vredenburg music hall has been redeveloped as part of the larger station area redevelopment plan and in 2014 gained additional halls that allowed its merger with the rock club Tivoli and the SJU jazzpodium. There are several other venues for music throughout the city. Young musicians are educated in the conservatory, a department of the Utrecht School of the Arts. There is a specialised museum of automatically playing musical instruments.
102
+
103
+ There are many art galleries in Utrecht, as well as several foundations that support art and artists. Artists are trained at the Utrecht School of the Arts. The Centraal Museum has many exhibitions on the arts, including a permanent exhibition on the works of Utrecht resident illustrator Dick Bruna, who is best known for creating Miffy ("Nijntje" in Dutch). BAK, basis voor actuele kunst, offers contemporary art exhibitions and public events, as well as a fellowship program for practitioners involved in contemporary arts, theory and activism. Although street art is illegal in Utrecht, the Utrechtse Kabouter, a picture of a gnome with a red hat, became a common sight in 2004.[61] Utrecht also houses one of the landmarks of modern architecture, the 1924 Rietveld Schröder House, which is a UNESCO World Heritage Site.
+
+ Every Saturday, a paviour adds another letter to The Letters of Utrecht, an endless poem in the cobblestones of the Oude Gracht. With the Letters, Utrecht has a social sculpture: a growing monument created for the benefit of future generations.
+
+ To promote culture, the city of Utrecht organizes cultural Sundays. During a thematic Sunday, several organisations create a program that is open to everyone for free or for a much-reduced admission fee. There are also initiatives for amateur artists. The city subsidises an organisation for amateur arts education aimed at all inhabitants (Utrechts Centrum voor de Kunsten), as does the university for its staff and students. Additionally, there are several private initiatives. The city council provides discount coupons, valid with many of these initiatives, to inhabitants who receive welfare.
+
+ In 2017, Utrecht was named a UNESCO City of Literature.
+
+ Utrecht is home to the premier league (professional) football club FC Utrecht, which plays in Stadium Nieuw Galgenwaard. It is also the home of SV Kampong, the largest amateur sports club in the Netherlands (4,500 members).[62] Kampong features field hockey, association football, cricket, tennis, squash and boules. Kampong's top men's and women's hockey squads play in the highest Dutch hockey league, the Rabohoofdklasse. Utrecht is also home to baseball and softball club UVV, which plays in the highest Dutch baseball league, de Hoofdklasse. Utrecht's waterways are used by several rowing clubs. Viking is a large club open to the general public, and the student clubs Orca and Triton compete in the Varsity each year.
+
+ In July 2013, Utrecht hosted the European Youth Olympic Festival, in which more than 2,000 young athletes competed in nine different Olympic sports. In July 2015, Utrecht hosted the Grand Départ and first stage of the Tour de France.[63]
+
+ Utrecht has a number of museums, large and small. Many of them are located in the southern part of the old town, the Museumkwartier.
+
+ The city has several music venues, such as TivoliVredenburg, Tivoli De Helling, ACU, Moira, EKKO, DB's and RASA. Utrecht hosts the yearly Utrecht Early Music Festival (Festival Oude Muziek),[72] and the Jaarbeurs hosts Trance Energy. Every summer there used to be the Summer Darkness festival, which celebrated goth culture and music.[73] In November the Le Guess Who? festival, focused on indie rock, art rock and experimental rock, takes place in many of the city's venues.
+
+ There are two main theatres in the city, the Theater Kikker[74] and the Stadsschouwburg Utrecht.[75] De Parade, a travelling theatre festival, performs in Utrecht in summer. The city also hosts the yearly Festival a/d Werf, which offers a selection of contemporary international theatre, together with visual arts, public art and music.
+
+ Over the centuries, many famous people have been born or raised in Utrecht.
+ Among the most famous Utrechters are:
+
+ Utrecht is twinned with:
en/5892.html.txt ADDED
@@ -0,0 +1,110 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
+
+
+ Vaccination is the administration of a vaccine to help the immune system develop protection from a disease. Vaccines contain a microorganism or virus in a weakened, live or killed state, or proteins or toxins from the organism. In stimulating the body's adaptive immunity, they help prevent sickness from an infectious disease. When a sufficiently large percentage of a population has been vaccinated, herd immunity results. The effectiveness of vaccination has been widely studied and verified.[1][2][3] Vaccination is the most effective method of preventing infectious diseases;[4][5][6][7] widespread immunity due to vaccination is largely responsible for the worldwide eradication of smallpox and the elimination of diseases such as polio and tetanus from much of the world.
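+
+ As a brief aside on why high coverage matters (a standard epidemiological approximation, offered here as background and not drawn from this article's sources), the herd immunity threshold V_c, the fraction of a population that must be immune to stop sustained transmission, can be estimated from a disease's basic reproduction number R_0:
+
+     V_c = 1 - \frac{1}{R_0}
+
+ For a disease as contagious as measles, with R_0 of roughly 12 to 18, this gives a threshold of about 92-94%, which is why very high vaccination coverage is needed before herd immunity takes hold.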
+
+ The first disease people tried to prevent by inoculation was most likely smallpox, with the first recorded use occurring in the 16th century in China.[8] It was also the first disease for which a vaccine was produced.[9][10] Although at least six people had used the same principles years earlier, the smallpox vaccine was invented in 1796 by English physician Edward Jenner. He was the first to publish evidence that it was effective and to provide advice on its production.[11] Louis Pasteur furthered the concept through his work in microbiology. The immunization was called vaccination because it was derived from a virus affecting cows (Latin: vacca 'cow').[9][11] Smallpox was a contagious and deadly disease, causing the deaths of 20–60% of infected adults and over 80% of infected children.[12] By the time smallpox was finally eradicated in 1979, it had already killed an estimated 300–500 million people in the 20th century.[13][14][15]
+
+ Vaccination and immunization have a similar meaning in everyday language. This is distinct from inoculation, which uses unweakened live pathogens. Vaccination efforts have been met with some reluctance on scientific, ethical, political, medical safety, and religious grounds, although no major religions oppose vaccination, and some consider it an obligation due to the potential to save lives.[16] In the United States, people may receive compensation for alleged injuries under the National Vaccine Injury Compensation Program. Early success brought widespread acceptance, and mass vaccination campaigns have greatly reduced the incidence of many diseases in numerous geographic regions.
+
+ Vaccines are a way of artificially activating the immune system to protect against infectious disease. The activation occurs through priming the immune system with an immunogen. Stimulating immune responses with an infectious agent is known as immunization. Vaccination includes various ways of administering immunogens.[17]
+
+ Most vaccines are administered before a patient has contracted a disease, to help increase future protection. However, some vaccines are administered after the patient has already contracted a disease. Vaccines given after exposure to smallpox are reported to offer some protection from disease or to reduce its severity.[18] The first rabies immunization was given by Louis Pasteur to a child after he was bitten by a rabid dog. Since its discovery, the rabies vaccine has been proven effective in preventing rabies in humans when administered several times over 14 days along with rabies immune globulin and wound care.[19] Other examples include experimental AIDS, cancer[20] and Alzheimer's disease vaccines.[21] Such immunizations aim to trigger an immune response more rapidly and with less harm than natural infection.[22]
+
+ Most vaccines are given by injection as they are not absorbed reliably through the intestines. Live attenuated polio, rotavirus, some typhoid, and some cholera vaccines are given orally to produce immunity in the bowel. While vaccination provides a lasting effect, it usually takes several weeks to develop. This differs from passive immunity (the transfer of antibodies, such as in breastfeeding), which has immediate effect.[23]
+
+ A vaccine failure is when an organism contracts a disease in spite of being vaccinated against it. Primary vaccine failure occurs when an organism's immune system does not produce antibodies when first vaccinated. Vaccines can also fail when several doses are given yet fail to produce an immune response. The term "vaccine failure" does not necessarily imply that the vaccine is defective; most vaccine failures simply result from individual variation in immune response.[24]
+
+ The term inoculation is often used interchangeably with vaccination. However, the terms are not synonymous. Dr Byron Plant explains: "Vaccination is the more commonly used term, which actually consists of a 'safe' injection of a sample taken from a cow suffering from cowpox... Inoculation, a practice probably as old as the disease itself, is the injection of the variola virus taken from a pustule or scab of a smallpox sufferer into the superficial layers of the skin, commonly on the upper arm of the subject. Often inoculation was done 'arm-to-arm' or, less effectively, 'scab-to-arm'..." Inoculation oftentimes caused the patient to become infected with smallpox, and in some cases the infection turned into a severe case.[25][26]
+
+ Confirmed applications of inoculation for smallpox happened in China in the 1550s.
+
+ Vaccinations began in the 18th century with the work of Edward Jenner and the smallpox vaccine.[27][28][29]
+
+ Just like any medication or procedure, no vaccine can be 100% safe or effective for everyone, because each person's body can react differently.[30][31] While minor side effects, such as soreness or low-grade fever, are relatively common, serious side effects are very rare, occurring in about 1 out of every 100,000 vaccinations, and typically involve allergic reactions that can cause hives or difficulty breathing.[32][33] However, vaccines are the safest they have ever been, and each vaccine undergoes rigorous clinical trials to ensure its safety and efficacy before FDA approval.[34] Prior to human testing, vaccines are run through computer algorithms to model how they will interact with the immune system and are tested on cells in culture.[32][34] During the next round of testing, researchers study vaccines in animals, including mice, rabbits, guinea pigs, and monkeys.[32] Vaccines that pass each of these stages of testing are then approved by the FDA to start a three-phase series of human testing, advancing to higher phases only if they are deemed safe and effective at the previous phase. The people in these trials participate voluntarily and are required to prove they understand the purpose of the study and the potential risks.[34] During phase I trials, a vaccine is tested in a group of about 20 people with the primary goal of assessing the vaccine's safety.[32] Phase II trials expand the testing to include 50 to several hundred people. During this stage, the vaccine's safety continues to be evaluated and researchers also gather data on the effectiveness and the ideal dose of the vaccine.[32] Vaccines determined to be safe and efficacious then advance to phase III trials, which focus on the efficacy of the vaccine in hundreds to thousands of volunteers. This phase can take several years to complete, and researchers use this opportunity to compare the vaccinated volunteers to those who have not been vaccinated, to highlight any true reactions to the vaccine that occur.[34]
+ If a vaccine passes all of the phases of testing, the manufacturer can then apply for licensure of the vaccine through the FDA. Before the FDA approves use in the general public, it extensively reviews the results of the clinical trials, safety tests, purity tests, and manufacturing methods, and establishes that the manufacturer itself is up to government standards in many other areas.[32] However, safety testing of the vaccines never ends, even after FDA approval: the FDA continues to monitor the manufacturing protocols, batch purity, and the manufacturing facility itself. Additionally, most vaccines also undergo phase IV trials, which monitor the safety and efficacy of vaccines in tens of thousands of people, or more, across many years.[32] This allows delayed or very rare reactions to be detected and evaluated.
+
+ The Centers for Disease Control and Prevention (CDC) has compiled a list of vaccines and their possible side effects.[33] The risk of side effects varies from one vaccine to the next, but below are examples of side effects and their approximate rate of occurrence with the diphtheria, tetanus, and acellular pertussis (DTaP) vaccine, a common childhood vaccine.[33]
+
+ Mild side effects (common)
+
+ Moderate side effects (uncommon)
+
+ Severe side effects (rare)
+
+ The ingredients of vaccines can vary greatly from one to the next and no two vaccines are the same. The CDC has compiled a list of vaccines and their ingredients that is readily accessible on their website.[35]
+
+ Aluminium is an adjuvant ingredient in some vaccines. An adjuvant is a type of ingredient that is used to help the body's immune system create a stronger immune response after receiving the vaccination.[36] Aluminium is used in salt form, in the following compounds: aluminium hydroxide, aluminium phosphate, and aluminium potassium sulfate. In chemistry, a salt is an ionic compound; a familiar example is table salt, made of Na+ (sodium) and Cl− (chloride) ions. For a given element, the ionic form has different properties from the elemental form. Although it is possible to have aluminium toxicity, aluminium salts have been used effectively and safely since the 1930s, when they were first used with the diphtheria and tetanus vaccines.[36] Although there is a small increase in the chance of having a local reaction to a vaccine with an aluminium salt (redness, soreness, swelling), there is no increased risk of any serious reactions.[37][38]
+
+ Certain vaccines contain a compound called thimerosal, which is an organic compound that contains mercury. Mercury is commonly found in two forms that differ by the number of carbon atoms in the chemical structure. Methylmercury (one carbon) is found in fish and is the form that people usually ingest, while ethylmercury (two carbons) is the form found in thimerosal.[39] Although the two have similar chemical structures, they do not have the same chemical properties and interact with the human body differently. Ethylmercury is cleared from the body faster than methylmercury and is less likely to cause toxic effects.[39]
+
+ Thimerosal is used to prevent the growth of bacteria and fungi in vials that contain more than one dose of a vaccine.[39] This helps reduce the risk of potential infections or serious illness that could occur from contamination of a vaccine vial. Although there is a small increase in the risk of injection-site redness and swelling with vaccines containing thimerosal, there is no increased risk of serious harm, including autism.[40][41] Even though evidence supports the safety and efficacy of thimerosal in vaccines, thimerosal was removed from childhood vaccines in the United States in 2001 as a precaution.[39]
+
+ The administration protocols, efficacy, and adverse events of vaccines are monitored by organizations of the federal government, including the CDC and FDA, and independent agencies are constantly re-evaluating vaccine practices.[42][51] As with all medications, vaccine use is determined by public health research, surveillance, and reporting to governments and the public.[42][51]
+
+ The World Health Organization (WHO) estimates that vaccination averts 2–3 million deaths per year (in all age groups), and that up to 1.5 million children die each year due to diseases that could have been prevented by vaccination.[54] It estimates that 29% of deaths of children under five years old in 2013 were vaccine-preventable. Developing parts of the world face the additional challenge of decreased availability of resources and vaccines; countries such as those in Sub-Saharan Africa cannot afford to provide the full range of childhood vaccinations.[55]
+
+ Vaccines have led to major decreases in the prevalence of infectious diseases in the United States. In 2007, studies regarding the effectiveness of vaccines on mortality and morbidity rates of those exposed to various diseases showed almost 100% decreases in death rates and about a 90% decrease in exposure rates.[56] This has allowed specific organizations and states to adopt standards for recommended early childhood vaccinations. Lower-income families who cannot otherwise afford vaccinations are supported by these organizations and by specific government laws; the Vaccines for Children Program and the Social Security Act are two major players in supporting lower socioeconomic groups.[57][58]
+
+ In 2000, the CDC declared that measles had been eliminated in the US (defined as no disease transmission for 12 continuous months).[59] However, with the growing anti-vaccine movement, the US has seen a resurgence of certain vaccine-preventable diseases. The number of measles cases has continued to rise in recent years, with a total of 17 outbreaks in 2018 and 465 individual cases in 2019 (as of April 4, 2019), putting the virus's elimination status in the US at risk.[60]
+
+ It is known that the process of inoculation against smallpox was used by Chinese physicians in the 10th century.[61] The mention of inoculation in the Sact'eya Grantham, an Ayurvedic text, was noted by the French scholar Henri Marie Husson in the journal Dictionaire des sciences médicales.[62] However, the idea that inoculation originated in India has been challenged, as few of the ancient Sanskrit medical texts describe the process of inoculation.[63] Accounts of inoculation against smallpox in China can be found as early as the late 10th century, and it was reportedly widely practised in China in the reign of the Longqing Emperor (r. 1567–72) during the Ming Dynasty (1368–1644).[64] Two reports on the Chinese practice of inoculation were received by the Royal Society in London in 1700: one by Dr. Martin Lister, who received a report from an employee of the East India Company stationed in China, and another by Clopton Havers.[65] According to Voltaire (1742), the Turks derived their use of inoculation from neighbouring Circassia. Voltaire does not speculate on where the Circassians derived their technique, though he reports that the Chinese had practised it "these hundred years".[66]
+
+ The Greek physicians Emmanuel Timonis (1669–1720) from the island of Chios and Jacob Pylarinos (1659–1718) from Cephalonia practised smallpox inoculation at Constantinople at the beginning of the 18th century[67] and published their work in Philosophical Transactions of the Royal Society in 1714.[68][69] This kind of inoculation and other forms of variolation were introduced into England by Lady Montagu, a famous English letter-writer and wife of the English ambassador at Istanbul between 1716 and 1718, who almost died from smallpox as a young adult and was physically scarred by it. Inoculation was adopted both in England and in America nearly half a century before Jenner's famous smallpox vaccine of 1796,[70] but the death rate of about 2% from this method meant that it was mainly used during dangerous outbreaks of the disease and remained controversial.[61]
+ It was noticed during the 18th century that people who had suffered from the less virulent cowpox were immune to smallpox. The first recorded use of this idea was by Benjamin Jesty, a farmer at Yetminster in Dorset, who had suffered the disease himself and deliberately transmitted cowpox to his own family in 1774; when his sons were later inoculated with smallpox in 1789, they did not develop even a mild case.
+
+ It was Edward Jenner, a doctor in Berkeley in Gloucestershire, who established the procedure by introducing material from a cowpox vesicle on Sarah Nelmes, a milkmaid, into the arm of a boy named James Phipps. Two months later he inoculated the boy with smallpox and the disease did not develop. In 1798 Jenner published An Inquiry into the Causes and Effects of the Variolae Vacciniae which created widespread interest. He distinguished 'true' and 'spurious' cowpox (which did not give the desired effect) and developed an "arm-to-arm" method of propagating the vaccine from the vaccinated individual's pustule. Early attempts at confirmation were confounded by contamination with smallpox, but despite controversy within the medical profession and religious opposition to the use of animal material, by 1801 his report was translated into six languages and over 100,000 people were vaccinated.[61] The term vaccination was coined in 1800 by the surgeon Richard Dunning in his text Some observations on vaccination.[71]
+
+ Since then vaccination campaigns have spread throughout the globe, sometimes prescribed by law or regulations (See Vaccination Acts). Vaccines are now used against a wide variety of diseases. Louis Pasteur further developed the technique during the 19th century, extending its use to killed agents protecting against anthrax and rabies. The method Pasteur used entailed treating the agents for those diseases so they lost the ability to infect, whereas inoculation was the hopeful selection of a less virulent form of the disease, and Jenner's vaccination entailed the substitution of a different and less dangerous disease. Pasteur adopted the name vaccine as a generic term in honour of Jenner's discovery.
+
+ Maurice Hilleman was the most prolific vaccine inventor, developing successful vaccines for measles, mumps, hepatitis A, hepatitis B, chickenpox, meningitis, pneumonia and Haemophilus influenzae.[72]
+
+ In modern times, the first vaccine-preventable disease targeted for eradication was smallpox. The World Health Organization (WHO) coordinated this global eradication effort. The last naturally occurring case of smallpox occurred in Somalia in 1977. In 1988, the governing body of WHO targeted polio for eradication by 2000. Although the target was missed, cases have been reduced by 99.99%.
+
+ In 2000, the Global Alliance for Vaccines and Immunization was established to strengthen routine vaccinations and introduce new and under-used vaccines in countries with a per capita GDP of under US $1000.
+
+ To eliminate the risk of outbreaks of some diseases, at various times governments and other institutions have employed policies requiring vaccination for all people. For example, an 1853 law required universal vaccination against smallpox in England and Wales, with fines levied on people who did not comply.[73] Common contemporary U.S. vaccination policies require that children receive recommended vaccinations before entering public school.[74]
+
+ Beginning with early vaccination in the nineteenth century, these policies were resisted by a variety of groups, collectively called antivaccinationists, who object on scientific, ethical, political, medical safety, religious, and other grounds.[75] Common objections are that vaccinations do not work, that compulsory vaccination constitutes excessive government intervention in personal matters, or that the proposed vaccinations are not sufficiently safe.[76] Many modern vaccination policies allow exemptions for people who have compromised immune systems, allergies to the components used in vaccinations or strongly held objections.[77]
+
+ In countries with limited financial resources, limited vaccination coverage results in greater morbidity and mortality due to infectious disease.[78] More affluent countries are able to subsidize vaccinations for at-risk groups, resulting in more comprehensive and effective coverage. In Australia, for example, the Government subsidizes vaccinations for seniors and indigenous Australians.[79]
+
+ Public Health Law Research, an independent US based organization, reported in 2009 that there is insufficient evidence to assess the effectiveness of requiring vaccinations as a condition for specified jobs as a means of reducing incidence of specific diseases among particularly vulnerable populations;[80] that there is sufficient evidence supporting the effectiveness of requiring vaccinations as a condition for attending child care facilities and schools;[81] and that there is strong evidence supporting the effectiveness of standing orders, which allow healthcare workers without prescription authority to administer vaccine as a public health intervention.[82]
+
+ La vaccine or Le préjugé vaincu by Louis-Léopold Boilly, 1807
+
+ A doctor vaccinating a small girl, other girls with loosened blouses wait their turn apprehensively by Lance Calkin
+
+ German caricature showing von Behring extracting the serum with a tap.
+
+ Les Malheurs de la Vaccine (The history of vaccination seen from an economic point of view: A pharmacy up for sale; an outmoded inoculist selling his premises; Jenner, to the left, pursues a skeleton with a lancet)
+
+ Allegations of vaccine injuries have appeared in litigation in the U.S. in recent decades. Some families have won substantial awards from sympathetic juries, even though most public health officials have said that the claims of injuries were unfounded.[83] In response, several vaccine makers stopped production, which the US government believed could be a threat to public health, so laws were passed to shield manufacturers from liabilities stemming from vaccine injury claims.[83] The safety and side effects of multiple vaccines have been tested in order to uphold the viability of vaccines as a barrier against disease. The influenza vaccine was tested in controlled trials and proven to have negligible side effects, comparable to those of a placebo.[84] Some families' concerns may have arisen from social beliefs and norms that lead them to mistrust or refuse vaccinations, contributing to unfounded reports of side effects.[85]
+
+ Opposition to vaccination, from a wide array of vaccine critics, has existed since the earliest vaccination campaigns.[76] It is widely accepted that the benefits of preventing serious illness and death from infectious diseases greatly outweigh the risks of rare serious adverse effects following immunization.[87] Some studies have claimed to show that current vaccine schedules increase infant mortality and hospitalization rates;[88][89] those studies, however, are correlational in nature and therefore cannot demonstrate causal effects, and the studies have also been criticized for cherry picking the comparisons they report, for ignoring historical trends that support an opposing conclusion, and for counting vaccines in a manner that is "completely arbitrary and riddled with mistakes".[90][91]
+
+ Various disputes have arisen over the morality, ethics, effectiveness, and safety of vaccination. Some vaccination critics say that vaccines are ineffective against disease[92] or that vaccine safety studies are inadequate.[92] Some religious groups do not allow vaccination,[93] and some political groups oppose mandatory vaccination on the grounds of individual liberty.[76] In response, concern has been raised that spreading unfounded information about the medical risks of vaccines increases rates of life-threatening infections, not only in the children whose parents refused vaccinations, but also in those who cannot be vaccinated due to age or immunodeficiency, who could contract infections from unvaccinated carriers (see herd immunity).[94] Some parents believe vaccinations cause autism, although there is no scientific evidence to support this idea.[95] In 2011, Andrew Wakefield, a leading proponent of the theory that MMR vaccine causes autism, was found to have been financially motivated to falsify research data and was subsequently stripped of his medical license.[96] In the United States people who refuse vaccines for non-medical reasons have made up a large percentage of the cases of measles, and subsequent cases of permanent hearing loss and death caused by the disease.[97]
+
+ Many parents do not vaccinate their children because they feel that diseases are no longer present due to vaccination.[98] This is a false assumption, since diseases held in check by immunization programs can and do still return if immunization is dropped. These pathogens could possibly infect vaccinated people, due to the pathogen's ability to mutate when it is able to live in unvaccinated hosts.[99][100] In 2010, California had the worst whooping cough outbreak in 50 years. A possible contributing factor was parents choosing not to vaccinate their children.[101] There was also a case in Texas in 2012 where 21 members of a church contracted measles because they chose not to immunize.[101]
+
+ The notion of a connection between vaccines and autism originated in a 1998 paper published in The Lancet whose lead author was the physician Andrew Wakefield. The study concluded that eight of the 12 patients (ages 3–10) developed behavioral symptoms consistent with autism following the MMR vaccine (an immunization against measles, mumps, and rubella).[102] The article was widely criticized for lack of scientific rigor, and it was proven that Wakefield falsified data in the article.[102] In 2004, 10 of the original 12 co-authors (not including Wakefield) published a retraction of the article and stated the following: "We wish to make it clear that in this paper no causal link was established between MMR vaccine and autism as the data were insufficient."[103] In 2010, The Lancet officially retracted the article, stating that several elements of it were incorrect, including falsified data and protocols. The Lancet article sparked a much greater anti-vaccination movement, particularly in the United States. Even though the article was fraudulent and was retracted, 1 in 4 parents still believe vaccines can cause autism.[104]
+
+ To date, all validated and definitive studies have shown that there is no correlation between vaccines and autism.[105] One study published in 2015 confirms there is no link between autism and the MMR vaccine: infants enrolled in a health plan that included an MMR vaccine were followed continuously until they reached 5 years of age, and no link between the vaccine and autism was found, whether the children had a normally developing sibling or a sibling with autism (which puts them at higher risk of developing autism themselves).[106]
+
+ It can be difficult to correct human memory when wrong information is received before correct information. Even though much evidence contradicts the Wakefield study, and most of the co-authors have published retractions, many people continue to believe it and to base decisions on it, as it still lingers in their memory. Studies and research are being conducted to determine effective ways to correct misinformation in the public memory.[107] Since the Wakefield study was released over 20 years ago, it may prove easier for newer generations to be properly educated on vaccinations. A very small percentage of people have adverse reactions to vaccines, and if there is a reaction, it is often mild. These reactions do not include autism.
+
+ A vaccine may be administered orally, by injection (intramuscular, intradermal, subcutaneous), by puncture, transdermally or intranasally.[108] Several recent clinical trials have aimed to deliver vaccines via mucosal surfaces, to be taken up by the common mucosal immune system and thus avoid the need for injections.[109]
+
+ Health is often used as one of the metrics for determining the economic prosperity of a country, because healthier individuals are generally better suited to contributing to a country's economic development than the sick.[110] There are many reasons for this. A person who is vaccinated against influenza not only protects himself from the risk of influenza, but simultaneously prevents himself from infecting those around him.[111] This leads to a healthier society, which allows individuals to be more economically productive. Children are consequently able to attend school more often and have been shown to do better academically. Similarly, adults are able to work more often, more efficiently, and more effectively.[110][112]
+
+ On the whole, vaccinations induce a net benefit to society. Vaccines are often noted for their high return on investment (ROI) values, especially when considering the long-term effects.[113] Some vaccines have much higher ROI values than others. Studies have shown that the ratios of vaccination benefits to costs can differ substantially: from 27:1 for diphtheria/pertussis, to 13.5:1 for measles, 4.76:1 for varicella, and 0.68–1.1:1 for pneumococcal conjugate.[111] Some governments choose to subsidize the costs of vaccines, due to the high ROI values attributed to vaccinations. The United States subsidizes over half of all vaccines for children, the full course of which costs between $400 and $600 per child. Although most children do get vaccinated, the adult population of the USA is still below the recommended immunization levels. Many factors contribute to this issue. Many adults who have other health conditions are unable to be safely immunized, whereas others opt not to be immunized for the sake of private financial benefits. Many Americans are underinsured and, as such, are required to pay for vaccines out of pocket. Others are responsible for paying high deductibles and co-pays. Although vaccinations usually induce long-term economic benefits, many governments struggle to pay the high short-term costs associated with labor and production. Consequently, many countries neglect to provide such services.[111]
+
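+ As a rough illustration of what such benefit-cost ratios imply (a minimal sketch; the ratios are the figures quoted above, but the programme cost is an invented placeholder, not a number from the cited study):
+
+     # Python sketch: net benefit implied by a published benefit-cost ratio.
+     # The ratios below are the ones quoted in the text; the programme cost
+     # is a hypothetical placeholder used only for illustration.
+     ratios = {
+         "diphtheria/pertussis": 27.0,
+         "measles": 13.5,
+         "varicella": 4.76,
+         "pneumococcal conjugate": 1.1,
+     }
+     cost = 1_000_000  # hypothetical programme cost, in dollars
+     for vaccine, ratio in ratios.items():
+         net = ratio * cost - cost  # benefit minus cost
+         print(f"{vaccine}: net benefit ${net:,.0f} per ${cost:,} spent")
+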
+ The Coalition for Epidemic Preparedness Innovations published a study in The Lancet in 2018 which estimated the costs of developing vaccines for diseases that could escalate into global humanitarian crises. It focused on 11 diseases which cause relatively few deaths at present and primarily strike the poor, but which have been highlighted as pandemic risks:
+
+ They estimated that it would cost between $2.8 billion and $3.7 billion to develop at least one vaccine for each of them. This should be set against the potential cost of an outbreak. The 2003 SARS outbreak in East Asia cost $54 billion.[114]
+
+ Dr Jenner performing his first vaccination on James Phipps, a boy of age 8. May 14, 1796. Painting by Ernest Board (early 20th century)
+
+ James Gillray's The Cow-Pock—or—the Wonderful Effects of the New Inoculation!, an 1802 caricature of vaccinated patients who feared it would make them sprout cowlike appendages
+
+ Poster for vaccination against smallpox
+
en/5893.html.txt ADDED
@@ -0,0 +1,265 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
+
+
+ Cattle, or cows (female) and bulls (male), are the most common type of large domesticated ungulates. They are a prominent modern member of the subfamily Bovinae, are the most widespread species of the genus Bos, and are most commonly classified collectively as Bos taurus.
+
+ Cattle are commonly raised as livestock for meat (beef or veal, see beef cattle), for milk (see dairy cattle), and for hides, which are used to make leather. They are used as riding animals and draft animals (oxen or bullocks, which pull carts, plows and other implements). Another product of cattle is their dung, which can be used to create manure or fuel. In some regions, such as parts of India, cattle have significant religious meaning. Cattle, mostly small breeds such as the Miniature Zebu, are also kept as pets.
+
+ Around 10,500 years ago, cattle were domesticated from as few as 80 progenitors in central Anatolia, the Levant and Western Iran.[1] According to an estimate from 2011, there are 1.4 billion cattle in the world.[2] In 2009, cattle became one of the first livestock animals to have a fully mapped genome.[3]
+
+ Cattle were originally identified as three separate species: Bos taurus, the European or "taurine" cattle (including similar types from Africa and Asia); Bos indicus, the zebu; and the extinct Bos primigenius, the aurochs. The aurochs is ancestral to both zebu and taurine cattle.[4] These have been reclassified as one species, Bos taurus, with three subspecies: Bos taurus primigenius, Bos taurus indicus, and Bos taurus taurus.[5][6]
+
+ Complicating the matter is the ability of cattle to interbreed with other closely related species. Hybrid individuals and even breeds exist, not only between taurine cattle and zebu (such as the sanga cattle, Bos taurus africanus), but also between one or both of these and some other members of the genus Bos – yaks (the dzo or yattle[7]), banteng, and gaur. Hybrids such as the beefalo breed can even occur between taurine cattle and either species of bison, leading some authors to consider them part of the genus Bos, as well.[8] The hybrid origin of some types may not be obvious – for example, genetic testing of the Dwarf Lulu breed, the only taurine-type cattle in Nepal, found them to be a mix of taurine cattle, zebu, and yak.[9] However, cattle cannot be successfully hybridized with more distantly related bovines such as water buffalo or African buffalo.
+
+ The aurochs originally ranged throughout Europe, North Africa, and much of Asia. In historical times, its range became restricted to Europe, and the last known individual died in Mazovia, Poland, in about 1627.[10] Breeders have attempted to recreate cattle of similar appearance to aurochs by crossing traditional types of domesticated cattle, creating the Heck cattle breed.
+
+ Cattle did not originate as the term for bovine animals. It was borrowed from Anglo-Norman catel, itself from medieval Latin capitale 'principal sum of money, capital', itself derived in turn from Latin caput 'head'. Cattle originally meant movable personal property, especially livestock of any kind, as opposed to real property (the land, which also included wild or small free-roaming animals such as chickens—they were sold as part of the land).[11] The word is a variant of chattel (a unit of personal property) and closely related to capital in the economic sense.[12] The term replaced earlier Old English feoh 'cattle, property', which survives today as fee (cf. German: Vieh, Dutch: vee, Gothic: faihu).
+
+ The word "cow" came via Anglo-Saxon cū (plural cȳ), from Common Indo-European gʷōus (genitive gʷowés) = "a bovine animal", compare Persian: gâv, Sanskrit: go-, Welsh: buwch.[13] The plural cȳ became ki or kie in Middle English, and an additional plural ending was often added, giving kine, kien, but also kies, kuin and others. This is the origin of the now archaic English plural, "kine". The Scots language singular is coo or cou, and the plural is "kye".
+
+ In older English sources such as the King James Version of the Bible, "cattle" refers to livestock, as opposed to "deer" which refers to wildlife. "Wild cattle" may refer to feral cattle or to undomesticated species of the genus Bos. Today, when used without any other qualifier, the modern meaning of "cattle" is usually restricted to domesticated bovines.[14]
+
+ In general, the same words are used in different parts of the world, but with minor differences in the definitions. The terminology described here contrasts the differences in definition between the United Kingdom and other British-influenced parts of the world such as Canada, Australia, New Zealand, Ireland and the United States.[15]
+
+ "Cattle" can only be used in the plural and not in the singular: it is a plurale tantum.[26] Thus one may refer to "three cattle" or "some cattle", but not "one cattle". "One head of cattle" is a valid though periphrastic way to refer to one animal of indeterminate or unknown age and sex; otherwise no universally used single-word singular form of cattle exists in modern English, other than the sex- and age-specific terms such as cow, bull, steer and heifer. Historically, "ox" was not a sex-specific term for adult cattle, but generally this is now used only for working cattle, especially adult castrated males. The term is also incorporated into the names of other species, such as the musk ox and "grunting ox" (yak), and is used in some areas to describe certain cattle products such as ox-hide and oxtail.[27]
+
+ Cow is in general use as a singular for the collective cattle. The word cow is easy to use when a singular is needed and the sex is unknown or irrelevant—when "there is a cow in the road", for example. Further, any herd of fully mature cattle in or near a pasture is statistically likely to consist mostly of cows, so the term is probably accurate even in the restrictive sense. Other than the few bulls needed for breeding, the vast majority of male cattle are castrated as calves and are used as oxen or slaughtered for meat before the age of three years. Thus, in a pastured herd, any calves or herd bulls usually are clearly distinguishable from the cows due to distinctively different sizes and clear anatomical differences. Merriam-Webster and Oxford Living Dictionaries recognize the sex-nonspecific use of cow as an alternate definition,[28][29] whereas Collins and the OED do not.
+
+ Colloquially, more general nonspecific terms may denote cattle when a singular form is needed. Head of cattle is usually used only after a numeral. Australian, New Zealand and British farmers use the term beast or cattle beast. Bovine is also used in Britain. The term critter is common in the western United States and Canada, particularly when referring to young cattle.[30] In some areas of the American South (particularly the Appalachian region), where both dairy and beef cattle are present, an individual animal was once called a "beef critter", though that term is becoming archaic.
+
+ Cattle raised for human consumption are called beef cattle. Within the beef cattle industry in parts of the United States, the term beef (plural beeves) is still used in its archaic sense to refer to an animal of either sex. Cows of certain breeds that are kept for the milk they give are called dairy cows or milking cows (formerly milch cows). Most young male offspring of dairy cows are sold for veal, and may be referred to as veal calves.
+
+ The term dogies is used to describe orphaned calves in the context of ranch work in the American West, as in "Keep them dogies moving".[31] In some places, a cow kept to provide milk for one family is called a "house cow". Other obsolete terms for cattle include "neat" (this use survives in "neatsfoot oil", extracted from the feet and legs of cattle), and "beefing" (young animal fit for slaughter).
+
+ An onomatopoeic term for one of the most common sounds made by cattle is moo (also called lowing). There are a number of other sounds made by cattle, including calves bawling, and bulls bellowing. Bawling is most common for cows after weaning of a calf. The bullroarer makes a sound similar to a bull's territorial call.[32]
+
+ Cattle are large quadrupedal ungulate mammals with cloven hooves. Most breeds have horns, which can be as large as those of the Texas Longhorn or as small as a scur. Careful genetic selection has allowed polled (hornless) cattle to become widespread.
+
+ Cattle are ruminants, meaning their digestive system is highly specialized to allow the use of poorly digestible plants as food. Cattle have one stomach with four compartments, the rumen, reticulum, omasum, and abomasum, with the rumen being the largest compartment.
+ The reticulum, the smallest compartment, is known as the "honeycomb". The omasum's main function is to absorb water and nutrients from the digestible feed. The omasum is known as the "many plies". The abomasum is like the human stomach; this is why it is known as the "true stomach".
+
+ Cattle are known for regurgitating and re-chewing their food, known as cud chewing, like most ruminants. While the animal is feeding, the food is swallowed without being chewed and goes into the rumen for storage until the animal can find a quiet place to continue the digestion process. The food is regurgitated, a mouthful at a time, back up to the mouth, where the food, now called the cud, is chewed by the molars, grinding down the coarse vegetation to small particles. The cud is then swallowed again and further digested by specialized microorganisms in the rumen. These microbes are primarily responsible for decomposing cellulose and other carbohydrates into volatile fatty acids cattle use as their primary metabolic fuel. The microbes inside the rumen also synthesize amino acids from non-protein nitrogenous sources, such as urea and ammonia. As these microbes reproduce in the rumen, older generations die and their cells continue on through the digestive tract. These cells are then partially digested in the small intestines, allowing cattle to gain a high-quality protein source. These features allow cattle to thrive on grasses and other tough vegetation.
+
+ The gestation period for a cow is about nine months long. A newborn calf's size can vary among breeds, but a typical calf weighs 25 to 45 kg (55 to 99 lb). Adult size and weight vary significantly among breeds and sex. Steers are generally killed before reaching 750 kg (1,650 lb). Breeding stock may be allowed a longer lifespan, occasionally living as long as 25 years. The oldest recorded cow, Big Bertha, died at the age of 48 in 1993.
+
+ On farms it is very common to use artificial insemination (AI), a medically assisted reproduction technique consisting of the artificial deposition of semen in the female's genital tract.[33] It is used in cases where the spermatozoa cannot reach the fallopian tubes, or simply by choice of the owner of the animal. It consists of transferring previously collected and processed spermatozoa to the uterine cavity, with selection for morphologically normal and motile spermatozoa.
+
+ A cow's udder contains two pairs of mammary glands (commonly referred to as teats), creating four "quarters".[34] The front ones are referred to as fore quarters and the rear ones as rear quarters.[35]
+
+ Bulls become fertile at about seven months of age. Their fertility is closely related to the size of their testicles, and one simple test of fertility is to measure the circumference of the scrotum: a young bull is likely to be fertile once this reaches 28 centimetres (11 in); that of a fully adult bull may be over 40 centimetres (16 in).[36][37]
+
+ A bull has a fibro-elastic penis. Given the small amount of erectile tissue, there is little enlargement after erection. The penis is quite rigid when non-erect, and becomes even more rigid during erection. Protrusion is not affected much by erection, but more by relaxation of the retractor penis muscle and straightening of the sigmoid flexure.[38][39][40] Induced ovulation can be manipulated to produce farming benefits. For example, to synchronise ovulation of the cattle to benefit dairy farming.
+
+ The weight of adult cattle varies, depending on the breed. Smaller kinds, such as Dexter and Jersey adults, range between 272 and 454 kg (600 and 1,000 lb). Large Continental breeds, such as Charolais, Marchigiana, Belgian Blue and Chianina, range from 635 to 1,134 kg (1,400 to 2,500 lb) as adults. British breeds, such as Hereford, Angus, and Shorthorn, mature at between 454 and 907 kg (1,000 and 2,000 lb), occasionally higher, particularly with Angus and Hereford.[41] Bulls are larger than cows of the same breed by up to a few hundred kilograms. Chianina bulls can weigh up to 1,500 kg (3,300 lb); British bulls, such as Angus and Hereford, can weigh from as little as 907 kg (2,000 lb) to as much as 1,361 kg (3,000 lb).[citation needed]
+
+ The world record for the heaviest bull was 1,740 kg (3,840 lb), a Chianina named Donetto, when he was exhibited at the Arezzo show in 1955.[42] The heaviest steer was eight-year-old 'Old Ben', a Shorthorn/Hereford cross weighing in at 2,140 kg (4,720 lb) in 1910.[43]
+
+ In the United States, the average weight of beef cattle has steadily increased, especially since the 1970s, requiring the building of new slaughterhouses able to handle larger carcasses. New packing plants in the 1980s stimulated a large increase in cattle weights.[44] Before 1790, beef cattle averaged only 160 kg (350 lb) net, and weights climbed steadily thereafter.[45][46]
+
+ In laboratory studies, young cattle are able to memorize the locations of several food sources and retain this memory for at least 8 hours, although this declined after 12 hours.[47] Fifteen-month-old heifers learn more quickly than adult cows which have had either one or two calvings, but their longer-term memory is less stable.[48] Mature cattle perform well in spatial learning tasks and have a good long-term memory in these tests. Cattle tested in a radial arm maze are able to remember the locations of high-quality food for at least 30 days. Although they initially learn to avoid low-quality food, this memory diminishes over the same duration.[49] Under less artificial testing conditions, young cattle showed they were able to remember the location of feed for at least 48 days.[50] Cattle can make an association between a visual stimulus and food within 1 day—memory of this association can be retained for 1 year, despite a slight decay.[51]
+
+ Calves are capable of discrimination learning[52] and adult cattle compare favourably with small mammals in their learning ability in the Closed-field Test.[53]
+
+ They are also able to discriminate between familiar individuals, and among humans. Cattle can tell the difference between familiar and unfamiliar animals of the same species (conspecifics). Studies show they behave less aggressively toward familiar individuals when they are forming a new group.[54] Calves can also discriminate between humans based on previous experience, as shown by approaching those who handled them positively and avoiding those who handled them aversively.[55] Although cattle can discriminate between humans by their faces alone, they also use other cues such as the color of clothes when these are available.[56]
+
+ In audio play-back studies, calves prefer their own mother's vocalizations compared to the vocalizations of an unfamiliar mother.[57]
+
+ In laboratory studies using images, cattle can discriminate between images of the heads of cattle and other animal species.[58] They are also able to distinguish between familiar and unfamiliar conspecifics. Furthermore, they are able to categorize images as familiar and unfamiliar individuals.[54]
+
+ When mixed with other individuals, cloned calves from the same donor form subgroups, indicating that kin discrimination occurs and may be a basis of grouping behaviour. It has also been shown using images of cattle that both artificially inseminated and cloned calves have similar cognitive capacities of kin and non-kin discrimination.[59]
+
+ Cattle can recognize familiar individuals. Visual individual recognition is a more complex mental process than visual discrimination. It requires the recollection of the learned idiosyncratic identity of an individual that has been previously encountered and the formation of a mental representation.[60] By using 2-dimensional images of the heads of one cow (face, profiles, 3⁄4 views), all the tested heifers showed individual recognition of familiar and unfamiliar individuals from their own breed. Furthermore, almost all the heifers recognized unknown individuals from different breeds, although this was achieved with greater difficulty. Individual recognition was most difficult when the visual features of the breed being tested were quite different from the breed in the image, for example, the breed being tested had no spots whereas the image was of a spotted breed.[61]
+
+ Cattle use visual/brain lateralisation in their visual scanning of novel and familiar stimuli.[62] Domestic cattle prefer to view novel stimuli with the left eye, i.e. using the right brain hemisphere (similar to horses, Australian magpies, chicks, toads and fish) but use the right eye, i.e. using the left hemisphere, for viewing familiar stimuli.[63]
+
+ In cattle, temperament can affect production traits such as carcass and meat quality or milk yield as well as affecting the animal's overall health and reproduction. Cattle temperament is defined as "the consistent behavioral and physiological difference observed between individuals in response to a stressor or environmental challenge and is used to describe the relatively stable difference in the behavioral predisposition of an animal, which can be related to psychobiological mechanisms".[65] Generally, cattle temperament is assumed to be multidimensional. Five underlying categories of temperament traits have been proposed:[66]
+
+ In a study on Holstein–Friesian heifers learning to press a panel to open a gate for access to a food reward, the researchers also recorded the heart rate and behavior of the heifers when moving along the race towards the food. When the heifers made clear improvements in learning, they had higher heart rates and tended to move more vigorously along the race. The researchers concluded this was an indication that cattle may react emotionally to their own learning improvement.[67]
+
+ Negative emotional states are associated with a bias toward negative responses towards ambiguous cues in judgement tasks. After separation from their mothers, Holstein calves showed such a cognitive bias indicative of low mood.[68] A similar study showed that after hot-iron disbudding (dehorning), calves had a similar negative bias indicating that post-operative pain following this routine procedure results in a negative change in emotional state.[69]
+
+ In studies of visual discrimination, the position of the ears has been used as an indicator of emotional state.[54] When cattle are stressed other cattle can tell by the chemicals released in their urine.[70]
+
+ Cattle are very gregarious, and even short-term isolation is considered to cause severe psychological stress. When Aubrac and Friesian heifers are isolated, they increase their vocalizations and experience increased heart rate and plasma cortisol concentrations. These physiological changes are greater in Aubracs. When visual contact is re-instated, vocalizations rapidly decline, regardless of the familiarity of the returning cattle; however, heart rate decreases are greater if the returning cattle are familiar to the previously isolated individual.[71] Mirrors have been used to reduce stress in isolated cattle.[72]
+
+ Cattle use all of the five widely recognized sensory modalities. These can assist in some complex behavioural patterns, for example, in grazing behaviour. Cattle eat mixed diets, but when given the opportunity, show a partial preference of approximately 70% clover and 30% grass. This preference has a diurnal pattern, with a stronger preference for clover in the morning, and the proportion of grass increasing towards the evening.[73]
+
+ Vision is the dominant sense in cattle, and they obtain almost 50% of their information visually.[74]
+
+ Cattle are prey animals, and to assist predator detection, their eyes are located on the sides of the head rather than the front. This gives them a wide field of view of 330°, but limits binocular vision (and therefore stereopsis) to 30° to 50°, compared to 140° in humans.[54][75] This means they have a blind spot directly behind them. Cattle have good visual acuity,[54] but compared to humans, their visual accommodation is poor.[clarification needed][74]
+
+ Cattle have two kinds of color receptors in the cone cells of their retinas. This means that cattle are dichromatic, as are most other non-primate land mammals.[76][77] There are two to three rods per cone in the fovea centralis, but five to six near the optic papilla.[75] Cattle can distinguish long-wavelength colors (yellow, orange and red) much better than the shorter wavelengths (blue, grey and green). Calves are able to discriminate between long (red) and short (blue) or medium (green) wavelengths, but have limited ability to discriminate between the short and medium. They also approach handlers more quickly under red light.[78] While their color sensitivity is good, it is not as good as that of humans or sheep.[54]
+
+ A common misconception about cattle (particularly bulls) is that they are enraged by the color red (something provocative is often said to be "like a red flag to a bull"). This is a myth. In bullfighting, it is the movement of the red flag or cape that irritates the bull and incites it to charge.[79]
94
+
95
+ Cattle have a well-developed sense of taste and can distinguish the four primary tastes (sweet, salty, bitter and sour). They possess around 20,000 taste buds. The strength of taste perception depends on the individual's current food requirements. They avoid bitter-tasting foods (potentially toxic) and have a marked preference for sweet (high calorific value) and salty foods (electrolyte balance). Their sensitivity to sour-tasting foods helps them to maintain optimal ruminal pH.[74]
96
+
97
+ Plants have low levels of sodium and cattle have developed the capacity to seek out salt by taste and smell. If cattle become depleted of sodium salts, they show increased locomotion directed at searching for these. To assist in their search, the olfactory and gustatory receptors able to detect minute amounts of sodium salts increase their sensitivity as biochemical disruption develops with sodium salt depletion.[80][81]
98
+
99
+ Cattle hearing ranges from 23 Hz to 35 kHz. Their frequency of best sensitivity is 8 kHz, and they have a lowest threshold of −21 dB (re 20 μN/m²), which means their hearing is more acute than that of horses (lowest threshold of 7 dB).[82] Sound localization acuity thresholds are an average of 30°. This means that cattle are less able to localise sounds than goats (18°), dogs (8°) and humans (0.8°).[83] Because cattle have broad foveal fields of view covering almost the entire horizon, they may not need very accurate locus information from their auditory systems to direct their gaze to a sound source.
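+ The threshold comparison above can be made concrete: decibels are logarithmic, so the 28 dB gap between the cattle and horse thresholds corresponds to a fixed pressure ratio. A minimal Python sketch (the variable names are illustrative, the threshold values are from the text):
+
+   # Hearing thresholds from the text: cattle -21 dB, horses 7 dB.
+   # For sound pressure, ratio = 10 ** (difference_in_dB / 20).
+   cattle_threshold_db = -21.0
+   horse_threshold_db = 7.0
+   gap_db = horse_threshold_db - cattle_threshold_db    # 28 dB
+   pressure_ratio = 10 ** (gap_db / 20)                 # ~25.1
+   print(f"Cattle can detect sounds about {pressure_ratio:.0f} times quieter in pressure terms")
+
+ In pressure terms the gap is about 25-fold (roughly 630-fold in intensity, using 10 ** (28 / 10)).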
100
+
101
+ Vocalizations are an important mode of communication amongst cattle and can provide information on the age, sex, dominance status and reproductive status of the caller. Calves can recognize their mothers by their vocalizations; vocal behaviour may also play a role in indicating estrus, and in competitive displays by bulls.[84]
102
+
103
+ Cattle have a range of odiferous glands over their body including interdigital, infraorbital, inguinal and sebaceous glands, indicating that olfaction probably plays a large role in their social life. Both the primary olfactory system using the olfactory bulbs, and the secondary olfactory system using the vomeronasal organ are used.[85] This latter olfactory system is used in the flehmen response. There is evidence that when cattle are stressed, this can be recognised by other cattle and this is communicated by alarm substances in the urine.[70] The odour of dog faeces induces behavioural changes prior to cattle feeding, whereas the odours of urine from either stressed or non-stressed conspecifics and blood have no effect.[86]
104
+
105
+ In the laboratory, cattle can be trained to recognise conspecific individuals using olfaction only.[85]
106
+
107
+ In general, cattle use their sense of smell to "expand" on information detected by other sensory modalities. However, in the case of social and reproductive behaviours, olfaction is a key source of information.[74]
108
+
109
+ Cattle have tactile sensations detected mainly by mechanoreceptors, thermoreceptors and nociceptors in the skin and muzzle. These are used most frequently when cattle explore their environment.[74]
110
+
111
+ There is conflicting evidence for magnetoreception in cattle. One study reported that resting and grazing cattle tend to align their body axes in the geomagnetic north–south (N-S) direction.[87] In a follow-up study, cattle exposed to various magnetic fields directly beneath or in the vicinity of power lines trending in various magnetic directions exhibited distinct patterns of alignment.[88] However, in 2011, a group of Czech researchers reported their failed attempt to replicate the finding using Google Earth images.[89]
112
+
113
+ Under natural conditions, calves stay with their mother until weaning at 8 to 11 months. Heifer and bull calves are equally attached to their mothers in the first few months of life.[90] Cattle are considered to be "hider" type animals,[clarification needed] but in the artificial environment of small calving pens, close proximity between cow and calf is maintained by the mother at her first three calvings, after which it is mediated by the calf. Primiparous dams show a higher incidence of abnormal maternal behavior.[91]
114
+
115
+ Beef-calves reared on the range suckle an average of 5.0 times every 24 hours with an average total time of 46 min spent suckling. There is a diurnal rhythm in suckling activity with peaks between 05:00–07:00, 10:00–13:00 and 17:00–21:00.[92]
116
+
117
+ Studies on the natural weaning of zebu cattle (Bos indicus) have shown that the cow weans her calves over a 2-week period, but after that, she continues to show strong affiliatory behavior with her offspring and preferentially chooses them for grooming and as grazing partners for at least 4–5 years.[93]
118
+
119
+ Semi-wild Highland cattle heifers first give birth at 2 or 3 years of age, and the timing of birth is synchronized with increases in natural food quality. Average calving interval is 391 days, and calving mortality within the first year of life is 5%.[94]
120
+
121
+ One study showed that over a 4-year period, dominance relationships within a herd of semi-wild highland cattle were very firm. There were few overt aggressive conflicts and the majority of disputes were settled by agonistic (non-aggressive, competitive) behaviors that involved no physical contact between opponents (e.g. threatening and spontaneous withdrawing). Such agonistic behavior reduces the risk of injury. Dominance status depended on age and sex, with older animals generally being dominant to young ones and males dominant to females. Young bulls gained superior dominance status over adult cows when they reached about 2 years of age.[94]
122
+
123
+ As with many animal dominance hierarchies, dominance-associated aggressiveness does not correlate with rank position, but is closely related to rank distance between individuals.[94]
124
+
125
+ Dominance is maintained in several ways. Cattle often engage in mock fights where they test each other's strength in a non-aggressive way. Licking is primarily performed by subordinates and received by dominant animals. Mounting is a playful behavior shown by calves of both sexes, by bulls, and sometimes by cows in estrus;[95] however, unlike in some other species, it is not a dominance-related behavior.[94]
126
+
127
+ The horns of cattle are "honest signals" used in mate selection. Furthermore, horned cattle attempt to keep greater distances between themselves and have fewer physical interactions than hornless cattle. This leads to more stable social relationships.[96]
128
+
129
+ In calves, the frequency of agonistic behavior decreases as space allowance increases, but this does not occur for changes in group size. However, in adult cattle, the number of agonistic encounters increases as the group size increases.[97]
130
+
131
+ When grazing, cattle vary several aspects of their bite, i.e. tongue and jaw movements, depending on characteristics of the plant they are eating. Bite area decreases with the density of the plants but increases with their height. Bite area is determined by the sweep of the tongue; in one study observing 750-kilogram (1,650 lb) steers, bite area reached a maximum of approximately 170 cm2 (30 sq in). Bite depth increases with the height of the plants. By adjusting their behavior, cattle obtain heavier bites in swards that are tall and sparse compared with short, dense swards of equal mass/area.[98] Cattle adjust other aspects of their grazing behavior in relation to the available food; foraging velocity decreases and intake rate increases in areas of abundant palatable forage.[99]
132
+
133
+ Cattle avoid grazing areas contaminated by the faeces of other cattle more strongly than they avoid areas contaminated by sheep,[100] but they do not avoid pasture contaminated by rabbit faeces.[101]
134
+
135
+ In the 24 April 2009 edition of the journal Science, a team of researchers led by the National Institutes of Health and the US Department of Agriculture reported having mapped the bovine genome.[102] The scientists found that cattle have about 22,000 genes; 80% of their genes are shared with humans, and they share about 1,000 genes with dogs and rodents that are not found in humans. Using this bovine "HapMap", researchers can track the differences between the breeds that affect the quality of meat and milk yields.[103]
136
+
137
+ Behavioral traits of cattle can be as heritable as some production traits, and often, the two can be related.[104] The heritability of fear varies markedly in cattle, from low (0.1) to high (0.53); such high variation is also found in pigs and sheep, probably due to differences in the methods used.[105] The heritability of temperament (response to isolation during handling) has been calculated as 0.36, and that of habituation to handling as 0.46.[106] Rangeland assessments show that the heritability of aggressiveness in cattle is around 0.36.[107]
138
+
139
+ Quantitative trait loci (QTLs) have been found for a range of production and behavioral characteristics for both dairy and beef cattle.[108]
140
+
141
+ Cattle occupy a unique role in human history, having been domesticated since at least the early Neolithic age.
142
+
143
+ Archeozoological and genetic data indicate that cattle were first domesticated from wild aurochs (Bos primigenius) approximately 10,500 years ago. There were two major areas of domestication: one in the Near East (specifically central Anatolia, the Levant and Western Iran), giving rise to the taurine line, and a second in the area that is now Pakistan, resulting in the indicine line.[109] Modern mitochondrial DNA variation indicates the taurine line may have arisen from as few as 80 aurochs tamed in the upper reaches of Mesopotamia near the villages of Çayönü Tepesi in what is now southeastern Turkey and Dja'de el-Mughara in what is now northern Syria.[1]
144
+
145
+ Although European cattle are largely descended from the taurine lineage, gene flow from African cattle (partially of indicine origin) contributed substantial genomic components to both southern European cattle breeds and their New World descendants.[109] A study on 134 breeds showed that modern taurine cattle originated from Africa, Asia, North and South America, Australia, and Europe.[110] Some researchers have suggested that African taurine cattle are derived from a third independent domestication from North African aurochsen.[109]
146
+
147
+ As early as 9000 BC, both grain and cattle were used as money or as barter (the first grain remains found, considered to be evidence of pre-agricultural practice, date to 17,000 BC).[111][112][113] Some evidence also exists to suggest that other animals, such as camels and goats, may have been used as currency in some parts of the world.[114] One of the advantages of using cattle as currency was that it allowed the seller to set a fixed price; it even created standard pricing. For example, two chickens were traded for one cow, as cows were deemed to be more valuable than chickens.[112]
148
+
149
+ Cattle are often raised by allowing herds to graze on the grasses of large tracts of rangeland. Raising cattle in this manner allows the use of land that might be unsuitable for growing crops. The most common interactions with cattle involve daily feeding, cleaning and milking. Many routine husbandry practices involve ear tagging, dehorning, loading, medical operations, vaccinations and hoof care, as well as training and preparation for agricultural shows. Also, some cultural differences occur in working with cattle; the cattle husbandry of Fulani men rests on behavioural techniques, whereas in Europe, cattle are controlled primarily by physical means, such as fences.[115] Breeders use cattle husbandry to reduce M. bovis infection susceptibility by selective breeding and by maintaining herd health to avoid concurrent disease.[116]
150
+
151
+ Cattle are farmed for beef, veal, dairy, and leather. They are less commonly used for conservation grazing, or simply to maintain grassland for wildlife, such as in Epping Forest, England. They are often used in some of the wildest places for livestock. Depending on the breed, cattle can survive on hill grazing, heaths, marshes, moors and semidesert. Modern cattle are more commercial than older breeds and, having become more specialized, are less versatile. For this reason, many smaller farmers still favor old breeds, such as the Jersey dairy breed.
152
+ In Portugal, Spain, southern France and some Latin American countries, bulls are used in the activity of bullfighting; Jallikattu in India is a bull-taming sport radically different from European bullfighting, in which humans are unarmed and the bulls are not killed. In many other countries bullfighting is illegal. Other activities, such as bull riding, are seen as part of a rodeo, especially in North America. Bull-leaping, a central ritual in Bronze Age Minoan culture (see Sacred Bull), still exists in southwestern France. In modern times, cattle are also entered into agricultural competitions. These competitions can involve live cattle or cattle carcases in hoof and hook events.
153
+
154
+ In terms of food intake by humans, consumption of cattle is less efficient than that of grain or vegetables with regard to land use, and hence cattle raising consumes more area than such other agricultural production, even when the cattle are fed on grains.[117] Nonetheless, cattle and other forms of domesticated animals can sometimes help to use plant resources in areas not easily amenable to other forms of agriculture. Bulls are sometimes used as guard animals.[118][119]
155
+
156
+ The average sleep time of a domestic cow is about 4 hours a day.[120] Cattle do have a stay apparatus,[121] but do not sleep standing up;[122] they lie down to sleep deeply.[123] In spite of the urban legend, cows cannot be tipped over by people pushing on them.[124]
157
+
158
+ The meat of adult cattle is known as beef, and that of calves is veal. Other animal parts are also used as food products, including blood, liver, kidney, heart and oxtail. Cattle also produce milk, and dairy cattle are specifically bred to produce the large quantities of milk processed and sold for human consumption. Cattle today are the basis of a multibillion-dollar industry worldwide. The international trade in beef for 2000 was over $30 billion and represented only 23% of world beef production.[125] Approximately 300 million cattle, including dairy cattle, are slaughtered each year for food.[126] The production of milk, which is also made into cheese, butter, yogurt, and other dairy products, is comparable in economic size to beef production, and provides an important part of the food supply for many of the world's people. Cattle hides, used for leather to make shoes, couches and clothing, are another widespread product. Cattle remain broadly used as draft animals in many developing countries, such as India. Cattle are also used in some sporting games, including rodeo and bullfighting.
159
+
160
+ Source: Helgi Library,[127] World Bank, FAOSTAT
161
+
162
+ About half the world's meat comes from cattle.[128]
163
+
164
+ Certain breeds of cattle, such as the Holstein-Friesian, are used to produce milk,[129][130] which can be processed into dairy products such as milk, cheese or yogurt. Dairy cattle are usually kept on specialized dairy farms designed for milk production. Most cows are milked twice per day, with the milk processed at a dairy, which may be onsite at the farm, or the milk may be shipped to a dairy plant for eventual sale of a dairy product.[131] For dairy cattle to continue producing milk, they must give birth to one calf per year. If the calf is male, it generally is slaughtered at a young age to produce veal.[132] Cows will continue to produce milk until three weeks before giving birth.[130] Over the last fifty years, dairy farming has become more intensive to increase the yield of milk produced by each cow. The Holstein-Friesian is the breed of dairy cow most common in the UK, Europe and the United States. It has been bred selectively to produce the highest yields of milk of any cow. Around 22 litres per day is average in the UK.[129][130]
165
+
166
+ Most cattle are not kept solely for hides, which are usually a by-product of beef production. Hides are most commonly used for leather, which can be made into a variety of products, including shoes. In 2012 India was the world's largest producer of cattle hides.[133]
167
+
168
+ Feral cattle are defined as being 'cattle that are not domesticated or cultivated'.[134] Populations of feral cattle are known to exist in Australia, the United States of America,[135] Colombia, Argentina, Spain, France and many islands, including New Guinea, Hawaii, the Galapagos, the Juan Fernández Islands, Hispaniola (Dominican Republic and Haiti), Tristan da Cunha and Île Amsterdam,[136] two islands of Kuchinoshima[137] and Kazura Island next to Naru Island in Japan.[138][139] Chillingham cattle are sometimes regarded as a feral breed.[140] Aleutian wild cattle can be found on the Aleutian Islands.[141] The "Kinmen cattle", found mainly on Kinmen Island, Taiwan, are mostly domesticated, while a smaller portion of the population is believed to live in the wild as a result of accidental releases.[142]
169
+
170
+ Other notable examples include cattle in the vicinity of Hong Kong (in the Shing Mun Country Park,[143] in Sai Kung District,[144] on Lantau Island[145] and on Grass Island[146]), and semi-feral animals in Yangmingshan, Taiwan.[147]
171
+
180
+ Gut flora in cattle include methanogens that produce methane as a byproduct of enteric fermentation, which cattle belch out. The same volume of atmospheric methane has a higher global warming potential than atmospheric carbon dioxide.[151][152] Methane belching from cattle can be reduced with genetic selection, immunization, rumen defaunation, diet modification, decreased antibiotic use, and grazing management, among others.[153][154][155][156]
181
+
182
+ A report from the Food and Agriculture Organization (FAO) states that the livestock sector is "responsible for 18% of greenhouse gas emissions".[157] The IPCC estimates that cattle and other livestock emit about 80 to 93 Megatonnes of methane per year,[158] accounting for an estimated 37% of anthropogenic methane emissions,[157] and additional methane is produced by anaerobic fermentation of manure in manure lagoons and other manure storage structures.[159] The net change in atmospheric methane content was recently about 1 Megatonne per year,[160] and in some recent years there has been no increase in atmospheric methane content.[161] While cattle fed forage actually produce more methane than grain-fed cattle, the increase may be offset by the increased carbon recapture of pastures, which recapture three times the CO2 of cropland used for grain.[162]
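+ To put the IPCC's 80 to 93 Megatonne estimate on the same footing as CO2, it can be multiplied by a global warming potential (GWP). The minimal Python sketch below assumes a 100-year GWP of 28 for methane, a commonly cited value that the text above does not itself specify:
+
+   # CO2-equivalent of the estimated 80-93 Mt/yr of livestock methane.
+   # The GWP of 28 is an assumed 100-year value, not taken from this article.
+   GWP_CH4_100YR = 28
+   for ch4_mt in (80, 93):
+       co2e_mt = ch4_mt * GWP_CH4_100YR
+       print(f"{ch4_mt} Mt CH4/yr is roughly {co2e_mt:,} Mt CO2e/yr")
+
+ Under that assumption the range works out to roughly 2,240 to 2,604 Mt CO2e per year; a different GWP choice scales the result proportionally.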
183
+
184
+ One of the cited changes suggested to reduce greenhouse gas emissions is intensification of the livestock industry, since intensification leads to less land being used for a given level of production. This assertion is supported by studies of the US beef production system, suggesting practices prevailing in 2007 involved 8.6% less fossil fuel use, 16.3% less greenhouse gas emissions, 12.1% less water use, and 33.0% less land use, per unit mass of beef produced, than those used in 1977.[163] The analysis took into account not only practices in feedlots, but also feed production (with less feed needed in more intensive production systems), forage-based cow-calf operations and back-grounding before cattle enter a feedlot (with more beef produced per head of cattle from those sources, in more intensive systems), and beef from animals derived from the dairy industry.
185
+
186
+ The number of American cattle kept in confined feedlot conditions fluctuates. From 1 January 2002 through 1 January 2012, there was no significant overall upward or downward trend in the number of US cattle on feed for slaughter, which averaged about 14.046 million head over that period.[164][165] Previously, the number had increased; it was 12.453 million in 1985.[166] Cattle on feed (for slaughter) numbered about 14.121 million on 1 January 2012, i.e. about 15.5% of the estimated inventory of 90.8 million US cattle (including calves) on that date. Of the 14.121 million, US cattle on feed (for slaughter) in operations with 1000 head or more were estimated to number 11.9 million.[165] Cattle feedlots in this size category correspond to the regulatory definition of "large" concentrated animal feeding operations (CAFOs) for cattle other than mature dairy cows or veal calves.[167] Significant numbers of dairy, as well as beef cattle, are confined in CAFOs, defined as "new and existing operations which stable or confine and feed or maintain for a total of 45 days or more in any 12-month period more than the number of animals specified"[168] where "[c]rops, vegetation, forage growth, or post-harvest residues are not sustained in the normal growing season over any portion of the lot or facility."[169] They may be designated as small, medium and large. Such designation of cattle CAFOs is according to cattle type (mature dairy cows, veal calves or other) and cattle numbers, but medium CAFOs are so designated only if they meet certain discharge criteria, and small CAFOs are designated only on a case-by-case basis.[170]
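+ The quoted 15.5% share follows directly from the two inventory figures; a quick check, sketched in Python:
+
+   # Cattle on feed as a share of the total US inventory (1 January 2012),
+   # using the figures in the paragraph above.
+   on_feed_millions = 14.121
+   total_millions = 90.8
+   share = on_feed_millions / total_millions
+   print(f"Share on feed: {share * 100:.2f}%")   # 15.55%, i.e. about 15.5% as stated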
187
+
188
+ A CAFO that discharges pollutants is required to obtain a permit, which requires a plan to manage nutrient runoff, manure, chemicals, contaminants, and other wastewater pursuant to the US Clean Water Act.[171] The regulations involving CAFO permitting have been extensively litigated.[172]
189
+ Commonly, CAFO wastewater and manure nutrients are applied to land at agronomic rates for use by forages or crops, and it is often assumed that various constituents of wastewater and manure, e.g. organic contaminants and pathogens, will be retained, inactivated or degraded on the land with application at such rates; however, additional evidence is needed to test reliability of such assumptions.[173] Concerns raised by opponents of CAFOs have included risks of contaminated water due to feedlot runoff,[174] soil erosion, human and animal exposure to toxic chemicals, development of antibiotic resistant bacteria and an increase in E. coli contamination.[175] While research suggests some of these impacts can be mitigated by developing wastewater treatment systems[174] and planting cover crops in larger setback zones,[176] the Union of Concerned Scientists released a report in 2008 concluding that CAFOs are generally unsustainable and externalize costs.[162]
191
+
192
+ An estimated 935,000 cattle operations were operating in the US in 2010.[177] In 2001, the US Environmental Protection Agency (EPA) tallied 5,990 cattle CAFOs then regulated, consisting of beef (2,200), dairy (3,150), heifer (620) and veal operations (20).[178] Since that time, the EPA has established CAFOs as an enforcement priority. EPA enforcement highlights for fiscal year 2010 indicated enforcement actions against 12 cattle CAFOs for violations that included failures to obtain a permit, failures to meet the terms of a permit, and discharges of contaminated water.[179]
193
+
194
+ Another concern is manure, which if not well-managed, can lead to adverse environmental consequences. However, manure also is a valuable source of nutrients and organic matter when used as a fertilizer.[180] Manure was used as a fertilizer on about 6,400,000 hectares (15.8 million acres) of US cropland in 2006, with manure from cattle accounting for nearly 70% of manure applications to soybeans and about 80% or more of manure applications to corn, wheat, barley, oats and sorghum.[181] Substitution of manure for synthetic fertilizers in crop production can be environmentally significant, as between 43 and 88 megajoules of fossil fuel energy would be used per kg of nitrogen in manufacture of synthetic nitrogenous fertilizers.[182]
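+ The 43 to 88 MJ per kg of nitrogen figure translates into a per-hectare energy estimate once an application rate is assumed. In the sketch below (Python), the 150 kg N/ha rate is a hypothetical example value, not one given in the text:
+
+   # Fossil energy notionally avoided when manure N replaces synthetic N,
+   # at 43-88 MJ per kg N (from the text). The application rate of
+   # 150 kg N/ha is a hypothetical assumption for illustration.
+   n_rate_kg_per_ha = 150
+   low = 43 * n_rate_kg_per_ha      # 6,450 MJ/ha
+   high = 88 * n_rate_kg_per_ha     # 13,200 MJ/ha
+   print(f"{low:,} to {high:,} MJ of fossil energy avoided per hectare")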
195
+
196
+ Grazing by cattle at low intensities can create a favourable environment for native herbs and forbs by mimicking the native grazers that they displaced; in many world regions, though, cattle are reducing biodiversity due to overgrazing.[183] A survey of refuge managers on 123 National Wildlife Refuges in the US tallied 86 species of wildlife considered positively affected and 82 considered negatively affected by refuge cattle grazing or haying.[184] Proper management of pastures, notably managed intensive rotational grazing and grazing at low intensities, can lead to less use of fossil fuel energy, increased recapture of carbon dioxide, fewer ammonia emissions into the atmosphere, reduced soil erosion, better air quality, and less water pollution.[162]
197
+
198
+ The veterinary discipline dealing with cattle and cattle diseases (bovine veterinary medicine) is called buiatrics.[185] Veterinarians and professionals working on cattle health issues are brought together in the World Association for Buiatrics, founded in 1960.[186] National associations and affiliates also exist.[187]
199
+
200
+ Cattle diseases were the center of attention in the 1980s and 1990s, when bovine spongiform encephalopathy (BSE), also known as mad cow disease, was of concern. Cattle can catch and develop various other diseases too, such as blackleg, bluetongue and foot rot.[188][189][190]
201
+
202
+ In most states, as cattle health is not only a veterinary issue but also a public health issue, public health and food safety standards and farming regulations directly affect the daily work of farmers who keep cattle.[191] However, such rules change frequently and are often debated. For instance, in the U.K., it was proposed in 2011 that milk from tuberculosis-infected cattle should be allowed to enter the food chain.[192] Internal food safety regulations might affect a country's trade policy as well. For example, the United States has reviewed its beef import rules according to the "mad cow standards", while Mexico forbids the entry of cattle that are older than 30 months.[193]
203
+
204
+ Cow urine is commonly used in India for internal medical purposes.[194][195] It is distilled and then consumed by patients seeking treatment for a wide variety of illnesses.[196] At present, no conclusive medical evidence shows this has any effect.[197] However, an Indian medicine containing cow urine has already obtained U.S. patents.[198]
205
+
206
+ Digital dermatitis is caused by bacteria from the genus Treponema. It differs from foot rot and can appear under unsanitary conditions such as poor hygiene or inadequate hoof trimming, among other causes. It primarily affects dairy cattle and has been known to lower the quantity of milk produced; however, the milk quality remains unaffected. Cattle are also susceptible to ringworm caused by the fungus Trichophyton verrucosum, a contagious skin disease which may be transferred to humans exposed to infected cows.[199]
207
+
208
+ Stocking density refers to the number of animals within a specified area. When stocking density reaches high levels, the behavioural needs of the animals may not be met. This can negatively influence health, welfare and production performance.[200]
209
+
210
+ Overstocking of cows can have a negative effect on milk production and reproduction rates, which are two very important traits for dairy farmers. Overcrowding of cows in barns has been found to reduce feeding, resting and rumination.[200] Although they consume the same amount of dry matter within the span of a day, they consume the food at a much more rapid rate, and this behaviour in cows can lead to further complications.[201] The feeding behaviour of cows during their post-milking period is very important, as it has been shown that the longer animals can eat after milking, the longer they will remain standing, causing less contamination to the teat ends.[202] This is necessary to reduce the risk of mastitis, as infection has been shown to increase the chances of embryonic loss.[203] Sufficient rest is important for dairy cows because it is during this period that their resting blood flow increases by up to 50%, which is directly proportional to milk production.[202] Each additional hour of rest can translate to 2 to 3.5 more pounds of milk per cow daily, as illustrated below. Stocking densities of anything over 120% have been shown to decrease the amount of time cows spend lying down.[204]
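+ The rest-to-milk relationship lends itself to a worked example. In the Python sketch below, the 2 to 3.5 lb per extra hour of rest is from the text, while the 2-hour loss of lying time and the 100-cow herd size are hypothetical assumptions used only for illustration:
+
+   # Illustrative daily milk loss from reduced lying time under overstocking.
+   lb_per_rest_hour_low, lb_per_rest_hour_high = 2.0, 3.5  # from the text
+   hours_of_rest_lost = 2.0   # assumed reduction in lying time
+   herd_size = 100            # assumed herd size
+   low = lb_per_rest_hour_low * hours_of_rest_lost * herd_size
+   high = lb_per_rest_hour_high * hours_of_rest_lost * herd_size
+   print(f"About {low:.0f} to {high:.0f} lb of milk lost per day across the herd")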
211
+
212
+ Cortisol is an important stress hormone; its plasma concentrations increase greatly when an animal is subjected to high levels of stress.[205] Increased concentrations of cortisol have been associated with significant increases in gonadotrophin levels and lowered progestin levels. Reduction of stress is important in the reproductive state of cows, as increased gonadotrophin and lowered progesterone levels may impinge on ovulation and luteinization and reduce the chances of successful implantation.[206] A high cortisol level will also stimulate the degradation of fats and proteins, which may make it difficult for the animal to sustain its pregnancy if implantation is successful.[205]
213
+
214
+ Animal rights activists have criticized the treatment of cattle, claiming that common practices in cattle husbandry, slaughter, and entertainment unnecessarily cause cattle fear, stress, and pain. They advocate for abstaining from the consumption of cattle-related animal products (such as beef, cow's milk, veal, and leather) and cattle-based entertainment (such as rodeos and bullfighting) in order to end one's participation in the cruelty, claiming that the animals are only treated this way due to market forces and popular demand.
215
+
216
+ The following practices have been criticized by animal welfare and animal rights groups:[207] branding,[208] castration,[209] dehorning,[210] ear tagging,[211] nose ringing,[212] restraint,[213] tail docking,[214] the use of veal crates,[215] and cattle prods.[216] Further, the stress induced by high stocking density (such as in feedlots, auctions, and during transport) is known to negatively affect the health of cattle,[217][218] and has also been criticized.[219][220]
217
+
218
+ While the treatment of dairy cows is similar to that of beef cattle, especially towards the end of their life, it has faced additional criticism.[221] To produce milk from dairy cattle, most calves are separated from their mothers soon after birth and fed milk replacement in order to retain the cows' milk for human consumption.[222] Animal welfare advocates point out that this breaks the natural bond between the mother and her calf.[222] Unwanted male calves are either slaughtered at birth or sent for veal production.[222] To prolong lactation, dairy cows are almost permanently kept pregnant through artificial insemination.[222] Because of this, some feminists state that dairy production is based on the sexual exploitation of cows.[223][224] Although cows' natural life expectancy is about twenty years,[225] after about five years the cows' milk production has dropped; they are then considered "spent" and are sent to slaughter, which is considered cruel by some.[226][227]
219
+
220
+ While leather is often a by-product of slaughter, in some countries, such as India and Bangladesh, cows are raised primarily for their leather. These leather industries often make their cows walk long distances across borders to be killed in neighboring provinces and countries where cattle slaughter is legal. Some cows die along the long journey, and exhausted animals are often beaten and have chili and tobacco rubbed into their eyes to make them keep walking.[228] These practices have faced backlash from various animal rights groups.[229][230]
221
+
222
+ There has been a long history of protests against rodeos,[231] with the opposition saying that rodeos are unnecessary and cause stress, injury, and death to the animals.[232][233]
223
+
224
+ The running of the bulls faces opposition due to the stress and injuries incurred by the bulls during the event.[234]
225
+
226
+ Bullfighting is considered by many people, including animal rights and animal welfare advocates, to be a cruel, barbaric blood sport in which bulls are forced to suffer severe stress and a slow, torturous death.[235] A number of animal rights and animal welfare groups are involved in anti-bullfighting activities.[236]
227
+
228
+ Oxen (singular ox) are cattle trained as draft animals. Often they are adult, castrated males of larger breeds, although females and bulls are also used in some areas. Usually, an ox is over four years old due to the need for training and to allow it to grow to full size. Oxen are used for plowing, transport, hauling cargo, grain-grinding by trampling or by powering machines, irrigation by powering pumps, and wagon drawing. Oxen were commonly used to skid logs in forests, and sometimes still are, in low-impact, select-cut logging. Oxen are most often used in teams of two, paired, for light work such as carting, with additional pairs added when more power is required, sometimes up to a total of 20 or more.
229
+
230
+ Oxen can be trained to respond to a teamster's signals. These signals are given by verbal commands or by noise (whip cracks). Verbal commands vary according to dialect and local tradition. Oxen can pull harder and longer than horses. Though not as fast as horses, they are less prone to injury because they are more sure-footed.
231
+
232
+ Many oxen are used worldwide, especially in developing countries. About 11.3 million draft oxen are used in sub-Saharan Africa.[237] In India, the number of draft cattle in 1998 was estimated at 65.7 million head.[238] About half the world's crop production is thought to depend on land preparation (such as plowing) made possible by animal traction.[239]
233
+
234
+ The cow is mentioned often in the Quran. The second and longest surah of the Quran is named Al-Baqara ("The Cow"). Out of the 286 verses of the surah, seven mention cows (Al Baqarah 67–73).[240][241] The name of the surah derives from this passage in which Moses orders his people to sacrifice a cow in order to resurrect a man murdered by an unknown person.[242]
235
+
236
+ Cattle are venerated within the Hindu religion of India. In the Vedic period they were a symbol of plenty[243]:130 and were frequently slaughtered. In later times they gradually acquired their present status. According to the Mahabharata, they are to be treated with the same respect 'as one's mother'.[244] In the middle of the first millennium, the consumption of beef began to be disfavoured by lawgivers.[243]:144 Although there have never been any cow-goddesses or temples dedicated to them,[243]:146 cows appear in numerous stories from the Vedas and Puranas. The deity Krishna was brought up in a family of cowherders, and given the name Govinda (protector of the cows). Also, Shiva is traditionally said to ride on the back of a bull named Nandi.
237
+
238
+ Milk and milk products were used in Vedic rituals.[243]:130 In the post-Vedic period, products of the cow—milk, curd, ghee, but also cow dung and urine (gomutra), or the combination of these five (panchagavya)—began to assume an increasingly important role in ritual purification and expiation.[243]:130–131
239
+
240
+ Veneration of the cow has become a symbol of the identity of Hindus as a community,[243]:20 especially since the end of the 19th century. Slaughter of cows (including oxen, bulls and calves) is forbidden by law in several states of the Indian Union. McDonald's outlets in India do not serve any beef burgers. In Maharaja Ranjit Singh's empire of the early 19th century, the killing of a cow was punishable by death.[245]
241
+
242
+ Cattle are typically represented in heraldry by the bull.
243
+
244
+ Arms of the Azores
245
+
246
+ Arms of Mecklenburg region, Germany
247
+
248
+ Arms of Turin, Italy
249
+
250
+ Arms of Kaunas, Lithuania
251
+
252
+ Arms of Bielsk Podlaski, Poland
253
+
254
+ Arms of Ciołek, Poland
255
+
256
+ Arms of Turek, Poland
257
+
258
+ For 2013, the FAO estimated global cattle numbers at 1.47 billion.[249] Regionally, the FAO estimate for 2013 includes: Asia 497 million; South America 350 million; Africa 307 million; Europe 122 million; North America 102 million; Central America 47 million; Oceania 40 million; and Caribbean 9 million.
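+ The regional estimates are consistent with the quoted global total, as a quick Python check shows:
+
+   # FAO 2013 regional cattle estimates, in millions of head (from the text).
+   regions = {"Asia": 497, "South America": 350, "Africa": 307,
+              "Europe": 122, "North America": 102, "Central America": 47,
+              "Oceania": 40, "Caribbean": 9}
+   total = sum(regions.values())
+   print(f"{total} million, i.e. about {total / 1000:.2f} billion head")  # 1474 -> 1.47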
259
+
260
+ Didactic model of a bovine
261
+
262
+ Bovine anatomical model
263
+
264
+ Didactic model of a bovine muscular system
265
+
en/5894.html.txt ADDED
@@ -0,0 +1,265 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+ Cattle, or cows (female) and bulls (male), are the most common type of large domesticated ungulates. They are a prominent modern member of the subfamily Bovinae, are the most widespread species of the genus Bos, and are most commonly classified collectively as Bos taurus.
4
+
5
+ Cattle are commonly raised as livestock for meat (beef or veal, see beef cattle), for milk (see dairy cattle), and for hides, which are used to make leather. They are used as riding animals and draft animals (oxen or bullocks, which pull carts, plows and other implements). Another product of cattle is their dung, which can be used to create manure or fuel. In some regions, such as parts of India, cattle have significant religious meaning. Cattle, mostly small breeds such as the Miniature Zebu, are also kept as pets.
6
+
7
+ Around 10,500 years ago, cattle were domesticated from as few as 80 progenitors in central Anatolia, the Levant and Western Iran.[1] According to an estimate from 2011, there are 1.4 billion cattle in the world.[2] In 2009, cattle became one of the first livestock animals to have a fully mapped genome.[3]
8
+
9
+ Cattle were originally identified as three separate species: Bos taurus, the European or "taurine" cattle (including similar types from Africa and Asia); Bos indicus, the zebu; and the extinct Bos primigenius, the aurochs. The aurochs is ancestral to both zebu and taurine cattle.[4] These have been reclassified as one species, Bos taurus, with three subspecies: Bos taurus primigenius, Bos taurus indicus, and Bos taurus taurus.[5][6]
10
+
11
+ Complicating the matter is the ability of cattle to interbreed with other closely related species. Hybrid individuals and even breeds exist, not only between taurine cattle and zebu (such as the sanga cattle, Bos taurus africanus), but also between one or both of these and some other members of the genus Bos – yaks (the dzo or yattle[7]), banteng, and gaur. Hybrids such as the beefalo breed can even occur between taurine cattle and either species of bison, leading some authors to consider them part of the genus Bos, as well.[8] The hybrid origin of some types may not be obvious – for example, genetic testing of the Dwarf Lulu breed, the only taurine-type cattle in Nepal, found them to be a mix of taurine cattle, zebu, and yak.[9] However, cattle cannot be successfully hybridized with more distantly related bovines such as water buffalo or African buffalo.
12
+
13
+ The aurochs originally ranged throughout Europe, North Africa, and much of Asia. In historical times, its range became restricted to Europe, and the last known individual died in Mazovia, Poland, in about 1627.[10] Breeders have attempted to recreate cattle of similar appearance to aurochs by crossing traditional types of domesticated cattle, creating the Heck cattle breed.
14
+
15
+ Cattle did not originate as the term for bovine animals. It was borrowed from Anglo-Norman catel, itself from medieval Latin capitale 'principal sum of money, capital', itself derived in turn from Latin caput 'head'. Cattle originally meant movable personal property, especially livestock of any kind, as opposed to real property (the land, which also included wild or small free-roaming animals such as chickens—they were sold as part of the land).[11] The word is a variant of chattel (a unit of personal property) and closely related to capital in the economic sense.[12] The term replaced earlier Old English feoh 'cattle, property', which survives today as fee (cf. German: Vieh, Dutch: vee, Gothic: faihu).
16
+
17
+ The word "cow" came via Anglo-Saxon cū (plural cȳ), from Common Indo-European gʷōus (genitive gʷowés) = "a bovine animal", compare Persian: gâv, Sanskrit: go-, Welsh: buwch.[13] The plural cȳ became ki or kie in Middle English, and an additional plural ending was often added, giving kine, kien, but also kies, kuin and others. This is the origin of the now archaic English plural, "kine". The Scots language singular is coo or cou, and the plural is "kye".
18
+
19
+ In older English sources such as the King James Version of the Bible, "cattle" refers to livestock, as opposed to "deer" which refers to wildlife. "Wild cattle" may refer to feral cattle or to undomesticated species of the genus Bos. Today, when used without any other qualifier, the modern meaning of "cattle" is usually restricted to domesticated bovines.[14]
20
+
21
+ In general, the same words are used in different parts of the world, but with minor differences in the definitions. The terminology described here contrasts the differences in definition between the United Kingdom and other British-influenced parts of the world such as Canada, Australia, New Zealand, Ireland and the United States.[15]
22
+
23
+ "Cattle" can only be used in the plural and not in the singular: it is a plurale tantum.[26] Thus one may refer to "three cattle" or "some cattle", but not "one cattle". "One head of cattle" is a valid though periphrastic way to refer to one animal of indeterminate or unknown age and sex; otherwise no universally used single-word singular form of cattle exists in modern English, other than the sex- and age-specific terms such as cow, bull, steer and heifer. Historically, "ox" was not a sex-specific term for adult cattle, but generally this is now used only for working cattle, especially adult castrated males. The term is also incorporated into the names of other species, such as the musk ox and "grunting ox" (yak), and is used in some areas to describe certain cattle products such as ox-hide and oxtail.[27]
24
+
25
+ Cow is in general use as a singular for the collective cattle. The word cow is easy to use when a singular is needed and the sex is unknown or irrelevant—when "there is a cow in the road", for example. Further, any herd of fully mature cattle in or near a pasture is statistically likely to consist mostly of cows, so the term is probably accurate even in the restrictive sense. Other than the few bulls needed for breeding, the vast majority of male cattle are castrated as calves and are used as oxen or slaughtered for meat before the age of three years. Thus, in a pastured herd, any calves or herd bulls usually are clearly distinguishable from the cows due to distinctively different sizes and clear anatomical differences. Merriam-Webster and Oxford Living Dictionaries recognize the sex-nonspecific use of cow as an alternate definition,[28][29] whereas Collins and the OED do not.
26
+
27
+ Colloquially, more general nonspecific terms may denote cattle when a singular form is needed. Head of cattle is usually used only after a numeral. Australian, New Zealand and British farmers use the term beast or cattle beast. Bovine is also used in Britain. The term critter is common in the western United States and Canada, particularly when referring to young cattle.[30] In some areas of the American South (particularly the Appalachian region), where both dairy and beef cattle are present, an individual animal was once called a "beef critter", though that term is becoming archaic.
28
+
29
+ Cattle raised for human consumption are called beef cattle. Within the beef cattle industry in parts of the United States, the term beef (plural beeves) is still used in its archaic sense to refer to an animal of either sex. Cows of certain breeds that are kept for the milk they give are called dairy cows or milking cows (formerly milch cows). Most young male offspring of dairy cows are sold for veal, and may be referred to as veal calves.
30
+
31
+ The term dogies is used to describe orphaned calves in the context of ranch work in the American West, as in "Keep them dogies moving".[31] In some places, a cow kept to provide milk for one family is called a "house cow". Other obsolete terms for cattle include "neat" (this use survives in "neatsfoot oil", extracted from the feet and legs of cattle), and "beefing" (young animal fit for slaughter).
32
+
33
+ An onomatopoeic term for one of the most common sounds made by cattle is moo (also called lowing). There are a number of other sounds made by cattle, including calves bawling, and bulls bellowing. Bawling is most common for cows after weaning of a calf. The bullroarer makes a sound similar to a bull's territorial call.[32]
34
+
35
+ Cattle are large quadrupedal ungulate mammals with cloven hooves. Most breeds have horns, which can be as large as those of the Texas Longhorn or as small as a scur. Careful genetic selection has allowed polled (hornless) cattle to become widespread.
36
+
37
+ Cattle are ruminants, meaning their digestive system is highly specialized to allow the use of poorly digestible plants as food. Cattle have one stomach with four compartments, the rumen, reticulum, omasum, and abomasum, with the rumen being the largest compartment.
38
+ The reticulum, the smallest compartment, is known as the "honeycomb". The omasum's main function is to absorb water and nutrients from the digestible feed. The omasum is known as the "many plies". The abomasum is like the human stomach; this is why it is known as the "true stomach".
39
+
40
+ Cattle are known for regurgitating and re-chewing their food, known as cud chewing, like most ruminants. While the animal is feeding, the food is swallowed without being chewed and goes into the rumen for storage until the animal can find a quiet place to continue the digestion process. The food is regurgitated, a mouthful at a time, back up to the mouth, where the food, now called the cud, is chewed by the molars, grinding down the coarse vegetation to small particles. The cud is then swallowed again and further digested by specialized microorganisms in the rumen. These microbes are primarily responsible for decomposing cellulose and other carbohydrates into volatile fatty acids cattle use as their primary metabolic fuel. The microbes inside the rumen also synthesize amino acids from non-protein nitrogenous sources, such as urea and ammonia. As these microbes reproduce in the rumen, older generations die and their cells continue on through the digestive tract. These cells are then partially digested in the small intestines, allowing cattle to gain a high-quality protein source. These features allow cattle to thrive on grasses and other tough vegetation.
41
+
42
+ The gestation period for a cow is about nine months long. A newborn calf's size can vary among breeds, but a typical calf weighs 25 to 45 kg (55 to 99 lb). Adult size and weight vary significantly among breeds and sex. Steers are generally killed before reaching 750 kg (1,650 lb). Breeding stock may be allowed a longer lifespan, occasionally living as long as 25 years. The oldest recorded cow, Big Bertha, died at the age of 48 in 1993.
43
+
44
+ On farms it is very common to use artificial insemination (AI), a medically assisted reproduction technique consisting of the artificial deposition of semen in the female's genital tract.[33] It is used in cases where the spermatozoa cannot reach the fallopian tubes, or simply by choice of the owner of the animal. It consists of transferring previously collected and processed spermatozoa to the uterine cavity, with the selection of morphologically normal and motile spermatozoa.
45
+
46
+ A cow's udder contains two pairs of mammary glands (commonly referred to as teats), creating four "quarters".[34] The front ones are referred to as fore quarters and the rear ones as rear quarters.[35]
47
+
48
+ Bulls become fertile at about seven months of age. Their fertility is closely related to the size of their testicles, and one simple test of fertility is to measure the circumference of the scrotum: a young bull is likely to be fertile once this reaches 28 centimetres (11 in); that of a fully adult bull may be over 40 centimetres (16 in).[36][37]
49
+
50
+ A bull has a fibro-elastic penis. Given the small amount of erectile tissue, there is little enlargement after erection. The penis is quite rigid when non-erect, and becomes even more rigid during erection. Protrusion is not affected much by erection, but more by relaxation of the retractor penis muscle and straightening of the sigmoid flexure.[38][39][40] Induced ovulation can be manipulated to produce farming benefits, for example, synchronising the ovulation of cattle to benefit dairy farming.
51
+
52
+ The weight of adult cattle varies, depending on the breed. Smaller kinds, such as Dexter and Jersey adults, range between 272 and 454 kg (600 and 1,000 lb). Large Continental breeds, such as Charolais, Marchigiana, Belgian Blue and Chianina, range from 635 to 1,134 kg (1,400 to 2,500 lb) as adults. British breeds, such as Hereford, Angus, and Shorthorn, mature at between 454 and 907 kg (1,000 and 2,000 lb), occasionally higher, particularly with Angus and Hereford.[41] Bulls are larger than cows of the same breed by up to a few hundred kilograms. Chianina bulls can weigh up to 1,500 kg (3,300 lb); British bulls, such as Angus and Hereford, can weigh as little as 907 kg (2,000 lb) to as much as 1,361 kg (3,000 lb).[citation needed]
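+ The paired metric and imperial figures above are mutually consistent; a small Python sketch makes the conversions explicit (only the standard conversion factor is added here, everything else is from the text):
+
+   # Check the kg/lb pairs quoted for the breed weight ranges.
+   KG_TO_LB = 2.20462
+   for kg in (272, 454, 635, 907, 1134):
+       print(f"{kg} kg is about {kg * KG_TO_LB:,.0f} lb")
+   # 272 -> ~600, 454 -> ~1,001, 635 -> ~1,400, 907 -> ~2,000, 1134 -> ~2,500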
53
+
54
+ The world record for the heaviest bull was 1,740 kg (3,840 lb), a Chianina named Donetto, when he was exhibited at the Arezzo show in 1955.[42] The heaviest steer was eight-year-old 'Old Ben', a Shorthorn/Hereford cross weighing in at 2,140 kg (4,720 lb) in 1910.[43]
55
+
56
+ In the United States, the average weight of beef cattle has steadily increased, especially since the 1970s, requiring the building of new slaughterhouses able to handle larger carcasses. New packing plants in the 1980s stimulated a large increase in cattle weights.[44] Before 1790, beef cattle averaged only 160 kg (350 lb) net, and thereafter weights climbed steadily.[45][46]
57
+
58
+ In laboratory studies, young cattle are able to memorize the locations of several food sources and retain this memory for at least 8 hours, although this declined after 12 hours.[47] Fifteen-month-old heifers learn more quickly than adult cows which have had either one or two calvings, but their longer-term memory is less stable.[48] Mature cattle perform well in spatial learning tasks and have a good long-term memory in these tests. Cattle tested in a radial arm maze are able to remember the locations of high-quality food for at least 30 days. Although they initially learn to avoid low-quality food, this memory diminishes over the same duration.[49] Under less artificial testing conditions, young cattle showed they were able to remember the location of feed for at least 48 days.[50] Cattle can make an association between a visual stimulus and food within 1 day—memory of this association can be retained for 1 year, despite a slight decay.[51]
59
+
60
+ Calves are capable of discrimination learning[52] and adult cattle compare favourably with small mammals in their learning ability in the Closed-field Test.[53]
61
+
62
+ They are also able to discriminate between familiar individuals, and among humans. Cattle can tell the difference between familiar and unfamiliar animals of the same species (conspecifics). Studies show they behave less aggressively toward familiar individuals when they are forming a new group.[54] Calves can also discriminate between humans based on previous experience, as shown by approaching those who handled them positively and avoiding those who handled them aversively.[55] Although cattle can discriminate between humans by their faces alone, they also use other cues such as the color of clothes when these are available.[56]
63
+
64
+ In audio play-back studies, calves prefer their own mother's vocalizations compared to the vocalizations of an unfamiliar mother.[57]
65
+
66
+ In laboratory studies using images, cattle can discriminate between images of the heads of cattle and other animal species.[58] They are also able to distinguish between familiar and unfamiliar conspecifics. Furthermore, they are able to categorize images as familiar and unfamiliar individuals.[54]
67
+
68
+ When mixed with other individuals, cloned calves from the same donor form subgroups, indicating that kin discrimination occurs and may be a basis of grouping behaviour. It has also been shown using images of cattle that both artificially inseminated and cloned calves have similar cognitive capacities of kin and non-kin discrimination.[59]
69
+
70
+ Cattle can recognize familiar individuals. Visual individual recognition is a more complex mental process than visual discrimination. It requires the recollection of the learned idiosyncratic identity of an individual that has been previously encountered and the formation of a mental representation.[60] By using two-dimensional images of the head of one cow (face, profiles, 3⁄4 views), all the tested heifers showed individual recognition of familiar and unfamiliar individuals from their own breed. Furthermore, almost all the heifers recognized unknown individuals from different breeds, although this was achieved with greater difficulty. Individual recognition was most difficult when the visual features of the breed being tested were quite different from the breed in the image, for example, when the breed being tested had no spots whereas the image was of a spotted breed.[61]
71
+
72
+ Cattle use visual/brain lateralisation in their visual scanning of novel and familiar stimuli.[62] Domestic cattle prefer to view novel stimuli with the left eye, i.e. using the right brain hemisphere (similar to horses, Australian magpies, chicks, toads and fish) but use the right eye, i.e. using the left hemisphere, for viewing familiar stimuli.[63]
73
+
74
+ In cattle, temperament can affect production traits such as carcass and meat quality or milk yield, as well as affecting the animal's overall health and reproduction. Cattle temperament is defined as "the consistent behavioral and physiological difference observed between individuals in response to a stressor or environmental challenge and is used to describe the relatively stable difference in the behavioral predisposition of an animal, which can be related to psychobiological mechanisms".[65] Generally, cattle temperament is assumed to be multidimensional. Five underlying categories of temperament traits have been proposed.[66]
75
+
76
+ In a study on Holstein–Friesian heifers learning to press a panel to open a gate for access to a food reward, the researchers also recorded the heart rate and behavior of the heifers when moving along the race towards the food. When the heifers made clear improvements in learning, they had higher heart rates and tended to move more vigorously along the race. The researchers concluded this was an indication that cattle may react emotionally to their own learning improvement.[67]
77
+
78
+ Negative emotional states are associated with a bias toward negative responses towards ambiguous cues in judgement tasks. After separation from their mothers, Holstein calves showed such a cognitive bias indicative of low mood.[68] A similar study showed that after hot-iron disbudding (dehorning), calves had a similar negative bias indicating that post-operative pain following this routine procedure results in a negative change in emotional state.[69]
79
+
80
+ In studies of visual discrimination, the position of the ears has been used as an indicator of emotional state.[54] When cattle are stressed, other cattle can tell from the chemicals released in their urine.[70]
81
+
82
+ Cattle are very gregarious and even short-term isolation is considered to cause severe psychological stress. When Aubrac and Friesian heifers are isolated, they increase their vocalizations and experience increased heart rate and plasma cortisol concentrations. These physiological changes are greater in Aubracs. When visual contact is re-instated, vocalizations rapidly decline, regardless of the familiarity of the returning cattle; however, heart rate decreases are greater if the returning cattle are familiar to the previously isolated individual.[71] Mirrors have been used to reduce stress in isolated cattle.[72]
83
+
84
+ Cattle use all of the five widely recognized sensory modalities. These can assist in some complex behavioural patterns, for example, in grazing behaviour. Cattle eat mixed diets, but when given the opportunity, show a partial preference of approximately 70% clover and 30% grass. This preference has a diurnal pattern, with a stronger preference for clover in the morning, and the proportion of grass increasing towards the evening.[73]
85
+
86
+ Vision is the dominant sense in cattle; they obtain almost 50% of their information visually.[74]
88
+
89
+ Cattle are prey animals and, to assist predator detection, their eyes are located on the sides of their head rather than the front. This gives them a wide field of view of 330°, but limits binocular vision (and therefore stereopsis) to 30° to 50°, compared to 140° in humans.[54][75] This means they have a blind spot directly behind them. Cattle have good visual acuity,[54] but their visual accommodation (the ability to adjust focus) is poor compared to that of humans.[74]
90
+
91
+ Cattle have two kinds of color receptors in the cone cells of their retinas. This means that cattle are dichromatic, as are most other non-primate land mammals.[76][77] There are two to three rods per cone in the fovea centralis but five to six near the optic papilla.[75] Cattle can distinguish long wavelength colors (yellow, orange and red) much better than the shorter wavelengths (blue, grey and green). Calves are able to discriminate between long (red) and short (blue) or medium (green) wavelengths, but have limited ability to discriminate between the short and medium. They also approach handlers more quickly under red light.[78] While cattle have good color sensitivity, it is not as good as that of humans or sheep.[54]
92
+
93
+ A common misconception about cattle (particularly bulls) is that they are enraged by the color red (something provocative is often said to be "like a red flag to a bull"). This is a myth. In bullfighting, it is the movement of the red flag or cape that irritates the bull and incites it to charge.[79]
94
+
95
+ Cattle have a well-developed sense of taste and can distinguish the four primary tastes (sweet, salty, bitter and sour). They possess around 20,000 taste buds. The strength of taste perception depends on the individual's current food requirements. They avoid bitter-tasting foods (potentially toxic) and have a marked preference for sweet (high calorific value) and salty foods (electrolyte balance). Their sensitivity to sour-tasting foods helps them to maintain optimal ruminal pH.[74]
96
+
97
+ Plants have low levels of sodium, and cattle have developed the capacity to seek out salt by taste and smell. If cattle become depleted of sodium salts, they show increased locomotion directed at searching for them. To assist the search, the olfactory and gustatory receptors able to detect minute amounts of sodium salts increase their sensitivity as biochemical disruption develops with sodium salt depletion.[80][81]
98
+
99
+ Cattle hearing ranges from 23 Hz to 35 kHz. Their frequency of best sensitivity is 8 kHz, and they have a lowest threshold of −21 dB (re 20 μN/m²), which means their hearing is more acute than that of horses (lowest threshold of 7 dB).[82] Sound localization acuity thresholds average 30°, meaning that cattle are less able to localise sounds than goats (18°), dogs (8°) and humans (0.8°).[83] Because cattle have a broad foveal field of view covering almost the entire horizon, they may not need very accurate locus information from their auditory systems to direct their gaze to a sound source.
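+
+ As a rough illustration of what these threshold figures mean in pressure terms, here is a minimal Python sketch; the 20 μN/m² reference equals the standard 20 μPa used for dB SPL, and the species values are simply the ones quoted above.
+
+     # Convert a hearing threshold in dB SPL to sound pressure in micropascals.
+     # Reference pressure: 20 uPa (= 20 uN/m^2), the standard dB SPL reference.
+     def threshold_pressure_upa(level_db, p_ref_upa=20.0):
+         return p_ref_upa * 10 ** (level_db / 20)
+
+     print(round(threshold_pressure_upa(-21), 1))  # cattle: ~1.8 uPa
+     print(round(threshold_pressure_upa(7), 1))    # horses: ~44.8 uPa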
100
+
101
+ Vocalizations are an important mode of communication amongst cattle and can provide information on the age, sex, dominance status and reproductive status of the caller. Calves can recognize their mothers by their vocalizations, and vocal behaviour may also play a role in indicating estrus and in competitive displays by bulls.[84]
102
+
103
+ Cattle have a range of odoriferous glands over their body, including interdigital, infraorbital, inguinal and sebaceous glands, indicating that olfaction probably plays a large role in their social life. Both the primary olfactory system, using the olfactory bulbs, and the secondary olfactory system, using the vomeronasal organ, are used.[85] The latter olfactory system is used in the flehmen response. There is evidence that when cattle are stressed, this can be recognised by other cattle via alarm substances in the urine.[70] The odour of dog faeces induces behavioural changes prior to cattle feeding, whereas the odours of urine from either stressed or non-stressed conspecifics and blood have no effect.[86]
104
+
105
+ In the laboratory, cattle can be trained to recognise conspecific individuals using olfaction only.[85]
106
+
107
+ In general, cattle use their sense of smell to "expand" on information detected by other sensory modalities. However, in the case of social and reproductive behaviours, olfaction is a key source of information.[74]
108
+
109
+ Cattle have tactile sensations detected mainly by mechanoreceptors, thermoreceptors and nociceptors in the skin and muzzle. These are used most frequently when cattle explore their environment.[74]
110
+
111
+ There is conflicting evidence for magnetoreception in cattle. One study reported that resting and grazing cattle tend to align their body axes in the geomagnetic north–south (N-S) direction.[87] In a follow-up study, cattle exposed to various magnetic fields directly beneath or in the vicinity of power lines trending in various magnetic directions exhibited distinct patterns of alignment.[88] However, in 2011, a group of Czech researchers reported their failed attempt to replicate the finding using Google Earth images.[89]
112
+
113
+ Under natural conditions, calves stay with their mother until weaning at 8 to 11 months. Heifer and bull calves are equally attached to their mothers in the first few months of life.[90] Cattle are considered to be "hider" type animals (the young calf initially lies concealed away from its dam rather than following her), but in the artificial environment of small calving pens, close proximity between cow and calf is maintained by the mother for the first three calvings, after which it is mediated by the calf. Primiparous dams show a higher incidence of abnormal maternal behavior.[91]
114
+
115
+ Beef calves reared on the range suckle an average of 5.0 times every 24 hours, with an average total of 46 minutes spent suckling. There is a diurnal rhythm in suckling activity, with peaks between 05:00–07:00, 10:00–13:00 and 17:00–21:00.[92]
116
+
117
+ Studies on the natural weaning of zebu cattle (Bos indicus) have shown that the cow weans her calves over a 2-week period, but after that, she continues to show strong affiliatory behavior with her offspring and preferentially chooses them for grooming and as grazing partners for at least 4–5 years.[93]
118
+
119
+ Semi-wild Highland cattle heifers first give birth at 2 or 3 years of age, and the timing of birth is synchronized with increases in natural food quality. Average calving interval is 391 days, and calving mortality within the first year of life is 5%.[94]
120
+
121
+ One study showed that over a 4-year period, dominance relationships within a herd of semi-wild Highland cattle were very firm. There were few overt aggressive conflicts, and the majority of disputes were settled by agonistic (non-aggressive, competitive) behaviors that involved no physical contact between opponents (e.g. threatening and spontaneous withdrawing). Such agonistic behavior reduces the risk of injury. Dominance status depended on age and sex, with older animals generally being dominant to young ones and males dominant to females. Young bulls gained superior dominance status over adult cows when they reached about 2 years of age.[94]
122
+
123
+ As with many animal dominance hierarchies, dominance-associated aggressiveness does not correlate with rank position, but is closely related to rank distance between individuals.[94]
124
+
125
+ Dominance is maintained in several ways. Cattle often engage in mock fights where they test each other's strength in a non-aggressive way. Licking is primarily performed by subordinates and received by dominant animals. Mounting is a playful behavior shown by calves of both sexes, by bulls and sometimes by cows in estrus;[95] however, unlike in some other species, it is not a dominance-related behavior.[94]
126
+
127
+ The horns of cattle are "honest signals" used in mate selection. Furthermore, horned cattle attempt to keep greater distances between themselves and have fewer physical interactions than hornless cattle. This leads to more stable social relationships.[96]
128
+
129
+ In calves, the frequency of agonistic behavior decreases as space allowance increases, but this does not occur for changes in group size. However, in adult cattle, the number of agonistic encounters increases as the group size increases.[97]
130
+
131
+ When grazing, cattle vary several aspects of their bite, i.e. tongue and jaw movements, depending on characteristics of the plant they are eating. Bite area decreases with the density of the plants but increases with their height. Bite area is determined by the sweep of the tongue; in one study observing 750-kilogram (1,650 lb) steers, bite area reached a maximum of approximately 170 cm2 (26 sq in). Bite depth increases with the height of the plants. By adjusting their behavior, cattle obtain heavier bites in swards that are tall and sparse compared with short, dense swards of equal mass/area.[98] Cattle adjust other aspects of their grazing behavior in relation to the available food; foraging velocity decreases and intake rate increases in areas of abundant palatable forage.[99]
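+
+ One way to see why tall, sparse swards can yield heavier bites is to model intake per bite as bite area × bite depth × sward bulk density. The sketch below is illustrative only; the numbers are assumptions, not values from the cited studies.
+
+     # Toy model of herbage intake per bite: mass = area x depth x bulk density.
+     def bite_mass_g(area_cm2, depth_cm, bulk_density_g_per_cm3):
+         return area_cm2 * depth_cm * bulk_density_g_per_cm3
+
+     tall_sparse = bite_mass_g(150, 6, 0.002)  # deep bites into a sparse sward
+     short_dense = bite_mass_g(120, 2, 0.006)  # shallow bites into a dense sward
+     print(round(tall_sparse, 2), round(short_dense, 2))  # 1.8 g vs 1.44 g per bite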
132
+
133
+ Cattle avoid grazing areas contaminated by the faeces of other cattle more strongly than they avoid areas contaminated by sheep,[100] but they do not avoid pasture contaminated by rabbit faeces.[101]
134
+
135
+ In the 24 April 2009 edition of the journal Science, a team of researchers led by the National Institutes of Health and the US Department of Agriculture reported having mapped the bovine genome.[102] The scientists found that cattle have about 22,000 genes, that 80% of their genes are shared with humans, and that they share about 1,000 genes with dogs and rodents which are not found in humans. Using this bovine "HapMap", researchers can track the differences between breeds that affect the quality of meat and milk yields.[103]
136
+
137
+ Behavioral traits of cattle can be as heritable as some production traits, and often, the two can be related.[104] The heritability of fear varies markedly in cattle, from low (0.1) to high (0.53); such high variation is also found in pigs and sheep, probably due to differences in the methods used.[105] The heritability of temperament (response to isolation during handling) has been calculated as 0.36, and that of habituation to handling as 0.46.[106] Rangeland assessments show that the heritability of aggressiveness in cattle is around 0.36.[107]
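+
+ As a minimal sketch of how such values can be estimated, the snippet below runs a midparent–offspring regression, where the slope itself estimates the narrow-sense heritability h². The scores are made-up illustrative data, not from the cited studies.
+
+     import numpy as np
+
+     # Made-up example data: midparent (mean of both parents) vs. offspring score.
+     midparent = np.array([4.2, 5.1, 3.8, 6.0, 5.5, 4.9, 3.5, 5.8])
+     offspring = np.array([4.5, 4.8, 4.2, 5.2, 5.0, 4.6, 4.2, 5.1])
+
+     slope, _ = np.polyfit(midparent, offspring, 1)
+     print(f"estimated h^2 = {slope:.2f}")  # ~0.41 with these numbers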
138
+
139
+ Quantitative trait loci (QTLs) have been found for a range of production and behavioral characteristics for both dairy and beef cattle.[108]
140
+
141
+ Cattle occupy a unique role in human history, having been domesticated since at least the early Neolithic age.
142
+
143
+ Archeozoological and genetic data indicate that cattle were first domesticated from wild aurochs (Bos primigenius) approximately 10,500 years ago. There were two major areas of domestication: one in the Near East (specifically central Anatolia, the Levant and Western Iran), giving rise to the taurine line, and a second in the area that is now Pakistan, resulting in the indicine line.[109] Modern mitochondrial DNA variation indicates the taurine line may have arisen from as few as 80 aurochs tamed in the upper reaches of Mesopotamia near the villages of Çayönü Tepesi in what is now southeastern Turkey and Dja'de el-Mughara in what is now northern Syria.[1]
144
+
145
+ Although European cattle are largely descended from the taurine lineage, gene flow from African cattle (partially of indicine origin) contributed substantial genomic components to both southern European cattle breeds and their New World descendants.[109] A study of 134 breeds showed that modern taurine cattle originated from Africa, Asia, North and South America, Australia, and Europe.[110] Some researchers have suggested that African taurine cattle are derived from a third independent domestication from North African aurochsen.[109]
146
+
147
+ As early as 9000 BC, both grain and cattle were used as money or for barter (the first grain remains found, considered to be evidence of pre-agricultural practice, date to 17,000 BC).[111][112][113] Some evidence also exists to suggest that other animals, such as camels and goats, may have been used as currency in some parts of the world.[114] One of the advantages of using cattle as currency was that it allowed the seller to set a fixed price; it even created standard pricing. For example, two chickens were traded for one cow, as cows were deemed more valuable than chickens.[112]
148
+
149
+ Cattle are often raised by allowing herds to graze on the grasses of large tracts of rangeland. Raising cattle in this manner allows the use of land that might be unsuitable for growing crops. The most common interactions with cattle involve daily feeding, cleaning and milking. Many routine husbandry practices involve ear tagging, dehorning, loading, medical operations, vaccinations and hoof care, as well as training and preparation for agricultural shows. Also, some cultural differences occur in working with cattle; the cattle husbandry of Fulani men rests on behavioural techniques, whereas in Europe, cattle are controlled primarily by physical means, such as fences.[115] Breeders use cattle husbandry to reduce M. bovis infection susceptibility by selective breeding and maintaining herd health to avoid concurrent disease.[116]
150
+
151
+ Cattle are farmed for beef, veal, dairy, and leather. They are less commonly used for conservation grazing, or simply to maintain grassland for wildlife, such as in Epping Forest, England. They are often used in some of the wildest places for livestock. Depending on the breed, cattle can survive on hill grazing, heaths, marshes, moors and semidesert. Modern cattle are more commercial than older breeds and, having become more specialized, are less versatile. For this reason, many smaller farmers still favor old breeds, such as the Jersey dairy breed.
152
+ In Portugal, Spain, southern France and some Latin American countries, bulls are used in the activity of bullfighting; Jallikattu in India is a bull-taming sport radically different from European bullfighting, in which humans are unarmed and the bulls are not killed. In many other countries bullfighting is illegal. Other activities such as bull riding are seen as part of a rodeo, especially in North America. Bull-leaping, a central ritual in Bronze Age Minoan culture (see Sacred Bull), still exists in southwestern France. In modern times, cattle are also entered into agricultural competitions. These competitions can involve live cattle or cattle carcases in hoof and hook events.
153
+
154
+ In terms of food intake by humans, consumption of cattle is less efficient than consumption of grain or vegetables with regard to land use; raising cattle on grain therefore consumes more land than other forms of agricultural production.[117] Nonetheless, cattle and other forms of domesticated animals can sometimes help to use plant resources in areas not easily amenable to other forms of agriculture. Bulls are sometimes used as guard animals.[118][119]
155
+
156
+ The average sleep time of a domestic cow is about 4 hours a day.[120] Cattle do have a stay apparatus,[121] but do not sleep standing up;[122] they lie down to sleep deeply.[123] In spite of the urban legend, cows cannot be tipped over by people pushing on them.[124]
157
+
158
+ The meat of adult cattle is known as beef, and that of calves is veal. Other animal parts are also used as food products, including blood, liver, kidney, heart and oxtail. Cattle also produce milk, and dairy cattle are specifically bred to produce the large quantities of milk processed and sold for human consumption. Cattle today are the basis of a multibillion-dollar industry worldwide. The international trade in beef for 2000 was over $30 billion and represented only 23% of world beef production.[125] Approximately 300 million cattle, including dairy cattle, are slaughtered each year for food.[126] The production of milk, which is also made into cheese, butter, yogurt, and other dairy products, is comparable in economic size to beef production, and provides an important part of the food supply for many of the world's people. Cattle hides, used for leather to make shoes, couches and clothing, are another widespread product. Cattle remain broadly used as draft animals in many developing countries, such as India. Cattle are also used in some sporting games, including rodeo and bullfighting.
159
+
160
+ (Production statistics table omitted; source: Helgi Library,[127] World Bank, FAOSTAT)
161
+
162
+ About half the world's meat comes from cattle.[128]
163
+
164
+ Certain breeds of cattle, such as the Holstein-Friesian, are used to produce milk,[129][130] which can be processed into dairy products such as cheese or yogurt. Dairy cattle are usually kept on specialized dairy farms designed for milk production. Most cows are milked twice per day, with the milk processed at a dairy, which may be onsite at the farm, or the milk may be shipped to a dairy plant for eventual sale of a dairy product.[131] For dairy cattle to continue producing milk, they must give birth to one calf per year. If the calf is male, it generally is slaughtered at a young age to produce veal.[132] A cow will continue to produce milk until about three weeks before she next gives birth.[130] Over the last fifty years, dairy farming has become more intensive to increase the yield of milk produced by each cow. The Holstein-Friesian is the breed of dairy cow most common in the UK, Europe and the United States. It has been bred selectively to produce the highest yields of milk of any cow. Around 22 litres per day is average in the UK.[129][130]
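+
+ For a rough sense of scale, the UK figure above can be turned into an annual yield per cow; the 305-day lactation length used below is an assumed typical value, not stated in the source.
+
+     LITRES_PER_DAY = 22     # UK average quoted above
+     LACTATION_DAYS = 305    # assumed typical lactation length
+
+     print(f"~{LITRES_PER_DAY * LACTATION_DAYS:,} litres per cow per year")  # ~6,710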
165
+
166
+ Most cattle are not kept solely for hides, which are usually a by-product of beef production. Hides are most commonly used for leather, which can be made into a variety of products, including shoes. In 2012, India was the world's largest producer of cattle hides.[133]
167
+
168
+ Feral cattle are defined as being "cattle that are not domesticated or cultivated".[134] Populations of feral cattle are known to exist in Australia, the United States of America,[135] Colombia, Argentina, Spain, France and many islands, including New Guinea, Hawaii, the Galapagos, the Juan Fernández Islands, Hispaniola (Dominican Republic and Haiti), Tristan da Cunha and Île Amsterdam,[136] two islands of Kuchinoshima[137] and Kazura Island next to Naru Island in Japan.[138][139] Chillingham cattle are sometimes regarded as a feral breed.[140] Aleutian wild cattle can be found on the Aleutian Islands.[141] The "Kinmen cattle", found mainly on Kinmen Island, Taiwan, are mostly domesticated, while a smaller portion of the population is believed to live in the wild as a result of accidental releases.[142]
169
+
170
+ Other notable examples include cattle in the vicinity of Hong Kong (in the Shing Mun Country Park,[143] in the Sai Kung District[144] and on Lantau Island[145] and Grass Island[146]), and semi-feral animals in Yangmingshan, Taiwan.[147]
171
+
+
180
+ Gut flora in cattle include methanogens that produce methane as a byproduct of enteric fermentation, which cattle belch out. The same mass of methane has a higher global warming potential than carbon dioxide.[151][152] Methane belching from cattle can be reduced with genetic selection, immunization, rumen defaunation, diet modification, decreased antibiotic use, and grazing management, among other measures.[153][154][155][156]
181
+
182
+ A report from the Food and Agriculture Organization (FAO) states that the livestock sector is "responsible for 18% of greenhouse gas emissions".[157] The IPCC estimates that cattle and other livestock emit about 80 to 93 megatonnes of methane per year,[158] accounting for an estimated 37% of anthropogenic methane emissions,[157] and additional methane is produced by anaerobic fermentation of manure in manure lagoons and other manure storage structures.[159] The net change in atmospheric methane content was recently about 1 megatonne per year,[160] and in some recent years there has been no increase in atmospheric methane content.[161] While cattle fed forage actually produce more methane than grain-fed cattle, the increase may be offset by the increased carbon recapture of pastures, which recapture three times the CO2 of cropland used for grain.[162]
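+
+ As a back-of-the-envelope check of what the IPCC range above implies in CO2-equivalents, the sketch below assumes a 100-year global warming potential of 28 for methane (the IPCC AR5 value without feedbacks; other reports use somewhat different factors).
+
+     GWP_100_CH4 = 28                # assumed CO2-equivalence factor for methane
+     for mt_ch4 in (80, 93):         # IPCC range quoted above, Mt CH4 per year
+         print(f"{mt_ch4} Mt CH4/yr ~ {mt_ch4 * GWP_100_CH4 / 1000:.1f} Gt CO2e/yr")
+     # -> roughly 2.2 to 2.6 Gt CO2e per year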
183
+
184
+ One of the cited changes suggested to reduce greenhouse gas emissions is intensification of the livestock industry, since intensification leads to less land being needed for a given level of production. This assertion is supported by studies of the US beef production system, suggesting practices prevailing in 2007 involved 8.6% less fossil fuel use, 16.3% less greenhouse gas emissions, 12.1% less water use, and 33.0% less land use, per unit mass of beef produced, than those used in 1977.[163] The analysis took into account not only practices in feedlots, but also feed production (with less feed needed in more intensive production systems), forage-based cow-calf operations and backgrounding before cattle enter a feedlot (with more beef produced per head of cattle from those sources, in more intensive systems), and beef from animals derived from the dairy industry.
185
+
186
+ The number of American cattle kept in confined feedlot conditions fluctuates. From 1 January 2002 through 1 January 2012, there was no significant overall upward or downward trend in the number of US cattle on feed for slaughter, which averaged about 14.046 million head over that period.[164][165] Previously, the number had increased; it was 12.453 million in 1985.[166] Cattle on feed (for slaughter) numbered about 14.121 million on 1 January 2012, i.e. about 15.5% of the estimated inventory of 90.8 million US cattle (including calves) on that date. Of the 14.121 million, cattle on feed in operations with 1000 head or more were estimated to number 11.9 million.[165] Cattle feedlots in this size category correspond to the regulatory definition of "large" concentrated animal feeding operations (CAFOs) for cattle other than mature dairy cows or veal calves.[167] Significant numbers of dairy as well as beef cattle are confined in CAFOs, defined as "new and existing operations which stable or confine and feed or maintain for a total of 45 days or more in any 12-month period more than the number of animals specified"[168] where "[c]rops, vegetation, forage growth, or post-harvest residues are not sustained in the normal growing season over any portion of the lot or facility."[169] CAFOs may be designated as small, medium or large. Such designation of cattle CAFOs is according to cattle type (mature dairy cows, veal calves or other) and cattle numbers, but medium CAFOs are so designated only if they meet certain discharge criteria, and small CAFOs are designated only on a case-by-case basis.[170]
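+
+ The size-designation rules described above can be sketched as a simple lookup. The head-count thresholds below follow the commonly cited EPA values for the cattle categories and should be treated as assumptions to be checked against the regulation; note that medium status additionally requires discharge criteria to be met, and small status is assigned case by case.
+
+     def cafo_size_category(cattle_type, head):
+         # Assumed thresholds: mature dairy 700/200, veal 1000/300, other 1000/300.
+         large_min = {"mature dairy": 700, "veal": 1000, "other": 1000}
+         medium_min = {"mature dairy": 200, "veal": 300, "other": 300}
+         if head >= large_min[cattle_type]:
+             return "large"
+         if head >= medium_min[cattle_type]:
+             return "medium (only if discharge criteria are met)"
+         return "small (case-by-case designation)"
+
+     print(cafo_size_category("other", 1200))        # large
+     print(cafo_size_category("mature dairy", 450))  # medium (only if ...)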
187
+
188
+ A CAFO that discharges pollutants is required to obtain a permit, which requires a plan to manage nutrient runoff, manure, chemicals, contaminants, and other wastewater pursuant to the US Clean Water Act.[171] The regulations involving CAFO permitting have been extensively litigated.[172]
189
+ Commonly, CAFO wastewater and manure nutrients are applied to land at agronomic rates for use by forages or crops, and it is often assumed that various constituents of wastewater and manure, e.g. organic contaminants and pathogens, will be retained, inactivated or degraded on the land with application at such rates; however, additional evidence is needed to test the reliability of such assumptions.[173] Concerns raised by opponents of CAFOs have included risks of contaminated water due to feedlot runoff,[174] soil erosion, human and animal exposure to toxic chemicals, development of antibiotic resistant bacteria and an increase in E. coli contamination.[175] While research suggests some of these impacts can be mitigated by developing wastewater treatment systems[174] and planting cover crops in larger setback zones,[176] the Union of Concerned Scientists released a report in 2008 concluding that CAFOs are generally unsustainable and externalize costs.[162]
191
+
192
+ An estimated 935,000 cattle operations were operating in the US in 2010.[177] In 2001, the US Environmental Protection Agency (EPA) tallied 5,990 cattle CAFOs then regulated, consisting of beef (2,200), dairy (3,150), heifer (620) and veal operations (20).[178] Since that time, the EPA has established CAFOs as an enforcement priority. EPA enforcement highlights for fiscal year 2010 indicated enforcement actions against 12 cattle CAFOs for violations that included failures to obtain a permit, failures to meet the terms of a permit, and discharges of contaminated water.[179]
193
+
194
+ Another concern is manure, which, if not well-managed, can lead to adverse environmental consequences. However, manure also is a valuable source of nutrients and organic matter when used as a fertilizer.[180] Manure was used as a fertilizer on about 6,400,000 hectares (15.8 million acres) of US cropland in 2006, with manure from cattle accounting for nearly 70% of manure applications to soybeans and about 80% or more of manure applications to corn, wheat, barley, oats and sorghum.[181] Substitution of manure for synthetic fertilizers in crop production can be environmentally significant, as between 43 and 88 megajoules of fossil fuel energy would be used per kg of nitrogen in the manufacture of synthetic nitrogenous fertilizers.[182]
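+
+ To put the 43–88 MJ/kg N figure in context, the sketch below estimates the fossil energy avoided per hectare when manure nitrogen substitutes for synthetic nitrogen; the application rate is an assumed illustrative value, not from the source.
+
+     N_RATE_KG_PER_HA = 150          # assumed agronomic nitrogen application rate
+     for mj_per_kg_n in (43, 88):    # manufacturing energy range quoted above
+         gj_per_ha = N_RATE_KG_PER_HA * mj_per_kg_n / 1000
+         print(f"~{gj_per_ha:.1f} GJ of fossil energy avoided per hectare")
+     # -> ~6.4 to 13.2 GJ/ha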
195
+
196
+ Grazing by cattle at low intensities can create a favourable environment for native herbs and forbs by mimicking the native grazers that they displaced; in many regions of the world, though, cattle are reducing biodiversity due to overgrazing.[183] A survey of refuge managers on 123 National Wildlife Refuges in the US tallied 86 species of wildlife considered positively affected and 82 considered negatively affected by refuge cattle grazing or haying.[184] Proper management of pastures, notably managed intensive rotational grazing and grazing at low intensities, can lead to less use of fossil fuel energy, increased recapture of carbon dioxide, fewer ammonia emissions into the atmosphere, reduced soil erosion, better air quality, and less water pollution.[162]
197
+
198
+ The veterinary discipline dealing with cattle and cattle diseases (bovine veterinary) is called buiatrics.[185] Veterinarians and professionals working on cattle health issues are pooled in the World Association for Buiatrics, founded in 1960.[186] National associations and affiliates also exist.[187]
199
+
200
+ Cattle diseases were at the center of attention in the 1980s and 1990s, when bovine spongiform encephalopathy (BSE), also known as mad cow disease, was of concern. Cattle can also catch and develop various other diseases, such as blackleg, bluetongue and foot rot.[188][189][190]
201
+
202
+ In most states, cattle health is not only a veterinary matter but also a public health issue, so public health and food safety standards and farming regulations directly affect the daily work of farmers who keep cattle.[191] However, such rules change frequently and are often debated. For instance, in the U.K., it was proposed in 2011 that milk from tuberculosis-infected cattle should be allowed to enter the food chain.[192] Internal food safety regulations might affect a country's trade policy as well. For example, the United States has reviewed its beef import rules according to the "mad cow standards", while Mexico forbids the entry of cattle that are older than 30 months.[193]
203
+
204
+ Cow urine is commonly used in India for internal medical purposes.[194][195] It is distilled and then consumed by patients seeking treatment for a wide variety of illnesses.[196] At present, no conclusive medical evidence shows this has any effect.[197] However, an Indian medicine containing cow urine has already obtained U.S. patents.[198]
205
+
206
+ Digital dermatitis is caused by bacteria of the genus Treponema. It differs from foot rot and can appear under unsanitary conditions such as poor hygiene or inadequate hoof trimming, among other causes. It primarily affects dairy cattle and has been known to lower the quantity of milk produced; however, the milk quality remains unaffected. Cattle are also susceptible to ringworm caused by the fungus Trichophyton verrucosum, a contagious skin disease which may be transferred to humans exposed to infected cows.[199]
207
+
208
+ Stocking density refers to the number of animals within a specified area. When stocking density reaches high levels, the behavioural needs of the animals may not be met. This can negatively influence health, welfare and production performance.[200]
209
+
210
+ Overstocking can negatively affect milk production and reproduction rates, two traits of great importance to dairy farmers. Overcrowding of cows in barns has been found to reduce feeding, resting and rumination.[200] Although they consume the same amount of dry matter within the span of a day, they consume the food at a much more rapid rate, and this behaviour in cows can lead to further complications.[201] The feeding behaviour of cows during their post-milking period is very important, as the longer animals eat after milking, the longer they remain standing, which causes less contamination of the teat ends.[202] This helps to reduce the risk of mastitis, as infection has been shown to increase the chances of embryonic loss.[203] Sufficient rest is important for dairy cows because resting blood flow increases by up to 50% during rest, and this is directly proportional to milk production.[202] Each additional hour of rest translates to 2 to 3.5 more pounds of milk per cow daily. Stocking densities above 120% have been shown to decrease the amount of time cows spend lying down.[204]
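+
+ The rest-to-milk relationship quoted above scales to herd level as simple arithmetic; the herd size and hours of rest lost below are assumed example values.
+
+     HERD_SIZE = 100
+     REST_HOURS_LOST = 1.5                  # e.g. from stocking above 120%
+     for lb_per_hour in (2.0, 3.5):         # per-cow range quoted above
+         print(f"~{HERD_SIZE * REST_HOURS_LOST * lb_per_hour:.0f} lb less milk per day")
+     # -> ~300 to 525 lb/day across the herd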
211
+
212
+ Cortisol is an important stress hormone; its plasma concentration increases greatly when an animal is subjected to high levels of stress.[205] Increased concentrations of cortisol have been associated with significant increases in gonadotrophin levels and lowered progestin levels. Reduction of stress is important for the reproductive state of cows, as increased gonadotrophin and lowered progesterone levels may impinge on ovulation and luteinization and reduce the chances of successful implantation.[206] A high cortisol level will also stimulate the degradation of fats and proteins, which may make it difficult for the animal to sustain a pregnancy if implantation is successful.[205]
213
+
214
+ Animal rights activists have criticized the treatment of cattle, claiming that common practices in cattle husbandry, slaughter, and entertainment unnecessarily cause cattle fear, stress, and pain. They advocate for abstaining from the consumption of cattle-related animal products (such as beef, cow's milk, veal, and leather) and cattle-based entertainment (such as rodeos and bullfighting) in order to end one's participation in the cruelty, claiming that the animals are only treated this way due to market forces and popular demand.
215
+
216
+ The following practices have been criticized by animal welfare and animal rights groups:[207] branding,[208] castration,[209] dehorning,[210] ear tagging,[211] nose ringing,[212] restraint,[213] tail docking,[214] the use of veal crates,[215] and cattle prods.[216] Further, the stress induced by high stocking density (such as in feedlots, auctions, and during transport) is known to negatively affect the health of cattle,[217][218] and has also been criticized.[219][220]
217
+
218
+ While the treatment of dairy cows is similar to that of beef cattle, especially towards the end of their life, it has faced additional criticism.[221] To produce milk from dairy cattle, most calves are separated from their mothers soon after birth and fed milk replacement in order to retain the cows' milk for human consumption.[222] Animal welfare advocates point out that this breaks the natural bond between the mother and her calf.[222] Unwanted male calves are either slaughtered at birth or sent for veal production.[222] To prolong lactation, dairy cows are almost permanently kept pregnant through artificial insemination.[222] Because of this, some feminists state that dairy production is based on the sexual exploitation of cows.[223][224] Although cows' natural life expectancy is about twenty years,[225] after about five years the cows' milk production has dropped; they are then considered "spent" and are sent to slaughter, which is considered cruel by some.[226][227]
219
+
220
+ While leather is often a by-product of slaughter, in some countries, such as India and Bangladesh, cows are raised primarily for their leather. These leather industries often make their cows walk long distances across borders to be killed in neighboring provinces and countries where cattle slaughter is legal. Some cows die along the long journey, and exhausted animals are often beaten and have chili and tobacco rubbed into their eyes to make them keep walking.[228] These practices have faced backlash from various animal rights groups.[229][230]
221
+
222
+ There has been a long history of protests against rodeos,[231] with the opposition saying that rodeos are unnecessary and cause stress, injury, and death to the animals.[232][233]
223
+
224
+ The running of the bulls faces opposition due to the stress and injuries incurred by the bulls during the event.[234]
225
+
226
+ Bullfighting is considered by many people, including animal rights and animal welfare advocates, to be a cruel, barbaric blood sport in which bulls are forced to suffer severe stress and a slow, torturous death.[235] A number of animal rights and animal welfare groups are involved in anti-bullfighting activities.[236]
227
+
228
+ Oxen (singular ox) are cattle trained as draft animals. Often they are adult, castrated males of larger breeds, although females and bulls are also used in some areas. Usually, an ox is over four years old due to the need for training and to allow it to grow to full size. Oxen are used for plowing, transport, hauling cargo, grain-grinding by trampling or by powering machines, irrigation by powering pumps, and wagon drawing. Oxen were commonly used to skid logs in forests, and sometimes still are, in low-impact, select-cut logging. Oxen are most often used in teams of two, paired, for light work such as carting, with additional pairs added when more power is required, sometimes up to a total of 20 or more.
229
+
230
+ Oxen can be trained to respond to a teamster's signals. These signals are given by verbal commands or by noise (whip cracks). Verbal commands vary according to dialect and local tradition. Oxen can pull harder and longer than horses. Though not as fast as horses, they are less prone to injury because they are more sure-footed.
231
+
232
+ Many oxen are used worldwide, especially in developing countries. About 11.3 million draft oxen are used in sub-Saharan Africa.[237] In India, the number of draft cattle in 1998 was estimated at 65.7 million head.[238] About half the world's crop production is thought to depend on land preparation (such as plowing) made possible by animal traction.[239]
233
+
234
+ The cow is mentioned often in the Quran. The second and longest surah of the Quran is named Al-Baqara ("The Cow"). Out of the 286 verses of the surah, seven mention cows (Al Baqarah 67–73).[240][241] The name of the surah derives from this passage in which Moses orders his people to sacrifice a cow in order to resurrect a man murdered by an unknown person.[242]
235
+
236
+ Cattle are venerated within the Hindu religion of India. In the Vedic period they were a symbol of plenty[243]:130 and were frequently slaughtered. In later times they gradually acquired their present status. According to the Mahabharata, they are to be treated with the same respect 'as one's mother'.[244] In the middle of the first millennium, the consumption of beef began to be disfavoured by lawgivers.[243]:144 Although there have never been any cow-goddesses or temples dedicated to them,[243]:146 cows appear in numerous stories from the Vedas and Puranas. The deity Krishna was brought up in a family of cowherders, and given the name Govinda (protector of the cows). Also, Shiva is traditionally said to ride on the back of a bull named Nandi.
237
+
238
+ Milk and milk products were used in Vedic rituals.[243]:130 In the post-Vedic period, products of the cow—milk, curd, ghee, but also cow dung and urine (gomutra), or the combination of these five (panchagavya)—began to assume an increasingly important role in ritual purification and expiation.[243]:130–131
239
+
240
+ Veneration of the cow has become a symbol of the identity of Hindus as a community,[243]:20 especially since the end of the 19th century. Slaughter of cows (including oxen, bulls and calves) is forbidden by law in several states of the Indian Union. McDonald's outlets in India do not serve any beef burgers. In Maharaja Ranjit Singh's empire of the early 19th century, the killing of a cow was punishable by death.[245]
241
+
242
+ Cattle are typically represented in heraldry by the bull.
243
+
244
+ Arms of the Azores
245
+
246
+ Arms of Mecklenburg region, Germany
247
+
248
+ Arms of Turin, Italy
249
+
250
+ Arms of Kaunas, Lithuania
251
+
252
+ Arms of Bielsk Podlaski, Poland
253
+
254
+ Arms of Ciołek, Poland
255
+
256
+ Arms of Turek, Poland
257
+
258
+ For 2013, the FAO estimated global cattle numbers at 1.47 billion.[249] Regionally, the FAO estimate for 2013 includes: Asia 497 million; South America 350 million; Africa 307 million; Europe 122 million; North America 102 million; Central America 47 million; Oceania 40 million; and Caribbean 9 million.
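+
+ The regional figures quoted above are consistent with the 1.47 billion global estimate; a quick arithmetic check:
+
+     regions_millions = {
+         "Asia": 497, "South America": 350, "Africa": 307, "Europe": 122,
+         "North America": 102, "Central America": 47, "Oceania": 40, "Caribbean": 9,
+     }
+     total = sum(regions_millions.values())
+     print(total, "million head ~", round(total / 1000, 2), "billion")  # 1474 ~ 1.47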
259
+
260
+ Didactic model of a bovine
261
+
262
+ Bovine anatomical model
263
+
264
+ Didactic model of a bovine muscular system
265
+
en/5895.html.txt ADDED
@@ -0,0 +1,163 @@
1
+
2
+
3
+
4
+
5
+
6
+
7
+ In mammals, the vagina is the elastic, muscular part of the female genital tract. In humans, it extends from the vulva to the cervix. The outer vaginal opening is normally partly covered by a membrane called the hymen. At the deep end, the cervix (neck of the uterus) bulges into the vagina. The vagina allows for sexual intercourse and birth. It also channels menstrual flow (menses), which occurs in humans and closely related primates as part of the monthly menstrual cycle.
8
+
9
+ Although research on the vagina is especially lacking for different animals, its location, structure and size are documented as varying among species. Female mammals usually have two external openings in the vulva, the urethral opening for the urinary tract and the vaginal opening for the genital tract. This is different from male mammals, who usually have a single urethral opening for both urination and reproduction. The vaginal opening is much larger than the nearby urethral opening, and both are protected by the labia in humans. In amphibians, birds, reptiles and monotremes, the cloaca is the single external opening for the gastrointestinal tract, the urinary, and reproductive tracts.
10
+
11
+ To accommodate smoother penetration of the vagina during sexual intercourse or other sexual activity, vaginal moisture increases during sexual arousal in human females and other female mammals. This increase in moisture provides vaginal lubrication, which reduces friction. The texture of the vaginal walls creates friction for the penis during sexual intercourse and stimulates it toward ejaculation, enabling fertilization. Along with pleasure and bonding, women's sexual behavior with others (which can include heterosexual or lesbian sexual activity) can result in sexually transmitted infections (STIs), the risk of which can be reduced by recommended safe sex practices. Other health issues may also affect the human vagina.
12
+
13
+ The vagina and vulva have evoked strong reactions in societies throughout history, including negative perceptions and language, cultural taboos, and their use as symbols for female sexuality, spirituality, or regeneration of life. In common speech, the word vagina is often used to refer to the vulva or to the female genitals in general. By its dictionary and anatomical definitions, however, vagina refers exclusively to the specific internal structure, and understanding the distinction can improve knowledge of the female genitalia and aid in healthcare communication.
14
+
15
+ The term vagina is from Latin meaning "sheath" or "scabbard"; the plural of vagina is either vaginae, or vaginas.[1] The vagina may also be referred to as "the birth canal" in the context of pregnancy and childbirth.[2][3] Although by its dictionary and anatomical definitions, the term vagina refers exclusively to the specific internal structure, it is colloquially used to refer to the vulva or to both the vagina and vulva.[4][5]
16
+
17
+ Using the term vagina to mean "vulva" can cause medical or legal confusion; for example, one person's interpretation of its location might not match another's.[4][6] Medically, the vagina is the canal between the hymen (or remnants of the hymen) and the cervix, while, legally, it begins at the vulva (between the labia).[4] The imprecise use of the term vagina may reflect that less thought has gone into the anatomy of the female genitals than into that of male genitals, and this may have contributed to an absence of correct vocabulary for the external female genitalia among both the general public and health professionals. Because of this, and because a better understanding of female genitalia can help combat sexual and psychological harm with regard to female development, researchers endorse correct terminology for the vulva.[6][7][8]
18
+
19
+ The human vagina is an elastic, muscular canal that extends from the vulva to the cervix.[9][10] The opening of the vagina lies in the urogenital triangle. The urogenital triangle is the front triangle of the perineum and also consists of the urethral opening and associated parts of the external genitalia.[11] The vaginal canal travels upwards and backwards, between the urethra at the front, and the rectum at the back. Near the upper vagina, the cervix protrudes into the vagina on its front surface at approximately a 90 degree angle.[12] The vaginal and urethral openings are protected by the labia.[13]
20
+
21
+ When not sexually aroused, the vagina is a collapsed tube, with the front and back walls placed together. The lateral walls, especially their middle area, are relatively more rigid. Because of this, the collapsed vagina has an H-shaped cross section.[10][14] Behind, the inner vagina is separated from the rectum by the recto-uterine pouch, the middle vagina by loose connective tissue, and the lower vagina by the perineal body.[15] Where the vaginal lumen surrounds the cervix of the uterus, it is divided into four continuous regions (vaginal fornices); these are the anterior, posterior, right lateral, and left lateral fornices.[9][10] The posterior fornix is deeper than the anterior fornix.[10]
22
+
23
+ Supporting the vagina are its upper, middle, and lower third muscles and ligaments. The upper third are the levator ani muscles, and the transcervical, pubocervical, and sacrocervical ligaments.[9][16] It is supported by the upper portions of the cardinal ligaments and the parametrium.[17] The middle third of the vagina involves the urogenital diaphragm.[9] It is supported by the levator ani muscles and the lower portion of the cardinal ligaments.[17] The lower third is supported by the perineal body,[9][18] or the urogenital and pelvic diaphragms.[19] The lower third may also be described as being supported by the perineal body and the pubovaginal part of the levator ani muscle.[16]
24
+
25
+ The vaginal opening is at the posterior end of the vulval vestibule, behind the urethral opening. The opening to the vagina is normally obscured by the labia minora (vaginal lips), but may be exposed after vaginal delivery.[10]
26
+
27
+ The hymen is a membrane of tissue that surrounds or partially covers the vaginal opening.[10] The effects of intercourse and childbirth on the hymen are variable. Where it is broken, it may completely disappear or remnants known as carunculae myrtiformes may persist. Otherwise, being very elastic, it may return to its normal position.[20] Additionally, the hymen may be lacerated by disease, injury, medical examination, masturbation or physical exercise. For these reasons, virginity cannot be definitively determined by examining the hymen.[20][21]
28
+
29
+ The length of the vagina varies among women of child-bearing age. Because of the presence of the cervix in the front wall of the vagina, there is a difference in length between the front wall, approximately 7.5 cm (2.5 to 3 in) long, and the back wall, approximately 9 cm (3.5 in) long.[10][22] During sexual arousal, the vagina expands both in length and width. If a woman stands upright, the vaginal canal points in an upward-backward direction and forms an angle of approximately 45 degrees with the uterus.[10][18] The vaginal opening and hymen also vary in size; in children, although the hymen commonly appears crescent-shaped, many shapes are possible.[10][23]
30
+
31
+ The vaginal plate is the precursor to the vagina.[24] During development, the vaginal plate begins to grow where the fused ends of the paramesonephric ducts (Müllerian ducts) enter the back wall of the urogenital sinus as the sinus tubercle. As the plate grows, it significantly separates the cervix and the urogenital sinus; eventually, the central cells of the plate break down to form the vaginal lumen.[24] This usually occurs by the twentieth to twenty-fourth week of development. If the lumen does not form, or is incomplete, membranes known as vaginal septae can form across or around the tract, causing obstruction of the outflow tract later in life.[24]
32
+
33
+ During sexual differentiation, without testosterone, the urogenital sinus persists as the vestibule of the vagina. The two urogenital folds of the genital tubercle form the labia minora, and the labioscrotal swellings enlarge to form the labia majora.[25][26]
34
+
35
+ There are conflicting views on the embryologic origin of the vagina. The majority view is Koff's 1933 description, which posits that the upper two-thirds of the vagina originate from the caudal part of the Müllerian duct, while the lower part of the vagina develops from the urogenital sinus.[27][28] Other views are Bulmer's 1957 description that the vaginal epithelium derives solely from the urogenital sinus epithelium,[29] and Witschi's 1970 research, which reexamined Koff's description and concluded that the sinovaginal bulbs are the same as the lower portions of the Wolffian ducts.[28][30] Witschi's view is supported by research by Acién et al., Bok and Drews.[28][30] Robboy et al. reviewed Koff and Bulmer's theories, and support Bulmer's description in light of their own research.[29] The debates stem from the complexity of the interrelated tissues and the absence of an animal model that matches human vaginal development.[29][31] Because of this, study of human vaginal development is ongoing and may help resolve the conflicting data.[28]
36
+
37
+ The vaginal wall from the lumen outwards consists firstly of a mucosa of stratified squamous epithelium that is not keratinized, with a lamina propria (a thin layer of connective tissue) underneath it. Secondly, there is a layer of smooth muscle with bundles of circular fibers internal to longitudinal fibers (those that run lengthwise). Lastly, is an outer layer of connective tissue called the adventitia. Some texts list four layers by counting the two sublayers of the mucosa (epithelium and lamina propria) separately.[32][33]
38
+
39
+ The smooth muscular layer within the vagina has a weak contractive force that can create some pressure in the lumen of the vagina; much stronger contractive force, such as during childbirth, comes from muscles in the pelvic floor that are attached to the adventitia around the vagina.[34]
40
+
41
+ The lamina propria is rich in blood vessels and lymphatic channels. The muscular layer is composed of smooth muscle fibers, with an outer layer of longitudinal muscle, an inner layer of circular muscle, and oblique muscle fibers between. The outer layer, the adventitia, is a thin dense layer of connective tissue and it blends with loose connective tissue containing blood vessels, lymphatic vessels and nerve fibers that are between pelvic organs.[12][33][22] The vaginal mucosa is absent of glands. It forms folds (transverse ridges or rugae), which are more prominent in the outer third of the vagina; their function is to provide the vagina with increased surface area for extension and stretching.[9][10]
42
+
43
+ The epithelium of the ectocervix (the portion the uterine cervix extending into the vagina) is an extension of, and shares a border with, the vaginal epithelium.[35] The vaginal epithelium is made up of layers of cells, including the basal cells, the parabasal cells, the superficial squamous flat cells, and the intermediate cells.[36] The basal layer of the epithelium is the most mitotically active and reproduces new cells.[37] The superficial cells shed continuously and basal cells replace them.[10][38][39] Estrogen induces the intermediate and superficial cells to fill with glycogen.[39][40] Cells from the lower basal layer transition from active metabolic activity to death (apoptosis). In these mid-layers of the epithelia, the cells begin to lose their mitochondria and other organelles.[37][41] The cells retain a usually high level of glycogen compared to other epithelial tissue in the body.[37]
44
+
45
+ Under the influence of maternal estrogen, the vagina of a newborn is lined by thick stratified squamous epithelium (or mucosa) for two to four weeks after birth. Between then and puberty, the epithelium remains thin, with only a few layers of cuboidal cells without glycogen.[39][42] The epithelium also has few rugae and is red in color before puberty.[4] When puberty begins, the mucosa thickens and again becomes stratified squamous epithelium with glycogen-containing cells, under the influence of the girl's rising estrogen levels.[39] Finally, the epithelium thins out from menopause onward and eventually ceases to contain glycogen, because of the lack of estrogen.[10][38][43]
46
+
47
+ Flattened squamous cells are more resistant to both abrasion and infection.[42] The permeability of the epithelium allows for an effective response from the immune system since antibodies and other immune components can easily reach the surface.[44] The vaginal epithelium differs from the similar tissue of the skin. The epidermis of the skin is relatively resistant to water because it contains high levels of lipids. The vaginal epithelium contains lower levels of lipids. This allows the passage of water and water-soluble substances through the tissue.[44]
48
+
49
+ Keratinization happens when the epithelium is exposed to the dry external atmosphere.[10] In abnormal circumstances, such as in pelvic organ prolapse, the mucosa may be exposed to air, becoming dry and keratinized.[45]
50
+
51
+ Blood is supplied to the vagina mainly via the vaginal artery, which emerges from a branch of the internal iliac artery or the uterine artery.[9][46] The vaginal arteries anastomose (are joined) along the side of the vagina with the cervical branch of the uterine artery; this forms the azygos artery,[46] which lies on the midline of the anterior and posterior vagina.[15] Other arteries which supply the vagina include the middle rectal artery and the internal pudendal artery,[10] all branches of the internal iliac artery.[15] Three groups of lymphatic vessels accompany these arteries: the upper group accompanies the vaginal branches of the uterine artery; a middle group accompanies the vaginal arteries; and the lower group, draining lymph from the area outside the hymen, drains to the inguinal lymph nodes.[15][47] Ninety-five percent of the lymphatic channels of the vagina are within 3 mm of the surface of the vagina.[48]
52
+
53
+ Two main veins drain blood from the vagina, one on the left and one on the right. These form a network of smaller veins, the vaginal venous plexus, on the sides of the vagina, connecting with similar venous plexuses of the uterus, bladder, and rectum. These ultimately drain into the internal iliac veins.[15]
54
+
55
+ The nerve supply of the upper vagina is provided by the sympathetic and parasympathetic areas of the pelvic plexus. The lower vagina is supplied by the pudendal nerve.[10][15]
56
+
57
+ Vaginal secretions are primarily from the uterus, cervix, and vaginal epithelium in addition to minuscule vaginal lubrication from the Bartholin's glands upon sexual arousal.[10] It takes little vaginal secretion to make the vagina moist; secretions may increase during sexual arousal, the middle of or a little prior to menstruation, or during pregnancy.[10] Menstruation (also known as a "period" or "monthly") is the regular discharge of blood and mucosal tissue (known as menses) from the inner lining of the uterus through the vagina.[49] The vaginal mucous membrane varies in thickness and composition during the menstrual cycle,[50] which is the regular, natural change that occurs in the female reproductive system (specifically the uterus and ovaries) that makes pregnancy possible.[51][52] Different hygiene products such as tampons, menstrual cups, and sanitary napkins are available to absorb or capture menstrual blood.[53]
58
+
59
+ The Bartholin's glands, located near the vaginal opening, were originally considered the primary source for vaginal lubrication, but further examination showed that they provide only a few drops of mucus.[54] Vaginal lubrication is mostly provided by plasma seepage known as transudate from the vaginal walls. This initially forms as sweat-like droplets, and is caused by increased fluid pressure in the tissue of the vagina (vasocongestion), resulting in the release of plasma as transudate from the capillaries through the vaginal epithelium.[54][55][56]
60
+
61
+ Before and during ovulation, the mucus glands within the cervix secrete different variations of mucus, which provides an alkaline, fertile environment in the vaginal canal that is favorable to the survival of sperm.[57] Following menopause, vaginal lubrication naturally decreases.[58]
62
+
63
+ Nerve endings in the vagina can provide pleasurable sensations when the vagina is stimulated during sexual activity. Women may derive pleasure from one part of the vagina, or from a feeling of closeness and fullness during vaginal penetration.[59] Because the vagina is not rich in nerve endings, women often do not receive sufficient sexual stimulation, or orgasm, solely from vaginal penetration.[59][60][61] Although the literature commonly cites a greater concentration of nerve endings and therefore greater sensitivity near the vaginal entrance (the outer one-third or lower third),[60][61][62] some scientific examinations of vaginal wall innervation indicate no single area with a greater density of nerve endings.[63][64] Other research indicates that only some women have a greater density of nerve endings in the anterior vaginal wall.[63][65] Because of the fewer nerve endings in the vagina, childbirth pain is significantly more tolerable.[61][66][67]
64
+
65
+ Pleasure can be derived from the vagina in a variety of ways. In addition to penile penetration, pleasure can come from masturbation, fingering, oral sex (cunnilingus), or specific sex positions (such as the missionary position or the spoons sex position).[68] Heterosexual couples may engage in cunnilingus or fingering as forms of foreplay to incite sexual arousal or as accompanying acts,[69][70] or as a type of birth control, or to preserve virginity.[71][72] Less commonly, they may use non-penile-vaginal sexual acts as a primary means of sexual pleasure.[70] By contrast, lesbians and other women who have sex with women commonly engage in cunnilingus or fingering as main forms of sexual activity.[73][74] Some women and couples use sex toys, such as a vibrator or dildo, for vaginal pleasure.[75] The Kama Sutra – an ancient Hindu text written by Vātsyāyana, which includes a number of sexual positions – may also be used to increase sexual pleasure,[76] with special emphasis on female sexual satisfaction.[77]
+
+ Most women require direct stimulation of the clitoris to orgasm.[60][61] The clitoris plays a part in vaginal stimulation. It is a sex organ of multiplanar structure containing an abundance of nerve endings, with a broad attachment to the pubic arch and extensive supporting tissue to the labia. Research indicates that it forms a tissue cluster with the vagina. This tissue is perhaps more extensive in some women than in others, which may contribute to orgasms experienced vaginally.[60][78][79]
+
+ During sexual arousal, and particularly the stimulation of the clitoris, the walls of the vagina lubricate. This begins after ten to thirty seconds of sexual arousal, and increases in amount the longer the woman is aroused.[80] It reduces friction or injury that can be caused by insertion of the penis into the vagina or other penetration of the vagina during sexual activity. The vagina lengthens during arousal, and can continue to lengthen in response to pressure; as the woman becomes fully aroused, the vagina expands in length and width, while the cervix retracts.[80][81] With the upper two-thirds of the vagina expanding and lengthening, the uterus rises into the greater pelvis, and the cervix is elevated above the vaginal floor, resulting in tenting of the mid-vaginal plane.[80] This is known as the tenting or ballooning effect.[82] The elastic walls of the vagina stretch or contract, with support from the pelvic muscles, to wrap around the inserted penis (or other object);[62] this creates friction for the penis and helps to cause a man to experience orgasm and ejaculation, which in turn enables fertilization.[83]
+
+ An area in the vagina that may be an erogenous zone is the G-spot. It is typically defined as being located at the anterior wall of the vagina, a couple or few inches in from the entrance, and some women experience intense pleasure, and sometimes an orgasm, if this area is stimulated during sexual activity.[63][65] A G-spot orgasm may be responsible for female ejaculation, leading some doctors and researchers to believe that G-spot pleasure comes from the Skene's glands, a female homologue of the prostate, rather than any particular spot on the vaginal wall; other researchers consider the connection between the Skene's glands and the G-spot area to be weak.[63][64][65] The G-spot's existence, and its status as a distinct structure, is still under dispute because reports of its location can vary from woman to woman, it appears to be nonexistent in some women, and it is hypothesized to be an extension of the clitoris and therefore the reason for orgasms experienced vaginally.[63][66][79]
+
+ The vagina is the birth canal for the delivery of a baby. When labor (a physiological process preceding delivery) nears, several signs may occur, including vaginal discharge and the rupture of membranes (water breaking), which can result in a gush of amniotic fluid[84] or an irregular or small stream of fluid from the vagina.[85][86] Water breaking most commonly happens during labor; however, it can occur before labor (known as premature rupture of membranes), which happens in 10% of cases.[85][87] Braxton Hicks contractions are also a sign of nearing labor, but not all women notice them.[84] Among women giving birth for the first time, Braxton Hicks contractions are often mistaken for actual contractions, and they are usually very strong in the days leading up to labor.[88]
+
+ As the body prepares for childbirth, the cervix softens, thins, moves forward to face the front, and begins to open. This allows the fetus to settle or "drop" into the pelvis.[84] As the fetus settles into the pelvis, pain from the sciatic nerves, increased vaginal discharge, and increased urinary frequency can occur. While these symptoms are likelier to happen after labor has begun for women who have given birth before, they may happen ten to fourteen days before labor in women experiencing labor for the first time.[84]
+
+ The fetus begins to lose the support of the cervix when contractions begin. With cervical dilation reaching a diameter of more than 10 cm (4 in) to accommodate the head of the fetus, the head moves from the uterus to the vagina.[84] The elasticity of the vagina allows it to stretch to many times its normal diameter in order to deliver the child.[22]
+
+ Vaginal births are more common, but if there is a risk of complications a caesarean section (C-section) may be performed.[89] Shortly after birth, the vaginal mucosa is edematous (has an abnormal accumulation of fluid) and thin, with few rugae. The mucosa thickens and rugae return in approximately three weeks, once the ovaries regain usual function and estrogen flow is restored. The vaginal opening gapes and is relaxed until it returns to its approximate pre-pregnant state six to eight weeks after delivery, known as the postpartum period; however, the vagina will continue to be larger in size than it was previously.[90]
+
+ After giving birth, there is a phase of vaginal discharge called lochia that can vary significantly in the amount of loss and its duration but can go on for up to six weeks.[91]
+
+ The vaginal flora is a complex ecosystem that changes throughout life, from birth to menopause. The vaginal microbiota resides in and on the outermost layer of the vaginal epithelium.[44] This microbiome consists of species and genera which typically do not cause symptoms or infections in women with normal immunity. The vaginal microbiome is dominated by Lactobacillus species.[92] These species metabolize glycogen, breaking it down into glucose, which the lactobacilli then ferment into lactic acid.[93] Under the influence of hormones, such as estrogen, progesterone and follicle-stimulating hormone (FSH), the vaginal ecosystem undergoes cyclic or periodic changes.[93]
+
+ Vaginal health can be assessed during a pelvic examination, along with the health of most of the organs of the female reproductive system.[94][95][96] Such exams may include the Pap test (or cervical smear). In the United States, Pap test screening is recommended starting around 21 years of age until the age of 65.[97] However, other countries do not recommend Pap testing in non-sexually active women.[98] Guidelines on frequency vary from every three to five years.[98][99][100] Routine pelvic examination of adult women who are not pregnant and lack symptoms may be more harmful than beneficial.[101] A normal finding during the pelvic exam of a pregnant woman is a bluish tinge to the vaginal wall.[94]
+
+ Pelvic exams are most often performed when there are unexplained symptoms of discharge, pain, unexpected bleeding or urinary problems.[94][102][103] During a pelvic exam, the vaginal opening is assessed for position, symmetry, presence of the hymen, and shape. The vagina is assessed internally by the examiner with gloved fingers, before the speculum is inserted, to note the presence of any weakness, lumps or nodules. Inflammation and discharge are noted if present. During this time, the Skene's and Bartholin's glands are palpated to identify abnormalities in these structures. After the digital examination of the vagina is complete, the speculum, an instrument to visualize internal structures, is carefully inserted to make the cervix visible.[94] Examination of the vagina may also be done during a cavity search.[104]
+
+ Lacerations or other injuries to the vagina can occur during sexual assault or other sexual abuse.[4][94] These can be tears, bruises, inflammation and abrasions. Sexual assault with objects can damage the vagina and X-ray examination may reveal the presence of foreign objects.[4] If consent is given, a pelvic examination is part of the assessment of sexual assault.[105] Pelvic exams are also performed during pregnancy, and women with high risk pregnancies have exams more often.[94][106]
+
+ Intravaginal administration is a route of administration where the medication is inserted into the vagina as a cream or tablet. Pharmacologically, this has the potential advantage of promoting therapeutic effects primarily in the vagina or nearby structures (such as the vaginal portion of the cervix) with limited systemic adverse effects compared to other routes of administration.[107][108] Medications used to ripen the cervix and induce labor are commonly administered via this route, as are estrogens, contraceptive agents, propranolol, and antifungals. Vaginal rings can also be used to deliver medication, including birth control in contraceptive vaginal rings. These are inserted into the vagina and provide continuous, low-dose and consistent drug levels in the vagina and throughout the body.[109][110]
+
+ Before the baby emerges from the womb, an injection for pain control during childbirth may be administered through the vaginal wall and near the pudendal nerve. Because the pudendal nerve carries motor and sensory fibers that innervate the pelvic muscles, a pudendal nerve block relieves birth pain. The medicine does not harm the child, and the procedure is without significant complications.[111]
+
+ Vaginal infections or diseases include yeast infection, vaginitis, sexually transmitted infections (STIs) and cancer. Lactobacillus gasseri and other Lactobacillus species in the vaginal flora provide some protection from infections by their secretion of bacteriocins and hydrogen peroxide.[112] The healthy vagina of a woman of child-bearing age is acidic, with a pH normally ranging between 3.8 and 4.5.[93] The low pH inhibits the growth of many strains of pathogenic microbes.[93] The acidic balance of the vagina may also be affected by pregnancy, menstruation, diabetes or other illness, birth control pills, certain antibiotics, poor diet, and stress (such as from a lack of sleep).[113][114] Any of these changes to the acidic balance of the vagina may contribute to yeast infection.[113] An elevated pH (greater than 4.5) of the vaginal fluid can be caused by an overgrowth of bacteria, as in bacterial vaginosis, or by the parasitic infection trichomoniasis, both of which have vaginitis as a symptom.[93][115] Vaginal flora populated by a number of different bacteria characteristic of bacterial vaginosis increases the risk of adverse pregnancy outcomes.[116] During a pelvic exam, samples of vaginal fluids may be taken to screen for sexually transmitted infections or other infections.[94][117]
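+
+ The pH thresholds in the paragraph above amount to a simple decision rule: readings between 3.8 and 4.5 fall in the normal acidic band for a woman of child-bearing age, while readings above 4.5 are the elevated values the text associates with bacterial vaginosis or trichomoniasis. A minimal sketch of that rule in Python (illustrative only, not a diagnostic tool; the function name and messages are assumptions):
+ ```python
+ def classify_vaginal_ph(ph: float) -> str:
+     """Toy decision rule built from the thresholds quoted in the text."""
+     if 3.8 <= ph <= 4.5:
+         return "within the normal acidic range for child-bearing age"
+     if ph > 4.5:
+         return "elevated; the text links this with bacterial vaginosis or trichomoniasis"
+     return "below the quoted normal range"
+
+ print(classify_vaginal_ph(4.2))  # within the normal acidic range ...
+ print(classify_vaginal_ph(5.1))  # elevated; the text links this with ...
+ ```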
+
+ Because the vagina is self-cleansing, it usually does not need special hygiene.[118] Clinicians generally discourage the practice of douching for maintaining vulvovaginal health.[118][119] Since the vaginal flora gives protection against disease, a disturbance of this balance may lead to infection and abnormal discharge.[118] Vaginal discharge may indicate a vaginal infection by color and odor, or the resulting symptoms of discharge, such as irritation or burning.[120][121] Abnormal vaginal discharge may be caused by STIs, diabetes, douches, fragranced soaps, bubble baths, birth control pills, yeast infection (commonly as a result of antibiotic use) or another form of vaginitis.[120] While vaginitis is an inflammation of the vagina, and is attributed to infection, hormonal issues, or irritants,[122][123] vaginismus is an involuntary tightening of the vagina muscles during vaginal penetration that is caused by a conditioned reflex or disease.[122] Vaginal discharge due to yeast infection is usually thick, creamy in color and odorless, while discharge due to bacterial vaginosis is gray-white in color, and discharge due to trichomoniasis is usually a gray color, thin in consistency, and has a fishy odor. Discharge in 25% of the trichomoniasis cases is yellow-green.[121]
+
+ HIV/AIDS, human papillomavirus (HPV), genital herpes and trichomoniasis are some STIs that may affect the vagina, and health sources recommend safe sex (or barrier method) practices to prevent the transmission of these and other STIs.[124][125] Safe sex commonly involves the use of condoms, and sometimes female condoms (which give women more control). Both types can help avert pregnancy by preventing semen from coming in contact with the vagina.[126][127] There is, however, little research on whether female condoms are as effective as male condoms at preventing STIs,[127] and they are slightly less effective than male condoms at preventing pregnancy, which may be because the female condom fits less tightly than the male condom or because it can slip into the vagina and spill semen.[128]
+
+ The vaginal lymph nodes often trap cancerous cells that originate in the vagina. These nodes can be assessed for the presence of disease. Selective surgical removal (rather than total and more invasive removal) of vaginal lymph nodes reduces the risk of complications that can accompany more radical surgeries. These selective nodes act as sentinel lymph nodes.[48] Instead of surgery, the lymph nodes of concern are sometimes treated with radiation therapy administered to the patient's pelvic, inguinal lymph nodes, or both.[129]
+
+ Vaginal cancer and vulvar cancer are very rare, and primarily affect older women.[130][131] Cervical cancer (which is relatively common) increases the risk of vaginal cancer,[132] which is why there is a significant chance for vaginal cancer to occur at the same time as, or after, cervical cancer. It may be that their causes are the same.[132][130][133] Cervical cancer may be prevented by pap smear screening and HPV vaccines, but HPV vaccines only cover HPV types 16 and 18, the cause of 70% of cervical cancers.[134][135] Some symptoms of cervical and vaginal cancer are dyspareunia, and abnormal vaginal bleeding or vaginal discharge, especially after sexual intercourse or menopause.[136][137] However, most cervical cancers are asymptomatic (present no symptoms).[136] Vaginal intracavity brachytherapy (VBT) is used to treat endometrial, vaginal and cervical cancer. An applicator is inserted into the vagina to allow the administration of radiation as close to the site of the cancer as possible.[138][139] Survival rates increase with VBT when compared to external beam radiation therapy.[138] By using the vagina to place the emitter as close to the cancerous growth as possible, the systemic effects of radiation therapy are reduced and cure rates for vaginal cancer are higher.[140] Research is unclear on whether treating cervical cancer with radiation therapy increases the risk of vaginal cancer.[132]
+
+ Age and hormone levels significantly correlate with the pH of the vagina.[141] Estrogen, glycogen and lactobacilli impact these levels.[142][143] At birth, the vagina is acidic with a pH of approximately 4.5,[141] and ceases to be acidic by three to six weeks of age,[144] becoming alkaline.[145] Average vaginal pH is 7.0 in pre-pubertal girls.[142] Although there is a high degree of variability in timing, girls who are approximately seven to twelve years of age will continue to have labial development as the hymen thickens and the vagina elongates to approximately 8 cm. The vaginal mucosa thickens and the vaginal pH becomes acidic again. Girls may also experience a thin, white vaginal discharge called leukorrhea.[145] The vaginal microbiota of adolescent girls aged 13 to 18 years is similar to women of reproductive age,[143] who have an average vaginal pH of 3.8–4.5,[93] but research is not as clear on whether this is the same for premenarcheal or perimenarcheal girls.[143] The vaginal pH during menopause is 6.5–7.0 (without hormone replacement therapy), or 4.5–5.0 with hormone replacement therapy.[143]
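+
+ Taken together, the pH figures quoted in this paragraph trace an arc across life stages. The summary below collects them into a small lookup table; the stage labels and helper function are illustrative assumptions, not terms from the cited sources:
+ ```python
+ # Approximate vaginal pH ranges by life stage, as quoted in the text.
+ VAGINAL_PH_BY_STAGE = {
+     "at birth": (4.5, 4.5),
+     "pre-pubertal": (7.0, 7.0),
+     "reproductive age": (3.8, 4.5),
+     "menopause, no hormone replacement": (6.5, 7.0),
+     "menopause, with hormone replacement": (4.5, 5.0),
+ }
+
+ def ph_range(stage: str) -> tuple[float, float]:
+     """Return the (low, high) pH range quoted for a life stage."""
+     return VAGINAL_PH_BY_STAGE[stage]
+
+ low, high = ph_range("reproductive age")  # (3.8, 4.5)
+ for stage, (lo, hi) in VAGINAL_PH_BY_STAGE.items():
+     span = f"{lo}" if lo == hi else f"{lo}-{hi}"
+     print(f"{stage}: pH {span}")
+ ```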
+
+ After menopause, the body produces less estrogen. This causes atrophic vaginitis (thinning and inflammation of the vaginal walls),[38][146] which can lead to vaginal itching, burning, bleeding, soreness, or vaginal dryness (a decrease in lubrication).[147] Vaginal dryness can cause discomfort on its own or discomfort or pain during sexual intercourse.[147][148] Hot flashes are also characteristic of menopause.[114][149] Menopause also affects the composition of vaginal support structures. The vascular structures become fewer with advancing age.[150] Specific collagens become altered in composition and ratios. It is thought that the weakening of the support structures of the vagina is due to the physiological changes in this connective tissue.[151]
+
+ Menopausal symptoms can be eased by estrogen-containing vaginal creams,[149] non-prescription, non-hormonal medications,[147] vaginal estrogen rings such as the Femring,[152] or other hormone replacement therapies,[149] but there are risks (including adverse effects) associated with hormone replacement therapy.[153][154] Vaginal creams and vaginal estrogen rings may not have the same risks as other hormone replacement treatments.[155] Hormone replacement therapy can treat vaginal dryness,[152] but a personal lubricant may be used to temporarily remedy vaginal dryness specifically for sexual intercourse.[148] Some women have an increase in sexual desire following menopause.[147] It may be that menopausal women who continue to engage in sexual activity regularly experience vaginal lubrication similar to levels in women who have not entered menopause, and can enjoy sexual intercourse fully.[147] They may have less vaginal atrophy and fewer problems concerning sexual intercourse.[156]
+
+ Vaginal changes that happen with aging and childbirth include mucosal redundancy, rounding of the posterior aspect of the vagina with shortening of the distance from the distal end of the anal canal to the vaginal opening, diastasis or disruption of the pubococcygeus muscles caused by poor repair of an episiotomy, and blebs that may protrude beyond the area of the vaginal opening.[157] Other vaginal changes related to aging and childbirth are stress urinary incontinence, rectocele, and cystocele.[157] Physical changes resulting from pregnancy, childbirth, and menopause often contribute to stress urinary incontinence. If a woman has weak pelvic floor muscle support and tissue damage from childbirth or pelvic surgery, a lack of estrogen can further weaken the pelvic muscles and contribute to stress urinary incontinence.[158] Pelvic organ prolapse, such as a rectocele or cystocele, is characterized by the descent of pelvic organs from their normal positions to impinge upon the vagina.[159][160] A reduction in estrogen does not cause rectocele, cystocele or uterine prolapse, but childbirth and weakness in pelvic support structures can.[156] Prolapse may also occur when the pelvic floor becomes injured during a hysterectomy, gynecological cancer treatment, or heavy lifting.[159][160] Pelvic floor exercises such as Kegel exercises can be used to strengthen the pelvic floor muscles,[161] preventing or arresting the progression of prolapse.[162] There is no evidence that doing Kegel exercises isotonically or with some form of weight is superior; there are greater risks with using weights since a foreign object is introduced into the vagina.[163]
+
+ During the second stage of labor, while the infant is being born, the vagina undergoes significant changes. A gush of blood from the vagina may be seen right before the baby is born. Lacerations to the vagina that can occur during birth vary in depth, severity and the amount of adjacent tissue involvement.[4][164] The laceration can be so extensive as to involve the rectum and anus. This event can be especially distressing to a new mother.[164][165] When this occurs, fecal incontinence develops and stool can leave through the vagina.[164] Close to 85% of spontaneous vaginal births develop some form of tearing, and of these, 60–70% require suturing.[166][167] Lacerations do not occur in all labors, however.[44]
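+
+ As a quick arithmetic check on the two figures above, and assuming the suturing percentage applies to the subset of births with tearing, the share of all spontaneous vaginal births expected to need suturing can be composed directly (an illustrative back-of-envelope estimate, not a figure from the cited sources):
+ ```python
+ # ~85% of spontaneous vaginal births tear; 60-70% of those tears need suturing.
+ p_tear = 0.85
+ for p_suture_given_tear in (0.60, 0.70):
+     p_suture = p_tear * p_suture_given_tear
+     print(f"{p_suture:.1%} of all such births")  # 51.0% and 59.5%
+ ```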
+
+ The vagina, including the vaginal opening, may be altered as a result of surgeries such as an episiotomy, vaginectomy, vaginoplasty or labiaplasty.[157][168] Those who undergo vaginoplasty are usually older and have given birth.[157] A thorough examination of the vagina before a vaginoplasty is standard, as is a referral to a urogynecologist to diagnose possible vaginal disorders.[157] With regard to labiaplasty, reduction of the labia minora is a quick procedure; complications are minor and rare, and can be corrected when they occur. Any scarring from the procedure is minimal, and long-term problems have not been identified.[157]
+
+ An episiotomy is a surgical incision made during the second stage of labor to enlarge the vaginal opening for the baby to pass through.[44][138] Although its routine use is no longer recommended,[169] and not having an episiotomy is found to have better results than having one,[44] it is one of the most common medical procedures performed on women. The incision is made through the skin, vaginal epithelium, subcutaneous fat, perineal body and superficial transverse perineal muscle, and extends from the vagina to the anus.[170][171] Episiotomies can be painful after delivery. Women often report pain during sexual intercourse up to three months after laceration repair or an episiotomy.[166][167] Some surgical techniques result in less pain than others.[166] The two types of episiotomies performed are the median incision and the medio-lateral incision. The median incision is a perpendicular cut between the vagina and the anus and is the most common.[44][172] The medio-lateral incision is made at an angle away from the vagina and is not as likely to tear through to the anus. The medio-lateral cut takes more time to heal than the median cut.[44]
+
+ Vaginectomy is surgery to remove all or part of the vagina, and is usually used to treat malignancy.[168] Removal of some or all of the reproductive organs and genitalia can result in damage to the nerves and leave behind scarring or adhesions.[173] Sexual function may also be impaired as a result, as in the case of some cervical cancer surgeries. These surgeries can impact pain, elasticity, vaginal lubrication and sexual arousal. This often resolves after one year but may take longer.[173]
+
+ Women, especially those who are older and have had multiple births, may choose to surgically correct vaginal laxity. This surgery has been described as vaginal tightening or rejuvenation.[174] While a woman may experience an improvement in self-image and sexual pleasure by undergoing vaginal tightening or rejuvenation,[174] there are risks associated with the procedures, including infection, narrowing of the vaginal opening, insufficient tightening, decreased sexual function (such as pain during sexual intercourse), and rectovaginal fistula. Women who undergo this procedure may unknowingly have a medical issue, such as a prolapse, and an attempt to correct this is also made during the surgery.[175]
+
+ Surgery on the vagina can be elective or cosmetic. Women who seek cosmetic surgery can have congenital conditions, physical discomfort or a wish to alter the appearance of their genitals. Data on average genital appearance and measurements are largely unavailable, which makes defining a successful outcome for such surgery difficult.[176] A number of sex reassignment surgeries are available to transgender people. Although not all intersex conditions require surgical treatment, some choose genital surgery to correct atypical anatomical conditions.[177]
+
+ Vaginal anomalies are defects that result in an abnormal or absent vagina.[178][179] The most common obstructive vaginal anomaly is an imperforate hymen, a condition in which the hymen obstructs menstrual flow or other vaginal secretions.[180][181] Another vaginal anomaly is a transverse vaginal septum, which partially or completely blocks the vaginal canal.[180] The precise cause of an obstruction must be determined before it is repaired, since corrective surgery differs depending on the cause.[182] In some cases, such as isolated vaginal agenesis, the external genitalia may appear normal.[183]
+
+ Abnormal openings known as fistulas can cause urine or feces to enter the vagina, resulting in incontinence.[184][185] The vagina is susceptible to fistula formation because of its proximity to the urinary and gastrointestinal tracts.[186] Specific causes are manifold and include obstructed labor, hysterectomy, malignancy, radiation, episiotomy, and bowel disorders.[187][188] A small number of vaginal fistulas are congenital.[189] Various surgical methods are employed to repair fistulas.[190][184] Untreated, fistulas can result in significant disability and have a profound impact on quality of life.[184]
+
+ Vaginal evisceration is a serious complication of a vaginal hysterectomy and occurs when the vaginal cuff ruptures, allowing the small intestine to protrude from the vagina.[105][191]
+
+ Cysts may also affect the vagina. Various types of vaginal cysts can develop on the surface of the vaginal epithelium or in deeper layers of the vagina, and can grow to be as large as 7 cm.[192][193] Often, they are an incidental finding during a routine pelvic examination.[194] Vaginal cysts can mimic other structures that protrude from the vagina, such as a rectocele or cystocele.[192] Cysts that can be present include Müllerian cysts, Gartner's duct cysts, and epidermoid cysts.[195][196] A vaginal cyst is most likely to develop in women between the ages of 30 and 40.[192] It is estimated that 1 out of 200 women has a vaginal cyst.[192][197] The Bartholin's cyst is of vulvar rather than vaginal origin,[198] but it presents as a lump at the vaginal opening.[199] It is more common in younger women and is usually without symptoms,[200] but it can cause pain if an abscess forms,[200] block the entrance to the vulval vestibule if large,[201] and impede walking or cause painful sexual intercourse.[200]
+
+ Various perceptions of the vagina have existed throughout history, including the belief that it is the center of sexual desire, a metaphor for life via birth, inferior to the penis, unappealing to sight or smell, or vulgar.[202][203][204] These views can largely be attributed to sex differences, and how they are interpreted. David Buss, an evolutionary psychologist, stated that because a penis is significantly larger than a clitoris and is readily visible while the vagina is not, and males urinate through the penis, boys are taught from childhood to touch their penises while girls are often taught that they should not touch their own genitalia, which implies that there is harm in doing so. Buss cited this as the reason many women are not as familiar with their genitalia, and noted that researchers assume these sex differences explain why boys learn to masturbate before girls and do so more often.[205]
+
+ The word vagina is commonly avoided in conversation,[206] and many people are confused about the vagina's anatomy and may be unaware that it is not used for urination.[207][208][209] This is exacerbated by phrases such as "boys have a penis, girls have a vagina", which causes children to think that girls have one orifice in the pelvic area.[208] Author Hilda Hutcherson stated, "Because many [women] have been conditioned since childhood through verbal and nonverbal cues to think of [their] genitals as ugly, smelly and unclean, [they] aren't able to fully enjoy intimate encounters" because of fear that their partner will dislike the sight, smell, or taste of their genitals. She argued that women, unlike men, did not have locker room experiences in school where they compared each other's genitals, which is one reason so many women wonder if their genitals are normal.[203] Scholar Catherine Blackledge stated that having a vagina meant she would typically be treated less well than her vagina-less counterparts and subject to inequalities (such as job inequality), which she categorized as being treated like a second-class citizen.[206]
+
+ Negative views of the vagina contrast with views that it is a powerful symbol of female sexuality, spirituality, or life. Author Denise Linn stated that the vagina "is a powerful symbol of womanliness, openness, acceptance, and receptivity. It is the inner valley spirit."[210] Sigmund Freud placed significant value on the vagina,[211] postulating the concept that vaginal orgasm is separate from clitoral orgasm, and that, upon reaching puberty, the proper response of mature women is a changeover to vaginal orgasms (meaning orgasms without any clitoral stimulation). This theory made many women feel inadequate, as the majority of women cannot achieve orgasm via vaginal intercourse alone.[212][213][214] Regarding religion, the vagina represents a powerful symbol as the yoni in Hinduism, and this may indicate the value that Hindu society has given female sexuality and the vagina's ability to deliver life.[215]
+
+ While, in ancient times, the vagina was often considered equivalent (homologous) to the penis, with anatomists Galen (129 AD – 200 AD) and Vesalius (1514–1564) regarding the organs as structurally the same except for the vagina being inverted, anatomical studies over later centuries showed the clitoris to be the penile equivalent.[78][216] Another perception of the vagina was that the release of vaginal fluids would cure or remedy a number of ailments; various methods were used over the centuries to release "female seed" (via vaginal lubrication or female ejaculation) as a treatment for suffocatio ex semine retento (suffocation of the womb, lit. 'suffocation from retained seed'), green sickness, and possibly for female hysteria. Reported methods for treatment included a midwife rubbing the walls of the vagina or insertion of the penis or penis-shaped objects into the vagina. Symptoms of the female hysteria diagnosis – a concept that is no longer recognized by medical authorities as a medical disorder – included faintness, nervousness, insomnia, fluid retention, heaviness in the abdomen, muscle spasm, shortness of breath, irritability, loss of appetite for food or sex, and a propensity for causing trouble.[217] It may be that women considered to be suffering from female hysteria would sometimes undergo "pelvic massage" – stimulation of the genitals by the doctor until the woman experienced "hysterical paroxysm" (i.e., orgasm). In this case, paroxysm was regarded as a medical treatment, and not a sexual release.[217]
+
+ The vagina and vulva have been given many vulgar names, three of which are cunt, twat, and pussy. Cunt is also used as a derogatory epithet referring to people of either sex. This usage is relatively recent, dating from the late nineteenth century.[218] Reflecting different national usages, cunt is described as "an unpleasant or stupid person" in the Compact Oxford English Dictionary,[219] whereas Merriam-Webster records a usage of the term as "usually disparaging and obscene: woman",[220] noting that it is used in the United States as "an offensive way to refer to a woman".[221] Random House defines it as "a despicable, contemptible or foolish man".[218] Some feminists of the 1970s sought to eliminate disparaging terms such as cunt.[222] Twat is widely used as a derogatory epithet, especially in British English, referring to a person considered obnoxious or stupid.[223][224] Pussy can indicate "cowardice or weakness", and "the human vulva or vagina" or by extension "sexual intercourse with a woman".[225] In contemporary English, use of the word pussy to refer to women is considered derogatory or demeaning, treating people as sexual objects.[226]
+
+ The vagina loquens, or "talking vagina", is a significant tradition in literature and art, dating back to the ancient folklore motifs of the "talking cunt".[227][228] These tales usually involve vaginas talking due to the effect of magic or charms, and often admitting to their lack of chastity.[227] Other folk tales relate the vagina as having teeth – vagina dentata (Latin for "toothed vagina"). These carry the implication that sexual intercourse might result in injury, emasculation, or castration for the man involved. These stories were frequently told as cautionary tales warning of the dangers of unknown women and to discourage rape.[229]
+
+ In 1966, the French artist Niki de Saint Phalle collaborated with Dadaist artist Jean Tinguely and Per Olof Ultvedt on a large sculpture installation entitled "hon-en katedral" (also spelled "Hon-en-Katedrall", which means "she-a cathedral") for Moderna Museet, in Stockholm, Sweden. The outer form is a giant, reclining sculpture of a woman which visitors can enter through a door-sized vaginal opening between her spread legs.[230]
+
+ The Vagina Monologues, a 1996 episodic play by Eve Ensler, has contributed to making female sexuality a topic of public discourse. It is made up of a varying number of monologues read by a number of women. Initially, Ensler performed every monologue herself, with subsequent performances featuring three actresses; later versions feature a different actress for every role. Each of the monologues deals with an aspect of the feminine experience, touching on matters such as sexual activity, love, rape, menstruation, female genital mutilation, masturbation, birth, orgasm, the various common names for the vagina, or simply the vagina as a physical aspect of the body. A recurring theme throughout the pieces is the vagina as a tool of female empowerment, and the ultimate embodiment of individuality.[231][232]
+
+ Societal views, influenced by tradition, a lack of knowledge on anatomy, or sexism, can significantly impact a person's decision to alter their own or another person's genitalia.[175][233] Women may want to alter their genitalia (vagina or vulva) because they believe that its appearance, such as the length of the labia minora covering the vaginal opening, is not normal, or because they desire a smaller vaginal opening or tighter vagina. Women may want to remain youthful in appearance and sexual function. These views are often influenced by the media,[175][234] including pornography,[234] and women can have low self-esteem as a result.[175] They may be embarrassed to be naked in front of a sexual partner and may insist on having sex with the lights off.[175] When modification surgery is performed purely for cosmetic reasons, it is often viewed poorly,[175] and some doctors have compared such surgeries to female genital mutilation (FGM).[234]
+
+ Female genital mutilation, also known as female circumcision or female genital cutting, is genital modification with no health benefits.[235][236] The most severe form is Type III FGM, which is infibulation and involves removing all or part of the labia and the vagina being closed up. A small hole is left for the passage of urine and menstrual blood, and the vagina is opened up for sexual intercourse and childbirth.[236]
+
+ Significant controversy surrounds female genital mutilation,[235][236] with the World Health Organization (WHO) and other health organizations campaigning against the procedures on behalf of human rights, stating that it is "a violation of the human rights of girls and women" and "reflects deep-rooted inequality between the sexes".[236] Female genital mutilation has existed at one point or another in almost all human civilizations,[237] most commonly to exert control over the sexual behavior, including masturbation, of girls and women.[236][237] It is carried out in several countries, especially in Africa, and to a lesser extent in other parts of the Middle East and Southeast Asia, on girls from a few days old to mid-adolescence, often to reduce sexual desire in an effort to preserve vaginal virginity.[235][236][237] Comfort Momoh stated that female genital mutilation may have been "practiced in ancient Egypt as a sign of distinction among the aristocracy"; there are reports that traces of infibulation are on Egyptian mummies.[237]
+
+ Custom and tradition are the most frequently cited reasons for the practice of female genital mutilation. Some cultures believe that female genital mutilation is part of a girl's initiation into adulthood and that not performing it can disrupt social and political cohesion.[236][237] In these societies, a girl is often not considered an adult unless she has undergone the procedure.[236]
+
+ The vagina is a structure of animals in which the female is internally fertilized, rather than fertilized by the traumatic insemination used by some invertebrates. The shape of the vagina varies among different animals. In placental mammals and marsupials, the vagina leads from the uterus to the exterior of the female body. Female marsupials have two lateral vaginas, which lead to separate uteri but open externally through the same orifice; a third canal, known as the median vagina, can be transitory or permanent and is used for birth.[238] The female spotted hyena does not have an external vaginal opening. Instead, the vagina exits through the clitoris, allowing the females to urinate, copulate and give birth through the clitoris.[239] The vagina of the female coyote contracts during copulation, forming a copulatory tie.[240]
+
+ Birds, monotremes, and some reptiles have a part of the oviduct that leads to the cloaca.[241][242] Chickens have a vaginal aperture that opens from the vertical apex of the cloaca. The vagina extends upward from the aperture and becomes the egg gland.[242] In some jawless fish, there is neither oviduct nor vagina and instead the egg travels directly through the body cavity (and is fertilised externally as in most fish and amphibians). In insects and other invertebrates, the vagina can be a part of the oviduct (see insect reproductive system).[243] Birds have a cloaca into which the urinary, reproductive tract (vagina) and gastrointestinal tract empty.[244] Females of some waterfowl species have developed vaginal structures called dead end sacs and clockwise coils to protect themselves from sexual coercion.[245]
+
+ A lack of research on the vagina and other female genitalia, especially across different animals, has stifled knowledge of female sexual anatomy.[246][247] One explanation for why male genitalia are studied more is that penises are significantly simpler to analyze than female genital cavities: male genitals usually protrude and are therefore easier to assess and measure, while female genitals are more often concealed and require more dissection, which in turn requires more time.[246] Another explanation is that a main function of the penis is to impregnate, while female genitals may alter shape upon interaction with male organs, especially as to benefit or hinder reproductive success.[246]
+
+ Non-human primates are optimal models for human biomedical research because humans and non-human primates share physiological characteristics as a result of evolution.[248] While menstruation is heavily associated with human females, who have the most pronounced menstruation, it is also typical of ape relatives and monkeys.[249][250] Female macaques menstruate, with a cycle length over the course of a lifetime that is comparable to that of female humans. Estrogens and progestogens in the menstrual cycles and during premenarche and postmenopause are also similar in female humans and macaques; however, only in macaques does keratinization of the epithelium occur during the follicular phase.[248] The vaginal pH of macaques also differs, with near-neutral to slightly alkaline median values, and is widely variable, which may be due to a lack of lactobacilli in its vaginal flora.[248] This is one reason why, although macaques are used for studying HIV transmission and testing microbicides,[248] animal models are not often used in the study of sexually transmitted infections such as trichomoniasis. Another is that the causes of such conditions are inextricably bound to humans' genetic makeup, making results from other species difficult to apply to humans.[251]
en/5896.html.txt ADDED
@@ -0,0 +1,51 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
+ The blood vessels are the components of the circulatory system that transport blood throughout the human body.[1] These vessels transport blood cells, nutrients, and oxygen to the tissues of the body. They also take waste and carbon dioxide away from the tissues. Blood vessels are needed to sustain life, because all of the body's tissues rely on their functionality.[2]
+
+ There are five types of blood vessels: the arteries, which carry the blood away from the heart; the arterioles; the capillaries, where the exchange of water and chemicals between the blood and the tissues occurs; the venules; and the veins, which carry blood from the capillaries back towards the heart.
+
+ The word vascular, meaning relating to the blood vessels, is derived from the Latin vas, meaning vessel. Some structures – such as cartilage, the epithelium, and the lens and cornea of the eye – do not contain blood vessels and are labeled avascular.
+
+ The arteries and veins have three layers: an inner tunica intima, a middle tunica media, and an outer tunica externa (also called the tunica adventitia). The middle layer is thicker in the arteries than it is in the veins.
+
+ Capillaries consist of a single layer of endothelial cells with a supporting subendothelium consisting of a basement membrane and connective tissue.
+
+ When blood vessels connect to form a region of diffuse vascular supply it is called an anastomosis. Anastomoses provide critical alternative routes for blood to flow in case of blockages.
+
+ Leg veins have valves which prevent backflow of the blood being pumped against gravity by the surrounding muscles.[3]
+
+ There are various kinds of blood vessels, roughly grouped as "arterial" and "venous" according to whether the blood in them flows away from (arterial) or toward (venous) the heart. The term "arterial blood" is nevertheless used to indicate blood high in oxygen, even though the pulmonary artery carries "venous blood" and the blood flowing in the pulmonary vein is rich in oxygen; this is because they carry blood to and from the lungs, respectively, to be oxygenated.
+
+ Blood vessels function to transport blood. In general, arteries and arterioles transport oxygenated blood from the lungs to the body and its organs, and veins and venules transport deoxygenated blood from the body to the lungs. Blood vessels also circulate blood throughout the circulatory system. Oxygen (bound to hemoglobin in red blood cells) is the most critical nutrient carried by the blood. In all arteries apart from the pulmonary artery, hemoglobin is highly saturated (95–100%) with oxygen. In all veins apart from the pulmonary vein, the saturation of hemoglobin is about 75%.[citation needed] (The values are reversed in the pulmonary circulation.) In addition to carrying oxygen, blood also carries hormones, waste products and nutrients for the cells of the body.
+
+ Blood vessels do not actively engage in the transport of blood (they have no appreciable peristalsis). Blood is propelled through arteries and arterioles by pressure generated by the heartbeat.[4] Blood vessels also transport the red blood cells that contain the oxygen necessary for daily activities. The number of red blood cells present in the vessels affects health. Hematocrit tests can be performed to calculate the proportion of red blood cells in the blood. Higher proportions may indicate conditions such as dehydration or heart disease, while lower proportions may indicate anemia or long-term blood loss.[5]
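+
+ Hematocrit is simply the fraction of whole-blood volume occupied by red blood cells. A minimal sketch of that calculation, assuming hypothetical example volumes and an illustrative reference band (real intervals vary by sex and laboratory):
+ ```python
+ def hematocrit(rbc_volume_ml: float, total_blood_volume_ml: float) -> float:
+     """Hematocrit: fraction of whole blood occupied by red blood cells."""
+     return rbc_volume_ml / total_blood_volume_ml
+
+ # Example: 45 ml of packed red cells in a 100 ml sample -> 0.45 (45%).
+ hct = hematocrit(45.0, 100.0)
+ LOW, HIGH = 0.36, 0.50  # illustrative band only, not clinical guidance
+ status = "low" if hct < LOW else "high" if hct > HIGH else "within band"
+ print(f"hematocrit = {hct:.0%} ({status})")  # hematocrit = 45% (within band)
+ ```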
+
+ Permeability of the endothelium is pivotal in the release of nutrients to the tissue. It is also increased in inflammation in response to histamine, prostaglandins and interleukins, which leads to most of the symptoms of inflammation (swelling, redness, warmth and pain).
+
+ Arteries—and veins to a degree—can regulate their inner diameter by contraction of the muscular layer. This changes the blood flow to downstream organs, and is determined by the autonomic nervous system. Vasodilation and vasoconstriction are also used antagonistically as methods of thermoregulation.
+
+ Blood vessels vary greatly in size, from a diameter of about 25 millimeters for the aorta to only 8 micrometers in the capillaries; this comes out to about a 3000-fold range.[6] Vasoconstriction is the constriction of blood vessels (narrowing, becoming smaller in cross-sectional area) by contraction of the vascular smooth muscle in the vessel walls. It is regulated by vasoconstrictors (agents that cause vasoconstriction). These include paracrine factors (e.g. prostaglandins), a number of hormones (e.g. vasopressin and angiotensin) and neurotransmitters (e.g. epinephrine) from the nervous system.
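+
+ The "3000-fold" figure follows directly from the two diameters quoted, once both are expressed in the same unit. A one-line check in Python:
+ ```python
+ aorta_m = 25e-3     # aorta diameter: 25 millimetres, in metres
+ capillary_m = 8e-6  # capillary diameter: 8 micrometres, in metres
+ print(aorta_m / capillary_m)  # 3125.0, i.e. roughly a 3000-fold range
+ ```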
+
+ Vasodilation is a similar process mediated by antagonistically acting mediators. The most prominent vasodilator is nitric oxide (termed endothelium-derived relaxing factor for this reason).
+
+ The circulatory system uses the channel of blood vessels to deliver blood to all parts of the body. This is a result of the left and right sides of the heart working together to allow blood to flow continuously to the lungs and other parts of the body. Oxygen-poor blood enters the right side of the heart through two large veins. Oxygen-rich blood from the lungs enters the left side of the heart through the pulmonary veins and is pumped into the aorta, from which it reaches the rest of the body. In the lungs, the capillaries allow the blood to receive oxygen through tiny air sacs; this is also the site where carbon dioxide exits the blood.[7]
+
+ The blood pressure in blood vessels is traditionally expressed in millimetres of mercury (1 mmHg = 133 Pa). In the arterial system, this is usually around 120 mmHg systolic (high pressure wave due to contraction of the heart) and 80 mmHg diastolic (low pressure wave). In contrast, pressures in the venous system are constant and rarely exceed 10 mmHg.
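+
+ Using the conversion factor quoted above (1 mmHg = 133 Pa; the exact value is closer to 133.322 Pa), the typical readings in this paragraph translate to pascals as follows. The function name is an illustrative choice:
+ ```python
+ MMHG_TO_PA = 133.0  # factor quoted in the text (exact: ~133.322 Pa/mmHg)
+
+ def mmhg_to_pa(pressure_mmhg: float) -> float:
+     """Convert a pressure from millimetres of mercury to pascals."""
+     return pressure_mmhg * MMHG_TO_PA
+
+ for label, mmhg in (("systolic", 120), ("diastolic", 80), ("venous", 10)):
+     print(f"{label}: {mmhg} mmHg = {mmhg_to_pa(mmhg):.0f} Pa")
+ # systolic: 120 mmHg = 15960 Pa
+ # diastolic: 80 mmHg = 10640 Pa
+ # venous: 10 mmHg = 1330 Pa
+ ```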
+
+ Vascular resistance occurs where the vessels away from the heart oppose the flow of blood. Resistance is an accumulation of three different factors: blood viscosity, blood vessel length, and vessel radius.[8]
+
+ Blood viscosity is the thickness of the blood and its resistance to flow as a result of the different components of the blood. Blood is 92% water by weight; the rest of blood is composed of protein, nutrients, electrolytes, wastes, and dissolved gases. Depending on the health of an individual, the blood viscosity can vary (e.g. anemia causes relatively lower concentrations of protein, while high blood pressure involves an increase in dissolved salts or lipids).[8]
+
+ Vessel length is the total length of the vessel measured as the distance away from the heart. As the total length of the vessel increases, the total resistance as a result of friction will increase.[8]
+
+ Vessel radius also affects the total resistance as a result of contact with the vessel wall. As the radius of the wall gets smaller, the proportion of the blood making contact with the wall will increase. The greater amount of contact with the wall will increase the total resistance against the blood flow.[9]
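+
+ The three factors above combine in the standard Hagen-Poiseuille relation for laminar flow, R = 8ηL/(πr⁴). The text does not state this formula, but it makes the radius dependence concrete: because the radius enters at the fourth power, halving it raises resistance sixteen-fold. A sketch with illustrative numbers:
+ ```python
+ import math
+
+ def vascular_resistance(viscosity_pa_s: float, length_m: float, radius_m: float) -> float:
+     """Hagen-Poiseuille resistance R = 8*eta*L / (pi * r**4) for laminar flow."""
+     return 8 * viscosity_pa_s * length_m / (math.pi * radius_m**4)
+
+ eta, L = 3.5e-3, 0.10  # blood viscosity ~3.5 mPa*s; a 10 cm vessel segment
+ r = 2e-3               # 2 mm radius
+ ratio = vascular_resistance(eta, L, r / 2) / vascular_resistance(eta, L, r)
+ print(round(ratio))  # 16: halving the radius multiplies resistance by 16
+ ```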
+
+ Blood vessels play a huge role in virtually every medical condition. Cancer, for example, cannot progress unless the tumor causes angiogenesis (formation of new blood vessels) to supply the malignant cells' metabolic demand. Atherosclerosis, the formation of lipid lumps (atheromas) in the blood vessel wall, is the most common cardiovascular disease, the main cause of death in the Western world.
+
+ Blood vessel permeability is increased in inflammation. Damage, whether due to trauma or arising spontaneously, may lead to hemorrhage as a result of mechanical damage to the vessel endothelium. In contrast, occlusion of the blood vessel by atherosclerotic plaque, by an embolised blood clot or by a foreign body leads to downstream ischemia (insufficient blood supply) and possibly necrosis. Vessel occlusion tends to be a positive feedback system; an occluded vessel creates eddies in the normally laminar or plug-flow blood currents. These eddies create abnormal fluid velocity gradients which push blood elements such as cholesterol or chylomicron bodies to the endothelium. These deposit onto the arterial walls, which are already partially occluded, and build upon the blockage.[10]
+
+ The most common disease of the blood vessels is hypertension, or high blood pressure, which is an increase in the pressure of the blood flowing through the vessels. Hypertension can lead to more serious conditions such as heart failure and stroke. To prevent these diseases, the most common treatment option is medication, as opposed to surgery. Aspirin helps prevent blood clots and can also help limit inflammation.[11]
+
+ Vasculitis is inflammation of the vessel wall, due to autoimmune disease or infection.
en/5897.html.txt ADDED
@@ -0,0 +1,51 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ The blood vessels are the components of the circulatory system that transport blood throughout the human body.[1] These vessels transport blood cells, nutrients, and oxygen to the tissues of the body. They also take waste and carbon dioxide away from the tissues. Blood vessels are needed to sustain life, because all of the body's tissues rely on their functionality.[2]
2
+
3
+ There are five types of blood vessels: the arteries, which carry the blood away from the heart; the arterioles; the capillaries, where the exchange of water and chemicals between the blood and the tissues occurs; the venules; and the veins, which carry blood from the capillaries back towards the heart.
4
+
5
+ The word vascular, meaning relating to the blood vessels, is derived from the Latin vas, meaning vessel. Some structures – such as cartilage, the epithelium, and the lens and cornea of the eye – do not contain blood vessels and are labeled avascular.
6
+
7
+ The arteries and veins have three layers. The middle layer is thicker in the arteries than it is in the veins:
8
+
9
+ Capillaries consist of a single layer of endothelial cells with a supporting subendothelium consisting of a basement membrane and connective tissue.
10
+
11
+ When blood vessels connect to form a region of diffuse vascular supply it is called an anastomosis. Anastomoses provide critical alternative routes for blood to flow in case of blockages.
12
+
13
+ Leg veins have valves which prevent backflow of the blood being pumped against gravity by the surrounding muscles.[3]
14
+
15
+ There are various kinds of blood vessels:
16
+
17
+ They are roughly grouped as "arterial" and "venous", determined by whether the blood in it is flowing away from (arterial) or toward (venous) the heart. The term "arterial blood" is nevertheless used to indicate blood high in oxygen, although the pulmonary artery carries "venous blood" and blood flowing in the pulmonary vein is rich in oxygen. This is because they are carrying the blood to and from the lungs, respectively, to be oxygenated.
18
+
19
+ Blood vessels function to transport blood. In general, arteries and arterioles transport oxygenated blood from the lungs to the body and its organs, and veins and venules transport deoxygenated blood from the body to the lungs. Blood vessels also circulate blood throughout the circulatory system Oxygen (bound to hemoglobin in red blood cells) is the most critical nutrient carried by the blood. In all arteries apart from the pulmonary artery, hemoglobin is highly saturated (95–100%) with oxygen. In all veins apart from the pulmonary vein, the saturation of hemoglobin is about 75%.[citation needed] (The values are reversed in the pulmonary circulation.) In addition to carrying oxygen, blood also carries hormones, waste products and nutrients for cells of the body.
20
+
21
+ Blood vessels do not actively engage in the transport of blood (they have no appreciable peristalsis). Blood is propelled through arteries and arterioles through pressure generated by the heartbeat.[4] Blood vessels also transport red blood cells which contain the oxygen necessary for daily activities. The amount of red blood cells present in your vessels has an effect on your health. Hematocrit tests can be performed to calculate the proportion of red blood cells in your blood. Higher proportions result in conditions such as dehydration or heart disease while lower proportions could lead to anemia and long-term blood loss.[5]
22
+
23
+ Permeability of the endothelium is pivotal in the release of nutrients to the tissue. It is also increased in inflammation in response to histamine, prostaglandins and interleukins, which leads to most of the symptoms of inflammation (swelling, redness, warmth and pain).
24
+
25
+ Arteries—and veins to a degree—can regulate their inner diameter by contraction of the muscular layer. This changes the blood flow to downstream organs, and is determined by the autonomic nervous system. Vasodilation and vasoconstriction are also used antagonistically as methods of thermoregulation.
26
+
27
+ The size of blood vessels is different for each of them. It ranges from a diameter of about 25 millimeters for the aorta to only 8 micrometers in the capillaries. This comes out to about a 3000-fold range.[6] Vasoconstriction is the constriction of blood vessels (narrowing, becoming smaller in cross-sectional area) by contracting the vascular smooth muscle in the vessel walls. It is regulated by vasoconstrictors (agents that cause vasoconstriction). These include paracrine factors (e.g. prostaglandins), a number of hormones (e.g. vasopressin and angiotensin) and neurotransmitters (e.g. epinephrine) from the nervous system.
28
+
29
+ Vasodilation is a similar process mediated by antagonistically acting mediators. The most prominent vasodilator is nitric oxide (termed endothelium-derived relaxing factor for this reason).
30
+
31
+ The circulatory system uses the channel of blood vessels to deliver blood to all parts of the body. This is a result of the left and right side of the heart working together to allow blood to flow continuously to the lungs and other parts of the body. Oxygen-poor blood enters the right side of the heart through two large veins. Oxygen-rich blood from the lungs enters through the pulmonary veins on the left side of the heart into the aorta and then reaches the rest of the body. The capillaries are responsible for allowing the blood to receive oxygen through tiny air sacs in the lungs. This is also the site where carbon dioxide exits the blood. This all occurs in the lungs where blood is oxygenated.[7]
32
+
33
+ The blood pressure in blood vessels is traditionally expressed in millimetres of mercury (1 mmHg = 133 Pa). In the arterial system, this is usually around 120 mmHg systolic (high pressure wave due to contraction of the heart) and 80 mmHg diastolic (low pressure wave). In contrast, pressures in the venous system are constant and rarely exceed 10 mmHg.
34
+
35
+ Vascular resistance occurs where the vessels away from the heart oppose the flow of blood. Resistance is an accumulation of three different factors: blood viscosity, blood vessel length, and vessel radius.[8]
36
+
37
+ Blood viscosity is the thickness of the blood and its resistance to flow as a result of the different components of the blood. Blood is 92% water by weight and the rest of blood is composed of protein, nutrients, electrolytes, wastes, and dissolved gases. Depending on the health of an individual, the blood viscosity can vary (i.e. anemia causing relatively lower concentrations of protein, high blood pressure an increase in dissolved salts or lipids, etc.).[8]
38
+
39
+ Vessel length is the total length of the vessel measured as the distance away from the heart. As the total length of the vessel increases, the total resistance as a result of friction will increase.[8]
40
+
41
+ Vessel radius also affects the total resistance as a result of contact with the vessel wall. As the radius of the wall gets smaller, the proportion of the blood making contact with the wall will increase. The greater amount of contact with the wall will increase the total resistance against the blood flow.[9]
42
+
43
+ Blood vessels play a huge role in virtually every medical condition. Cancer, for example, cannot progress unless the tumor causes angiogenesis (formation of new blood vessels) to supply the malignant cells' metabolic demand. Atherosclerosis, the formation of lipid lumps (atheromas) in the blood vessel wall, is the most common cardiovascular disease, the main cause of death in the Western world.
44
+
45
+ Blood vessel permeability is increased in inflammation. Damage, whether due to trauma or arising spontaneously, may lead to hemorrhage as a result of mechanical damage to the vessel endothelium. In contrast, occlusion of the blood vessel by atherosclerotic plaque, by an embolised blood clot or by a foreign body leads to downstream ischemia (insufficient blood supply) and possibly necrosis. Vessel occlusion tends to be a positive feedback system; an occluded vessel creates eddies in the normally laminar or plug flow of blood. These eddies create abnormal fluid velocity gradients which push blood elements such as cholesterol or chylomicron bodies toward the endothelium. These deposit onto the arterial walls, which are already partially occluded, and build upon the blockage.[10]
46
+
47
+ The most common disease of the blood vessels is hypertension, or high blood pressure: a sustained increase in the pressure of the blood flowing through the vessels. Hypertension can lead to more serious conditions such as heart failure and stroke. To prevent these diseases, the most common treatment option is medication rather than surgery. Aspirin helps prevent blood clots and can also help limit inflammation.[11]
48
+
49
+ Vasculitis is inflammation of the vessel wall, due to autoimmune disease or infection.
50
+
51
+ ocular group: central retinal
en/5898.html.txt ADDED
@@ -0,0 +1,236 @@
1
+
2
+
3
+ Valencia (Spanish: [baˈlenθja]), officially València (Valencian: [vaˈlensia]),[5] is the capital of the autonomous community of Valencia and the third-largest city in Spain after Madrid and Barcelona, surpassing 800,000 inhabitants in the municipality. The wider urban area also comprising the neighbouring municipalities has a population of around 1.6 million people.[3][6] Valencia is Spain's third largest metropolitan area, with a population ranging from 1.7 to 2.5 million[2] depending on how the metropolitan area is defined. The Port of Valencia is the 5th busiest container port in Europe and the busiest container port on the Mediterranean Sea. The city is ranked as a Beta-global city in the Globalization and World Cities Research Network.[7]
4
+
5
+ Valencia was founded as a Roman colony by the consul Decimus Junius Brutus Callaicus in 138 BC, and called Valentia Edetanorum. In 714 Moroccan and Arab Moors occupied the city, introducing their language, religion and customs; they implemented improved irrigation systems and the cultivation of new crops as well. Valencia was the capital of the Taifa of Valencia. In 1238 the Christian king James I of Aragon conquered the city and divided the land among the nobles who helped him conquer it, as witnessed in the Llibre del Repartiment. He also created the new Kingdom of Valencia, which had its own laws (Furs), with Valencia as its main city and capital. In the 18th century Philip V of Spain abolished the privileges as punishment to the kingdom of Valencia for aligning with the Habsburg side in the War of the Spanish Succession. Valencia was the capital of Spain when Joseph Bonaparte moved the Court there in the summer of 1812. It also served as the capital between 1936 and 1937, during the Second Spanish Republic.
6
+
7
+ The city is situated on the banks of the Turia, on the east coast of the Iberian Peninsula, fronting the Gulf of Valencia on the Mediterranean Sea. Its historic centre is one of the largest in Spain, with approximately 169 ha (420 acres).[8]
8
+ Due to its long history, Valencia has numerous celebrations and traditions, such as the Fallas, which were declared Fiestas of National Tourist Interest of Spain in 1965[9] and an Intangible cultural heritage by UNESCO in November 2016. Joan Ribó from Compromís has been the mayor of the city since 2015.
9
+
10
+ The original Latin name of the city was Valentia (IPA: [waˈlɛntɪ.a]), meaning "strength", or "valour", due to the Roman practice of recognising the valour of former Roman soldiers after a war. The Roman historian Livy explains that the founding of Valentia in the 2nd century BC was due to the settling of the Roman soldiers who fought against a Lusitanian rebel, Viriatus, during the Third Lusitanian Raid of the Lusitanian War.[10]
11
+
12
+ During the rule of the Muslim kingdoms in Spain, it had the title Medina at-Tarab ('City of Joy') according to one transliteration, or Medina at-Turab ('City of Sands') according to another, since it was located on the banks of the River Turia. It is not clear if the term Balansiyya was reserved for the entire Taifa of Valencia or also designated the city.[11]
13
+
14
+ By gradual sound changes, Valentia has become Valencia [baˈlenθja] (before a pause or nasal sound) or [βaˈlenθja] (after a continuant) in Castilian and València [vaˈlensia] in Valencian. In Valencian, e with grave accent (è) indicates /ɛ/ in contrast to /e/, but the word València is an exception to this rule, since è is pronounced /e/. The spelling "València" was approved by the AVL based on tradition after a debate on the matter. The name "València" has been the only official name of the city since 2017.[12]
15
+
16
+ Located on the eastern coast of the Iberian Peninsula and the western part of the Mediterranean Sea, fronting the Gulf of Valencia, Valencia lies on the highly fertile alluvial silts accumulated on the floodplain formed in the lower course of the Turia River.[13] At its founding by the Romans, it stood on a river island in the Turia, 6.4 kilometres (4.0 mi) from the sea.
17
+
18
+ The Albufera lagoon, located about 12 km (7 mi) south of the city proper (and part of the municipality), was originally a saltwater lagoon, but since its links to the sea were severed it has gradually become a freshwater lagoon and has progressively decreased in size.[14] The albufera and its environs are exploited for the cultivation of rice in paddy fields, and for hunting and fishing.[14]
19
+
20
+ The City Council bought the lake from the Crown of Spain for 1,072,980 pesetas in 1911,[15] and today it forms the main portion of the Parc Natural de l'Albufera (Albufera Nature Reserve), with a surface area of 21,120 hectares (52,200 acres). In 1976, because of its cultural, historical, and ecological value, it was declared a natural park.
21
+
22
+ Valencia has a subtropical Mediterranean climate (Köppen Csa)[16] with mild winters and hot, dry summers.[17][18]
23
+
24
+ Precipitation peaks in autumn, coinciding with the time of year when cold drop (gota fría) episodes of heavy rainfall, associated with cut-off low-pressure systems at high altitude,[19] are common along the western Mediterranean coast.[20] Year-on-year variability in precipitation may, however, be considerable.[20]
25
+
26
+ Its average annual temperature is 18.3 °C (64.9 °F); 22.8 °C (73.0 °F) during the day and 13.8 °C (56.8 °F) at night.
27
+ In the coldest month, January, the maximum daily temperature typically ranges from 14 to 20 °C (57 to 68 °F), while the minimum temperature at night typically ranges from 5 to 10 °C (41 to 50 °F). During the warmest months, July and August, the maximum daytime temperature typically ranges from 28 to 32 °C (82 to 90 °F), and the night-time temperature from about 21 to 23 °C (70 to 73 °F). March is transitional; the temperature often exceeds 20 °C (68 °F), with an average temperature of 19.3 °C (66.7 °F) during the day and 10.0 °C (50.0 °F) at night. December, January and February are the coldest months, with average temperatures around 17 °C (63 °F) during the day and 8 °C (46 °F) at night. Snowfall is extremely rare; the most recent occasion was a small amount of snow that fell on 11 January 1960.[21] Valencia has one of the mildest winters in Europe, owing to its southern location on the Mediterranean Sea and the Foehn phenomenon. The January average is comparable to temperatures expected for May and September in the major cities of northern Europe.
28
+
29
+ Valencia, on average, has 2,696 sunshine hours per year, from 155 in December (average of 5 hours of sunshine duration a day) to 315 in July (average above 10 hours of sunshine duration a day). The average temperature of the sea is 14–15 °C (57–59 °F) in winter and 25–26 °C (77–79 °F) in summer.[22][23] Average annual relative humidity is 65%.[24]
30
+
31
+ Valencia is one of the oldest cities in Spain, founded in the Roman period, c. 138 BC, under the name "Valentia Edetanorum". A few centuries later, with the power vacuum left by the demise of the Roman imperial administration, the Catholic Church assumed the reins of power in the city, coinciding with the first waves of the invading Germanic peoples (Suevi, Vandals and Alans, and later the Visigoths).
32
+
33
+ The city surrendered to the invading Moors (Berbers and Arabs) about 714 AD,[26] and the cathedral of Saint Vincent was turned into a mosque.
34
+
35
+ The Castilian nobleman Rodrigo Díaz de Vivar, known as El Cid, in command of a combined Christian and Moorish army, besieged the city beginning in 1092. After the siege ended in May 1094, he ruled the city and its surrounding territory as his own fiefdom for five years from 15 June 1094 to July 1099.
36
+
37
+ The city remained in the hands of Christian troops until 1102, when the Almoravids retook the city and restored the Muslim religion. Alfonso VI of León and Castile drove them from the city, but was unable to hold it. The Almoravid Mazdali took possession on 5 May 1109, and the Almohads seized control of it in 1171.
38
+
39
+ Many Jews lived in Valencia during early Muslim rule, including the accomplished Jewish poet Solomon ibn Gabirol, who spent his last years in the city.[27] Jews continued to live in Valencia throughout the Almoravid and Almohad dynasties, many of them being artisans such as silversmiths, shoemakers, blacksmiths, locksmiths, etc.; a few were rabbinic scholars. When the city fell to James I of Aragon, the Jewish population of the city constituted about 7 percent of the population.[27]
40
+
41
+ In 1238,[28] King James I of Aragon, with an army composed of Aragonese, Catalans, Navarrese and crusaders from the Order of Calatrava, laid siege to Valencia and on 28 September obtained a surrender.[29] Fifty thousand Moors were forced to leave.
42
+
43
+ The city endured serious troubles in the mid-14th century, including the decimation of the population by the Black Death of 1348 and
44
+ subsequent years of epidemics — as well as a series of wars and riots that followed. In 1391, the Jewish quarter was destroyed.[27]
45
+
46
+ Genoese traders promoted the expansion of the cultivation of white mulberry in the area by the late 14th century, later also introducing innovative silk manufacturing techniques, with the city becoming a centre of mulberry production, yet not, for a time at least, a major silk-making centre.[30] The Genoese community in Valencia—comprising merchants, artisans and workers—became, along with Seville's, one of the most important in the Iberian Peninsula.[31]
47
+
48
+ The 15th century was a time of economic expansion, known as the Valencian Golden Age, during which culture and the arts flourished. Concurrent population growth made Valencia the most populous city in the Crown of Aragon. Some of the landmark buildings of the city were built during the Late Middle Ages, including the Serranos Towers (1392), the Silk Exchange (1482), the Micalet [es], and the Chapel of the Kings of the Convent of Sant Domènec. In painting and sculpture, Flemish and Italian trends had an influence on Valencian artists.
49
+
50
+ Valencia became a major slave trade centre in the 15th century, second only to Lisbon in the West,[32] prompting a Lisbon–Seville–Valencia axis by the second half of the century, powered by the incipient Portuguese slave trade originating in Western Africa.[33] By the end of the 15th century Valencia was among the largest European cities, being the most populous city in the Hispanic Monarchy and second to Lisbon in the Iberian Peninsula.[34]
51
+
52
+ Following the death of Ferdinand II in 1516, the nobiliary estate challenged the Crown amid the relative void of power.[35] The nobles earned the rejection of the people of Valencia, and the whole kingdom was plunged into armed revolt—the Revolt of the Brotherhoods—and full-blown civil war between 1521 and 1522.[35] Muslim vassals were forced to convert in 1526 at the behest of Charles V.[35]
53
+
54
+ Urban and rural delinquency—linked to phenomena such as vagrancy, gambling, larceny, pimping and false begging—as well as nobiliary banditry, consisting of the revenges and rivalries between aristocratic families, flourished in Valencia during the 16th century.[36]
55
+
56
+ Also during the 16th century, North African piracy targeted the whole coastline of the kingdom of Valencia, forcing the fortification of coastal sites.[37] By the late 1520s, the intensification of the activity of the Barbary corsairs, along with the conflictive domestic situation and the rise of the Atlantic Ocean at the expense of the Mediterranean in the global trade networks, put an end to the economic splendor of the city.[38] The piracy also paved the way for the ensuing development of Christian piracy, which had Valencia as one of its main bases in the Iberian Mediterranean.[37] The Berber threat—initially with Ottoman support—generated great insecurity on the coast, and it would not be substantially reduced until the 1580s.[37]
57
+
58
+ The crisis deepened during the 17th century with the expulsion in 1609 of the Moriscos, descendants of the Muslim population that had converted to Christianity. The Spanish government systematically forced Moriscos to leave the kingdom for Muslim North Africa. They were concentrated in the former Crown of Aragon, and in the Kingdom of Valencia specifically, they constituted roughly a third of the total population.[39] The expulsion caused the financial ruin of some of the Valencian nobility and the bankruptcy of the Taula de Canvi financial institution in 1613.
59
+
60
+ The decline of the city reached its nadir with the War of Spanish Succession (1702–1709), marking the end of the political and legal independence of the Kingdom of Valencia. During the War of the Spanish Succession, Valencia sided with the Habsburg ruler of the Holy Roman Empire, Charles of Austria. King Charles of Austria vowed to protect the laws of the Kingdom of Valencia (Furs), which gained him the sympathy of a wide sector of the Valencian population. On 24 January 1706, Charles Mordaunt, 3rd Earl of Peterborough, 1st Earl of Monmouth, led a handful of English cavalrymen into the city after riding south from Barcelona, captured the nearby fortress at Sagunt, and bluffed the Spanish Bourbon army into withdrawal.
61
+
62
+ The English held the city for 16 months and defeated several attempts to expel them. After the victory of the Bourbons at the Battle of Almansa on 25 April 1707, the English army evacuated Valencia and Philip V ordered the repeal of the Furs of Valencia as punishment for the kingdom's support of Charles of Austria.[40] By the Nueva Planta decrees (Decretos de Nueva Planta) the ancient Charters of Valencia were abolished and the city was governed by the Castilian Charter, similarly to other places in the Crown of Aragon.
63
+
64
+ The Valencian economy recovered during the 18th century with the rising manufacture of woven silk and ceramic tiles. The silk industry boomed during this century, with the city replacing Toledo as the main silk-manufacturing centre in Spain.[30] The Palau de Justícia is an example of the affluence manifested in the most prosperous times of Bourbon rule (1758–1802) during the rule of Charles III. The 18th century was the age of the Enlightenment in Europe, and its humanistic ideals influenced such men as Gregory Maians and Pérez Bayer in Valencia, who maintained correspondence with the leading French and German thinkers of the time.
65
+
66
+ The 19th century began with Spain embroiled in wars with France, Portugal, and England—but the War of Independence most affected the Valencian territories and the capital city. The repercussions of the French Revolution were still felt when Napoleon's armies invaded the Iberian Peninsula. The Valencian people rose up in arms against them on 23 May 1808, inspired by leaders such as Vicent Doménech el Palleter.
67
+
68
+ The mutineers seized the Citadel, a Supreme Junta government took over, and on 26–28 June, Napoleon's Marshal Moncey attacked the city with a column of 9,000 French imperial troops in the First Battle of Valencia. He failed to take the city in two assaults and retreated to Madrid. Marshal Suchet began a long siege of the city in October 1811, and after intense bombardment forced it to surrender on 8 January 1812. After the capitulation, the French instituted reforms in Valencia, which became the capital of Spain when the Bonapartist pretender to the throne, José I (Joseph Bonaparte, Napoleon's elder brother), moved the Court there in the middle of 1812. The disaster of the Battle of Vitoria on 21 June 1813 obliged Suchet to quit Valencia, and the French troops withdrew in July.
69
+
70
+ Ferdinand VII became king after the victorious end of the Peninsular War, which freed Spain from Napoleonic domination. When he returned on 24 March 1814 from exile in France, the Cortes requested that he respect the liberal Constitution of 1812, which seriously limited royal powers. Ferdinand refused and went to Valencia instead of Madrid. Here, on 17 April, General Elio invited the King to reclaim his absolute rights and put his troops at the King's disposition. The king abolished the Constitution of 1812 and dissolved the two chambers of the Spanish Parliament on 10 May. Thus began six years (1814–1820) of absolutist rule, but the constitution was reinstated during the Trienio Liberal, a period of liberal government in Spain from 1820 to 1823.
71
+
72
+ On the death of King Ferdinand VII in 1833, Baldomero Espartero became one of the most ardent defenders of the hereditary rights of the king's daughter, the future Isabella II. During the regency of Maria Cristina, Espartero ruled Spain for two years as its 18th Prime Minister from 16 September 1840 to 21 May 1841. City life in Valencia carried on in a revolutionary climate, with frequent clashes between liberals and republicans.
73
+
74
+ The reign of Isabella II as an adult (1843–1868) was a period of relative stability and growth for Valencia. During the second half of the 19th century the bourgeoisie encouraged the development of the city and its environs; land-owners were enriched by the introduction of the orange crop and the expansion of vineyards and other crops. This economic boom corresponded with a revival of local traditions and of the Valencian language, which had been ruthlessly suppressed from the time of Philip V.
75
+
76
+ Works to demolish the walls of the old city started on 20 February 1865.[41] The demolition works of the citadel ended after the 1868 Glorious Revolution.[41]
77
+
78
+ Following the introduction of universal manhood suffrage in the late 19th century, the political landscape in Valencia—until then marked by the bipartisanship characteristic of the early Restoration period—changed, leading to a growth of republican forces gathered around the emerging figure of Vicente Blasco Ibáñez.[42] Not unlike the equally republican lerrouxism, the populist blasquism [es] came to mobilize the Valencian masses by promoting anticlericalism.[43] Meanwhile, in reaction, the right wing coalesced around several initiatives, such as the Catholic League or the re-formulation of Valencian Carlism, and Valencianism did similarly with organizations such as Valencia Nova or the Unió Valencianista.[44]
79
+
80
+ In the early 20th century Valencia was an industrialised city. The silk industry had disappeared, but there was a large production of hides and skins, wood, metals and foodstuffs, this last with substantial exports, particularly of wine and citrus. Small businesses predominated, but with the rapid mechanisation of industry larger companies were being formed. The best expression of this dynamic was in the regional exhibitions, including that of 1909 held next to the pedestrian avenue L'Albereda (Paseo de la Alameda), which depicted the progress of agriculture and industry. Among the most architecturally successful buildings of the era were those designed in the Art Nouveau style, such as the North Station (Estació del Nord) and the Central and Columbus markets.
81
+
82
+ World War I (1914–1918) greatly affected the Valencian economy, causing the collapse of its citrus exports. The Second Spanish Republic (1931–1939) opened the way for democratic participation and the increased politicisation of citizens, especially in response to the rise of Conservative Front power in 1933. The inevitable march to civil war and the combat in Madrid resulted in the removal of the capital of the Republic to Valencia.
83
+
84
+ After the continuous unsuccessful Francoist offensive on besieged Madrid during the Spanish Civil War, Valencia temporarily became the capital of Republican Spain on 6 November 1936. It hosted the government until 31 October 1937.[45]
85
+
86
+ The city was heavily bombarded by air and sea, mainly by the fascist Italian air force, as well as by the Francoist air force with Nazi German support. By the end of the war the city had survived 442 bombardments, leaving 2,831 dead and 847 wounded, although it is estimated that the death toll was higher. The Republican government moved to Barcelona on 31 October of that year. On 30 March 1939, Valencia surrendered and the Nationalist troops entered the city. The postwar years were a time of hardship for Valencians. During Franco's regime speaking or teaching Valencian was prohibited; in a significant reversal it is now compulsory for every schoolchild in Valencia.
87
+
88
+ The dictatorship of Franco forbade political parties and began a harsh ideological and cultural repression countenanced and sometimes led by the Catholic Church. Franco's regime also executed some of the main Valencian intellectuals, such as Juan Peset, rector of the University of Valencia. Large groups of them, including Josep Renau and Max Aub, went into exile.
89
+
90
+ In 1943 Franco decreed the exclusivity of Valencia and Barcelona for the celebration of international fairs in Spain.[46] These two cities would hold the monopoly on international fairs for more than three decades, until it was abolished in 1979 by the government of Adolfo Suárez.[46]
91
+
92
+ In October 1957, the 1957 Valencia flood, a major flood of the Turia river, left 81 people dead and caused notable property damage.[47] The disaster led to the remodelling of the city and the creation of a new river bed for the Turia, with the old one becoming one of the city's "green lungs".[47]
93
+
94
+ The economy began to recover in the early 1960s, and the city experienced explosive population growth through immigration spurred by the jobs created with the implementation of major urban projects and infrastructure improvements. With the advent of democracy in Spain, the ancient kingdom of Valencia was established as a new autonomous entity, the Valencian Community, the Statute of Autonomy of 1982 designating Valencia as its capital.
95
+
96
+ Valencia has since then experienced a surge in its cultural development, exemplified by exhibitions and performances at such iconic institutions as the Palau de la Música, the Palacio de Congresos, the Metro, the City of Arts and Sciences (Ciutat de les Arts i les Ciències), the Valencian Museum of Enlightenment and Modernity (Museo Valenciano de la Ilustracion y la Modernidad), and the Institute of Modern Art (Institut Valencià d'Art Modern). The various productions of Santiago Calatrava, a renowned structural engineer, architect, and sculptor, and of the architect Félix Candela have contributed to Valencia's international reputation. These public works and the ongoing rehabilitation of the Old City (Ciutat Vella) have helped improve the city's livability, and tourism is continually increasing.
97
+
98
+ On 3 July 2006, a major mass transit disaster, the Valencia Metro derailment, left 43 dead and 47 wounded.[48] Days later, on 9 July, the World Day of Families, during Mass at Valencia's Cathedral, Our Lady of the Forsaken Basilica, Pope Benedict XVI used the Sant Calze, a 1st-century Middle Eastern artifact that some Catholics believe is the Holy Grail.[n. 1]
99
+
100
+ Valencia was selected in 2003 to host the historic America's Cup yacht race, the first European city ever to do so. The America's Cup matches took place from April to July 2007. On 3 July 2007, Alinghi defeated Team New Zealand to retain the America's Cup. Twenty-two days later, on 25 July 2007, the leaders of the Alinghi syndicate, holder of the America's Cup, officially announced that Valencia would be the host city for the 33rd America's Cup, held in June 2009.[50]
101
+
102
+ The results of the Valencia municipal elections from 1991 to 2011 delivered a 24-year uninterrupted rule (1991–2015) by the People's Party (PP) and Mayor Rita Barberá, who was invested into office thanks to support from the Valencian Union. Barberá was ousted by left-leaning forces after the 2015 municipal election, with Joan Ribó (Compromís) becoming the new mayor.
103
+
104
+ Valencia enjoyed strong economic growth before the economic crisis of 2008, much of it spurred by tourism and the construction industry, with concurrent development and expansion of telecommunications and transport. The city's economy is service-oriented, with nearly 84% of the working population employed in service-sector occupations. However, the city still maintains an important industrial base, with 8.5% of the population employed in this sector. Growth has recently improved in the manufacturing sector, mainly automobile assembly (the large Ford Motor Company factory lies in Almussafes, a suburb of the city[51]). Agricultural activities are still carried on in the municipality, though of relatively minor importance, with only 1.9% of the working population and 3,973 ha (9,820 acres) planted, mostly in orchards and citrus groves.
105
+
106
+ Since the onset of the Great Recession (2008), Valencia experienced a growing unemployment rate and increased government debt, among other effects, and severe spending cuts were introduced by the city government.
107
+
108
+ In 2009, Valencia was designated "the 29th fastest-improving European city".[52] Its influence in commerce, education, entertainment, media, fashion, science and the arts contributes to its status as one of the world's "Beta"-rank global cities.[7]
109
+
110
+ The city is the seat of one of the four stock exchanges in Spain, the Bolsa de Valencia [es], part of Bolsas y Mercados Españoles (BME), owned by SIX Group.[53]
111
+
112
+ The Valencia metropolitan area had a GDP of $52.7 billion, or $28,141 per capita.[54]
113
+
114
+ Valencia's port is the biggest on the western coast of the Mediterranean,[55] the busiest in Spain in container traffic as of 2008[56] and the second busiest in Spain in total traffic,[57] handling 20% of Spain's exports.[58] The main exports are foodstuffs and beverages. Other exports include oranges, furniture, ceramic tiles, fans, textiles and iron products. Valencia's manufacturing sector focuses on metallurgy, chemicals, textiles, shipbuilding and brewing. Small and medium-sized industries are an important part of the local economy, and before the crisis of 2008 unemployment was lower than the Spanish average.
115
+
116
+ Valencia's port underwent radical changes to accommodate the 32nd America's Cup in 2007. It was divided into two parts—one was unchanged while the other section was modified for the America's Cup festivities. The two sections remain divided by a wall that projects far into the water to maintain clean water for the America's Cup side.
117
+
118
+ Public transport is provided by the Ferrocarrils de la Generalitat Valenciana (FGV), which operates the Metrovalencia and other rail and bus services. The Estació del Nord (North Station) is the main railway terminus in Valencia. A new temporary station, Estació de València-Joaquín Sorolla, has been built on land adjacent to this terminus to accommodate high-speed AVE trains to and from Madrid, Barcelona, Seville and Alicante. Valencia Airport is situated 9 km (5.6 mi) west of Valencia city centre. Alicante Airport is situated about 133 km (83 mi) south of the city centre.
119
+
120
+ The City of Valencia also makes available a bicycle sharing system named Valenbisi to both visitors and residents. As of 13 October 2012, the system has 2750 bikes distributed over 250 stations all throughout the city.[59]
121
+
122
+ The average amount of time people spend commuting with public transit in Valencia, for example to and from work, on a weekday is 44 minutes; 6% of public transit riders ride for more than 2 hours every day. The average amount of time people wait at a stop or station is 10 minutes, while 9% of riders wait for over 20 minutes on average every day. The average distance people usually ride in a single trip is 5.9 km (3.7 mi), while 8% travel over 12 km (7.5 mi) in a single direction.[60]
123
+
124
+ Starting in the mid-1990s, Valencia, formerly an industrial centre, saw rapid development that expanded its cultural and tourism possibilities, and transformed it into a newly vibrant city. Many local landmarks were restored, including the ancient Towers of the medieval city (Serrans Towers and Quart Towers), and the Saint Miquel dels Reis monastery (Monasterio de San Miguel de los Reyes), which now holds a conservation library. Whole sections of the old city, for example the Carmen Quarter, have been extensively renovated. The Passeig Marítim, a 4 km (2 mi) long palm tree-lined promenade was constructed along the beaches of the north side of the port (Platja de Les Arenes, Platja del Cabanyal and Platja de la Malva-rosa).
125
+
126
+ The city has numerous convention centres and venues for trade events, among them the Feria Valencia Convention and Exhibition Centre (Institución Ferial de Valencia) and the Palau de congres (Conference Palace), and several 5-star hotels to accommodate business travelers.
127
+
128
+ In its long history, Valencia has acquired many local traditions and festivals, among them the Falles, which were declared Celebrations of International Tourist Interest (Festes de Interés Turístic Internacional) on 25 January 1965 and added to UNESCO's intangible cultural heritage of humanity list on 30 November 2016, and the Water Tribunal of Valencia (Tribunal de les Aigües de València), which was declared an intangible cultural heritage of humanity (Patrimoni Cultural Inmaterial de la Humanitat) in 2009. In addition to these, Valencia has hosted world-class events that helped shape the city's reputation and put it in the international spotlight, e.g., the Regional Exhibition of 1909, the 32nd and the 33rd America's Cup competitions, the European Grand Prix of Formula One auto racing, the Valencia Open 500 tennis tournament, and the Global Champions Tour of equestrian sports. The final round of the MotoGP Championship is held annually at the Circuito de la Communitat Valenciana.
129
+
130
+ The 2007 America's Cup yachting races were held at Valencia in June and July 2007 and attracted huge crowds. The Louis Vuitton stage drew 1,044,373 visitors and the America's Cup match drew 466,010 visitors to the event.[61]
131
+
132
+ Valencia is a municipality, the basic local administrative division in Spain. The Ayuntamiento is the body charged with the municipal government and administration.[62] The plenary of the ayuntamiento/ajuntament (known as the Consell Municipal de València in the case of Valencia) is formed by 33 elected municipal councillors, who in turn invest the mayor. The last municipal election took place on 26 May 2019. Joan Ribó (Compromís) has served as mayor since 2015, renewing his mandate for a second term following the 2019 election.[63]
133
+
134
+ The third-largest city in Spain and the 24th-most populous municipality in the European Union, Valencia has a population of 809,267[64] within its administrative limits on a land area of 134.6 km2 (52 sq mi). The urban area of Valencia extending beyond the administrative city limits has a population of between 1,564,145[65] and 1,595,000.[3] According to the Spanish Ministry of Development, the Greater Urban Area (es. Gran Área Urbana) within the Horta of Valencia has a population of 1,551,585 on an area of 62,881 km2 (24,278 sq mi); in the period 2001-2011 its population increased by 191,842 people, or 14.1%.[6] About 2 million people live in the Valencia metropolitan area. The metropolitan area has a population of 1,770,742 according to CityPopulation.de,[66] 2,300,000 according to the Organization for Economic Cooperation and Development,[67] 2,513,965 according to the World Gazetteer,[68] and 2,522,383 according to Eurostat.[2] Between 2007 and 2008 there was a 14% increase in the foreign-born population, with the largest numeric increases coming from Bolivia, Romania and Italy. This growth in the foreign-born population, which rose from 1.5% in the year 2000[69] to 9.1% in 2009,[70] has also occurred in the two larger cities of Madrid and Barcelona.[71] The main countries of origin were Romania, the United Kingdom and Bulgaria.[72]
135
+
136
+ The 10 largest groups of foreign-born people in 2018 were:
137
+
138
+ Valencia is known internationally for the Falles (Les Falles), a local festival held in March, as well as for paella valenciana, traditional Valencian ceramics, craftsmanship in traditional dress, and the architecture of the City of Arts and Sciences, designed by Santiago Calatrava and Félix Candela.
139
+
140
+ There are also a number of well-preserved traditional Catholic festivities throughout the year. Holy Week celebrations in Valencia are considered some of the most colourful in Spain.[73]
141
+
142
+ Valencia was once the site of the Formula One European Grand Prix, first hosting the event on 24 August 2008; the race was dropped from the calendar at the beginning of the 2013 season. The city still holds the annual MotoGP race at the Circuit Ricardo Tormo, usually the last race of the season, in November.
143
+
144
+ The University of Valencia (officially Universitat de València Estudi General) was founded in 1499, being one of the oldest surviving universities in Spain and the oldest university in the Valencian Community. It was listed as one of the four leading Spanish universities in the 2011 Shanghai Academic Ranking of World Universities.
145
+
146
+ In 2012, Boston's Berklee College of Music opened a satellite campus at the Palau de les Arts Reina Sofia, its first and only international campus outside the U.S.[74] Since 2003, Valencia has also hosted the music courses of Musikeon, a leading musical institution in the Spanish-speaking world.
147
+
148
+ Valencia is known for its gastronomic culture. Paella, a simmered rice dish with meat (usually chicken or rabbit) or seafood, was born in Valencia; other traditional dishes of Valencian gastronomy include "fideuà", "arròs a banda", "arròs negre" (black rice), "fartons", "bunyols", the Spanish omelette, "pinchos" or "tapas", and "calamares" (squid).
149
+
150
+ Valencia was also the birthplace of the cold xufa beverage known as orxata, popular in many parts of the world, including the Americas.
151
+
152
+ Valencian (the name Valencians use for the Catalan language) and Spanish are the two official languages. Spanish is currently the predominant language in the city proper.[75] Valencia proper and its surrounding metropolitan area are, along with the Alicante area, the traditionally Valencian-speaking territories of the Valencian Community where the Valencian language is least spoken and read.[76] According to a 2019 survey commissioned by the local government, 76% of the population use only Spanish in their daily life, 1.3% use only Valencian, and 17.6% use both languages interchangeably.[77] However, vis-à-vis the education system and according to the 1983 regional Law on the Use and Teaching of the Valencian Language, the municipality of Valencia is included within the territory of Valencian linguistic predominance.[78] In 1993, the municipal government agreed to use Valencian exclusively for the signage of new street plaques.[79]
153
+
154
+ Every year, the five days and nights from 15 to 19 March, called Falles, are a continual festival in Valencia; beginning on 1 March, the popular pyrotechnic events called mascletàes start every day at 2:00 pm. The Falles (Fallas in Spanish) is an enduring tradition in Valencia and other towns in the Valencian Community,[80] where it has become an important tourist attraction. The festival began in the 18th century,[81] and came to be celebrated on the night of the feast day of Saint Joseph, the patron saint of carpenters, with the burning of waste planks of wood from their workshops, as well as worn-out wooden objects brought by people in the neighborhood.[82]
155
+
156
+ This tradition continued to evolve, and eventually the parots were dressed with clothing to look like people—these were the first ninots, with features identifiable as being those of a well-known person from the neighborhood often added as well. In 1901 the city inaugurated the awarding of prizes for the best Falles monuments,[81] and neighborhood groups still vie with each other to make the most impressive and outrageous creations.[83] Their intricate assemblages, placed on top of pedestals for better visibility, depict famous personalities and topical subjects of the past year, presenting humorous and often satirical commentary on them.
157
+
158
+ On the night of 19 March, Valencians burn all the Falles in an event called "La Cremà".
159
+
160
+ The Setmana or Semana Santa Marinera [es], as Holy Week is known in the city, was declared a "Festival of National Tourist Interest" in 2012.[84]
161
+
162
+ Major monuments include Valencia Cathedral, the Torres de Serrans, the Torres de Quart (es:Torres de Quart), the Llotja de la Seda (declared a World Heritage Site by UNESCO in 1996), and the Ciutat de les Arts i les Ciències (City of Arts and Sciences), an entertainment-based cultural and architectural complex designed by Santiago Calatrava and Félix Candela.[85] The Museu de Belles Arts de València houses a large collection of paintings from the 14th to the 18th centuries, including works by Velázquez, El Greco, and Goya, as well as an important series of engravings by Piranesi.[86] The Institut Valencià d'Art Modern (Valencian Institute of Modern Art) houses both permanent collections and temporary exhibitions of contemporary art and photography.[87]
163
+
164
+ The ancient winding streets of the Barrio del Carmen contain buildings dating to Roman and Arabic times. The Cathedral, built between the 13th and 15th centuries, is primarily of Valencian Gothic style but contains elements of Baroque and Romanesque architecture. Beside the cathedral is the Gothic Basilica of the Virgin (Basílica De La Mare de Déu dels Desamparats). The 15th-century Serrans and Quart towers are part of what was once the wall surrounding the city.
165
+
166
+ UNESCO has recognised the Silk Exchange market (La Llotja de la Seda), erected in early Valencian Gothic style, as a World Heritage Site.[88] The Central Market (Mercat Central), in Valencian Art Nouveau style, is one of the largest in Europe. The main railway station, Estació del Nord, is built in Valencian Art Nouveau (a Spanish version of Art Nouveau) style.
167
+
168
+ World-renowned (and city-born) architect Santiago Calatrava produced the futuristic City of Arts and Sciences (Ciutat de les Arts i les Ciències), which contains an opera house/performing arts centre, a science museum, an IMAX cinema/planetarium, an oceanographic park and other structures such as a long covered walkway and restaurants. Calatrava is also responsible for the bridge named after him in the centre of the city. The Palau de la Música de València (Music Palace) is another noteworthy example of modern architecture in Valencia.
169
+
170
+ Cathedral of Valencia
171
+
172
+ The gothic courtyard of the Palace of the Admiral of Aragon (Palau de l'Almirall)
173
+
174
+ Convento de Santo Domingo (1300-1640)
175
+
176
+ Llotja de la Seda (Silk Exchange, interior)
177
+
178
+ Mercat de Colon in Valencian Art Nouveau style
179
+
180
+ Palace of the Marqués de Dos Aguas
181
+
182
+ L'Hemisfèric (IMAX Dome cinema) and Palau de les Arts Reina Sofia
183
+
184
+ Museu de les Ciències Príncipe Felipe
185
+
186
+ Assut de l'Or Bridge and L'Àgora behind.
187
+
188
+ Sant Joan de l'Hospital church built in 1316 (except for a Baroque chapel)
189
+
190
+ Mudéjar (Christian) baths Banys de l'Almirall (1313-1320)
191
+
192
+ The Museum of Fine Arts of Valencia holds the second-largest collection of paintings in Spain,[89][90] after the Prado Museum
193
+
194
+ Mercat Central (Central Market), in Valencian Art Nouveau style
195
+
196
+ Real Colegio Seminario del Corpus Christi
197
+
198
+ One of the few arch-bridges that link the cathedral with neighboring buildings; this one was built in 1666.
199
+
200
+ Monastery of San Miguel de los Reyes built between 1548-1763
201
+
202
+ The Valencia Cathedral was called Iglesia Major in the early days of the Reconquista, then Iglesia de la Seu (Seu is from the Latin sedes, i.e., (archiepiscopal) See), and by virtue of the papal concession of 16 October 1866, it was called the Basilica Metropolitana. It is situated in the centre of the ancient Roman city where some believe the temple of Diana stood. In Gothic times, it seems to have been dedicated to the Holy Saviour; the Cid dedicated it to the Blessed Virgin; King James I of Aragon did likewise, leaving in the main chapel the image of the Blessed Virgin, which he carried with him and is reputed to be the one now preserved in the sacristy. The Moorish mosque, which had been converted into a Christian Church by the conqueror, was deemed unworthy of the title of the cathedral of Valencia, and in 1262 Bishop Andrés de Albalat laid the cornerstone of the new Gothic building, with three naves; these reach only to the choir of the present building. Bishop Vidal de Blanes built the chapter hall, and James I added the tower, called El Micalet because it was blessed on St. Michael's day in 1418. The tower is about 58 metres (190 feet) high and is topped with a belfry (1660–1736).
203
+
204
+ In the 15th century the dome was added and the naves extended back of the choir, uniting the building to the tower and forming a main entrance. Archbishop Luis Alfonso de los Cameros began the building of the main chapel in 1674; the walls were decorated with marbles and bronzes in the Baroque style of that period. At the beginning of the 18th century the German Conrad Rudolphus built the façade of the main entrance. The other two doors lead into the transept; one, that of the Apostles in pure pointed Gothic, dates from the 14th century, the other is that of the Palau. The additions made to the back of the cathedral detract from its height. The 18th-century restoration rounded the pointed arches, covered the Gothic columns with Corinthian pillars, and redecorated the walls.
205
+
206
+ The dome has no lantern, its plain ceiling being pierced by two large side windows. There are four chapels on either side, besides that at the end and those that open into the choir, the transept, and the sanctuary. It contains many paintings by eminent artists. A silver reredos, which was behind the altar, was carried away in the war of 1808, and converted into coin to meet the expenses of the campaign. There are two paintings by Francisco de Goya in the San Francesco chapel. Behind the Chapel of the Blessed Sacrament is a small Renaissance chapel built by Calixtus III. Beside the cathedral is the chapel dedicated to the Our Lady of the Forsaken (Mare de Déu dels desamparats).
207
+
208
+ The Tribunal de les Aigües (Water Court), a court dating from Moorish times that hears and mediates in matters relating to irrigation water, sits at noon every Thursday outside the Porta dels Apostols (Portal of the Apostles).[91]
209
+
210
+ In 1409, a hospital was founded and placed under the patronage of Santa Maria dels Innocents; to this was attached a confraternity devoted to recovering the bodies of the unfriended dead in the city and within a radius of 5 km (3.1 mi) around it. At the end of the 15th century this confraternity separated from the hospital, and continued its work under the name of "Cofradia para el ámparo de los desamparados". King Philip IV of Spain and the Duke of Arcos suggested the building of the new chapel, and in 1647 the Viceroy, Conde de Oropesa, who had been preserved from the bubonic plague, insisted on carrying out their project. The Blessed Virgin was proclaimed patroness of the city under the title of Virgen de los desamparados (Virgin of the Forsaken), and Archbishop Pedro de Urbina, on 31 June 1652, laid the cornerstone of the new chapel of this name. The archiepiscopal palace, a grain market in the time of the Moors, is simple in design, with an inside cloister and a chapel. In 1357, the arch that connects it with the cathedral was built. Inside the council chamber are preserved the portraits of all the prelates of Valencia.
211
+
212
+ El Temple (the Temple), the ancient church of the Knights Templar, which passed into the hands of the Order of Montesa and was rebuilt in the reigns of Ferdinand VI and Charles III; the former convent of the Dominicans, at one time the headquarters of the Capitan General, the cloister of which has a Gothic wing and chapter room, large columns imitating palm trees; the Colegio del Corpus Christi, which is devoted to the Blessed Sacrament, and in which perpetual adoration is carried on; the Jesuit college, which was destroyed in 1868 by the revolutionary Committee of the Popular Front, but later rebuilt; and the Colegio de San Juan (also of the Society), the former college of the nobles, now a provincial institute for secondary instruction.
213
+
214
+ The largest plaza in Valencia is the Plaça del Ajuntament; it is home to the City Hall (Ajuntament) on its western side and the central post office (Edifici de Correus) on its eastern side, a cinema that shows classic movies, and many restaurants and bars. The plaza is triangular in shape, with a large cement lot at the southern end, normally surrounded by flower vendors. It serves as ground zero during Les Falles, when the fireworks of the Mascletà can be heard every afternoon. There is a large fountain at the northern end.
215
+
216
+ The Plaça de la Mare de Déu contains the Basilica of the Virgin and the Turia fountain, and is a popular spot for locals and tourists. Around the corner is the Plaça de la Reina, with the cathedral, orange trees, and many bars and restaurants.
217
+
218
+ The Turia River was diverted in the 1960s, after severe flooding, and the old riverbed is now the Turia gardens, which contain a children's playground, a fountain, and sports fields. The Palau de la Música is adjacent to the Turia gardens and the City of Arts and Sciences lies at one end. The Valencia Bioparc is a zoo, also located in the Turia riverbed.
219
+
220
+ Other gardens in Valencia include:
221
+
222
+ Valencia is also internationally famous for its football club, Valencia CF, one of the most successful clubs in Europe and in La Liga, which has won the Spanish league six times, including in 2002 and 2004 (the year it also won the UEFA Cup), and was a UEFA Champions League runner-up in 2000 and 2001. The club is currently owned by Peter Lim, a Singaporean businessman who bought the club in 2014. The team's stadium is the Mestalla, which can host up to 49,000 fans. The club's city rival, Levante UD, also plays in La Liga; its stadium is the Estadi Ciutat de València.
223
+
224
+ Valencia is the only city in Spain with two American football teams in LNFA Serie A, the national first division: Valencia Firebats and Valencia Giants. The Firebats have been national champions four times and have represented Valencia and Spain in the European playoffs since 2005. Both teams share the Jardín del Turia stadium.
225
+
226
+ Once a year between 2008 and 2012, the European Formula One Grand Prix took place on the Valencia Street Circuit. Valencia is, with Barcelona, Porto and Monte Carlo, among the only European cities ever to host Formula One World Championship Grands Prix on public roads in the middle of the city. The final race, the 2012 European Grand Prix, saw home driver Fernando Alonso win for Ferrari despite starting halfway down the field. The Valencian Community motorcycle Grand Prix (Gran Premi de la Comunitat Valenciana de motociclisme) is part of the Grand Prix motorcycle racing season at the Circuit Ricardo Tormo (also known as the Circuit de València), held in November in the nearby town of Cheste. Periodically, the Spanish round of the Deutsche Tourenwagen Masters (DTM) touring car championship is held in Valencia.
227
+
228
+ Valencia is also the home of the Asociación Española de Rugby League, the governing body for rugby league in Spain. The city plays host to a number of clubs playing the sport and to date has hosted all the country's home international matches.[95] In 2015 Valencia hosted its first match in the Rugby League European Federation C competition, a qualifier for the 2017 Rugby League World Cup; Spain won the fixture 40-30.[96]
229
+
230
+ These towns are administratively part of the districts of Valencia.
231
+
232
+
233
+
234
+ Valencia is twinned with:[97]
235
+
236
+ Valencia and Xi'an (China) have signed a statement-of-intent, yet to be ratified.
en/5899.html.txt ADDED
@@ -0,0 +1,236 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+ Valencia (Spanish: [baˈlenθja]), officially València (Valencian: [vaˈlensia]),[5] is the capital of the autonomous community of Valencia and the third-largest city in Spain after Madrid and Barcelona, surpassing 800,000 inhabitants in the municipality. The wider urban area also comprising the neighbouring municipalities has a population of around 1.6 million people.[3][6] Valencia is Spain's third largest metropolitan area, with a population ranging from 1.7 to 2.5 million[2] depending on how the metropolitan area is defined. The Port of Valencia is the 5th busiest container port in Europe and the busiest container port on the Mediterranean Sea. The city is ranked as a Beta-global city in the Globalization and World Cities Research Network.[7]
4
+
5
+ Valencia was founded as a Roman colony by the consul Decimus Junius Brutus Callaicus in 138 BC, and called Valentia Edetanorum. In 714 Moroccan and Arab Moors occupied the city, introducing their language, religion and customs; they implemented improved irrigation systems and the cultivation of new crops as well. Valencia was the capital of the Taifa of Valencia. In 1238 the Christian king James I of Aragon conquered the city and divided the land among the nobles who helped him conquer it, as witnessed in the Llibre del Repartiment. He also created the new Kingdom of Valencia, which had its own laws (Furs), with Valencia as its main city and capital. In the 18th century Philip V of Spain abolished the privileges as punishment to the kingdom of Valencia for aligning with the Habsburg side in the War of the Spanish Succession. Valencia was the capital of Spain when Joseph Bonaparte moved the Court there in the summer of 1812. It also served as the capital between 1936 and 1937, during the Second Spanish Republic.
6
+
7
+ The city is situated on the banks of the Turia, on the east coast of the Iberian Peninsula, fronting the Gulf of Valencia on the Mediterranean Sea. Its historic centre is one of the largest in Spain, with approximately 169 ha (420 acres).[improper synthesis?][8]
8
+ Due to its long history, Valencia has numerous celebrations and traditions, such as the Fallas, which were declared Fiestas of National Tourist Interest of Spain in 1965[9] and an Intangible cultural heritage by UNESCO in November 2016. Joan Ribó from Compromís has been the mayor of the city since 2015.
9
+
10
+ The original Latin name of the city was Valentia (IPA: [waˈlɛntɪ.a]), meaning "strength", or "valour", due to the Roman practice of recognising the valour of former Roman soldiers after a war. The Roman historian Livy explains that the founding of Valentia in the 2nd century BC was due to the settling of the Roman soldiers who fought against a Lusitanian rebel, Viriatus, during the Third Lusitanian Raid of the Lusitanian War.[10]
11
+
12
+ During the rule of the Muslim kingdoms in Spain, it had the title Medina at-Tarab ('City of Joy') according to one transliteration, or Medina at-Turab ('City of Sands') according to another, since it was located on the banks of the River Turia. It is not clear if the term Balansiyya was reserved for the entire Taifa of Valencia or also designated the city.[11]
13
+
14
+ By gradual sound changes, Valentia has become Valencia [baˈlenθja] (i.e. before a pausa or nasal sound) or [- βaˈlenθja] (after a continuant) in Castilian and València [vaˈlensia] in Valencian. In Valencian, e with grave accent (è) indicates /ɛ/ in contrast to /e/, but the word València is an exception to this rule, since è is pronounced /e/. The spelling "València" was approved by the AVL based on tradition after a debate on the matter. The name "València" has been the only official name of the city since 2017.[12]
15
+
16
+ Located on the eastern coast of the Iberian Peninsula and the western part of the Mediterranean Sea, fronting the Gulf of Valencia, Valencia lies on the highly fertile alluvial silts accumulated on the floodplain formed in the lower course of the Turia River.[13] At its founding by the Romans, it stood on a river island in the Turia, 6.4 kilometres (4.0 mi) from the sea.
17
+
18
+ The Albufera lagoon, located about 12 km (7 mi) south of the city proper (and part of the municipality), originally was a saltwater lagoon, yet, since the severing of links to the sea, it has eventually become a freshwater lagoon as well as it has progressively decreased in size.[14] The albufera and its environment are exploited for the cultivation of rice in paddy fields, and for hunting and fishing purposes.[14]
19
+
20
+ The City Council bought the lake from the Crown of Spain for 1,072,980 pesetas in 1911,[15] and today it forms the main portion of the Parc Natural de l'Albufera (Albufera Nature Reserve), with a surface area of 21,120 hectares (52,200 acres). In 1976, because of its cultural, historical, and ecological value, it was declared as natural park.
21
+
22
+ Valencia has a subtropical Mediterranean climate (Köppen Csa)[16] with mild winters and hot, dry summers.[17][18]
23
+
24
+ The maximum of precipitation occurs in the Autumn, coinciding with the time of the year when cold drop (gota fría) episodes of heavy rainfall—associated to cut-off low pressure systems at high altitude—[19] are common along the Western mediterranean coast.[20] The year-on-year variability in precipitation may be, however, considerable.[20]
25
+
26
Its average annual temperature is 18.3 °C (64.9 °F): 22.8 °C (73.0 °F) during the day and 13.8 °C (56.8 °F) at night. In the coldest month, January, the maximum daily temperature typically ranges from 14 to 20 °C (57 to 68 °F), while the minimum temperature at night typically ranges from 5 to 10 °C (41 to 50 °F). In the warmest months, July and August, the maximum temperature during the day typically ranges from 28 to 32 °C (82 to 90 °F), and about 21 to 23 °C (70 to 73 °F) at night. March is transitional: the temperature often exceeds 20 °C (68 °F), with an average of 19.3 °C (66.7 °F) during the day and 10.0 °C (50.0 °F) at night. December, January and February are the coldest months, with average temperatures around 17 °C (63 °F) during the day and 8 °C (46 °F) at night. Snowfall is extremely rare; the most recent occasion was a small amount of snow that fell on 11 January 1960.[21] Valencia has one of the mildest winters in Europe, owing to its southern location on the Mediterranean Sea and the Foehn phenomenon. The January average is comparable to temperatures expected for May and September in the major cities of northern Europe.

Valencia, on average, has 2,696 sunshine hours per year, from 155 in December (an average of 5 hours of sunshine a day) to 315 in July (an average of more than 10 hours of sunshine a day). The average temperature of the sea is 14–15 °C (57–59 °F) in winter and 25–26 °C (77–79 °F) in summer.[22][23] Average annual relative humidity is 65%.[24]

Valencia is one of the oldest cities in Spain, founded in the Roman period, c. 138 BC, under the name "Valentia Edetanorum". A few centuries later, with the power vacuum left by the demise of the Roman imperial administration, the Catholic Church assumed the reins of power in the city, coinciding with the first waves of the invading Germanic peoples (Suevi, Vandals and Alans, and later the Visigoths).

The city surrendered to the invading Moors (Berbers and Arabs) about 714 AD,[26] and the cathedral of Saint Vincent was turned into a mosque.

The Castilian nobleman Rodrigo Díaz de Vivar, known as El Cid, in command of a combined Christian and Moorish army, besieged the city beginning in 1092. After the siege ended in May 1094, he ruled the city and its surrounding territory as his own fiefdom for five years from 15 June 1094 to July 1099.

The city remained in the hands of Christian troops until 1102, when the Almoravids retook the city and restored the Muslim religion. Alfonso VI of León and Castile drove them from the city, but was unable to hold it. The Almoravid Mazdali took possession on 5 May 1109, and the Almohads seized control of it in 1171.

Many Jews lived in Valencia during early Muslim rule, including the accomplished Jewish poet Solomon ibn Gabirol, who spent his last years in the city.[27] Jews continued to live in Valencia throughout the Almoravid and Almohad dynasties, many of them being artisans such as silversmiths, shoemakers, blacksmiths, locksmiths, etc.; a few were rabbinic scholars. When the city fell to James I of Aragon, the Jewish population of the city constituted about 7 percent of the population.[27]

In 1238,[28] King James I of Aragon, with an army composed of Aragonese, Catalans, Navarrese and crusaders from the Order of Calatrava, laid siege to Valencia and on 28 September obtained a surrender.[29] Fifty thousand Moors were forced to leave.

The city endured serious troubles in the mid-14th century, including the decimation of the population by the Black Death of 1348 and subsequent years of epidemics — as well as a series of wars and riots that followed. In 1391, the Jewish quarter was destroyed.[27]

Genoese traders promoted the expansion of the cultivation of white mulberry in the area by the late 14th century, later also introducing innovative silk manufacturing techniques; the city became a centre of mulberry production, yet not, at least for a time, a major silk-making centre.[30] The Genoese community in Valencia—comprising merchants, artisans and workers—became, along with Seville's, one of the most important in the Iberian Peninsula.[31]

The 15th century was a time of economic expansion, known as the Valencian Golden Age, during which culture and the arts flourished. Concurrent population growth made Valencia the most populous city in the Crown of Aragon. Some of the landmark buildings of the city were built during the Late Middle Ages, including the Serranos Towers (1392), the Silk Exchange (1482), the Micalet [es], and the Chapel of the Kings of the Convent of Sant Domènec. In painting and sculpture, Flemish and Italian trends had an influence on Valencian artists.

Valencia became a major slave trade centre in the 15th century, second only to Lisbon in the West,[32] prompting a Lisbon–Seville–Valencia axis by the second half of the century, powered by the incipient Portuguese slave trade originating in Western Africa.[33] By the end of the 15th century Valencia was among the largest European cities, being the most populous city in the Hispanic Monarchy and second only to Lisbon in the Iberian Peninsula.[34]

Following the death of Ferdinand II in 1516, the nobiliary estate challenged the Crown amid the relative void of power.[35] The nobles earned the rejection of the people of Valencia, and the whole kingdom was plunged into armed revolt—the Revolt of the Brotherhoods—and full-blown civil war between 1521 and 1522.[35] Muslim vassals were forced to convert in 1526 at the behest of Charles V.[35]

Urban and rural delinquency—linked to phenomena such as vagrancy, gambling, larceny, pimping and false begging—as well as nobiliary banditry, consisting of revenges and rivalries between the aristocratic families, flourished in Valencia during the 16th century.[36]

Also during the 16th century, North African piracy targeted the whole coastline of the Kingdom of Valencia, forcing the fortification of sites.[37] By the late 1520s, the intensification of Barbary corsair activity, together with the troubled domestic situation and the rise of the Atlantic Ocean at the expense of the Mediterranean in global trade networks, put an end to the economic splendor of the city.[38] The piracy also paved the way for the ensuing development of Christian piracy, which had Valencia as one of its main bases in the Iberian Mediterranean.[37] The Berber threat—initially with Ottoman support—generated great insecurity on the coast, and it would not be substantially reduced until the 1580s.[37]

The crisis deepened during the 17th century with the expulsion in 1609 of the Moriscos, descendants of the Muslim population that had converted to Christianity. The Spanish government systematically forced Moriscos to leave the kingdom for Muslim North Africa. They were concentrated in the former Crown of Aragon, and in the Kingdom of Valencia specifically, they constituted roughly a third of the total population.[39] The expulsion caused the financial ruin of some of the Valencian nobility and the bankruptcy of the Taula de Canvi financial institution in 1613.

The decline of the city reached its nadir with the War of the Spanish Succession (1702–1709), which marked the end of the political and legal independence of the Kingdom of Valencia. During the war, Valencia sided with the Habsburg claimant, Charles of Austria, who vowed to protect the laws of the Kingdom of Valencia (Furs); this gained him the sympathy of a wide sector of the Valencian population. On 24 January 1706, Charles Mordaunt, 3rd Earl of Peterborough, 1st Earl of Monmouth, led a handful of English cavalrymen into the city after riding south from Barcelona, captured the nearby fortress at Sagunt, and bluffed the Spanish Bourbon army into withdrawal.

The English held the city for 16 months and defeated several attempts to expel them. After the victory of the Bourbons at the Battle of Almansa on 25 April 1707, the English army evacuated Valencia and Philip V ordered the repeal of the Furs of Valencia as punishment for the kingdom's support of Charles of Austria.[40] By the Nueva Planta decrees (Decretos de Nueva Planta) the ancient Charters of Valencia were abolished and the city came to be governed by the Castilian Charter, as in other places in the Crown of Aragon.

The Valencian economy recovered during the 18th century with the rising manufacture of woven silk and ceramic tiles. The silk industry boomed during this century, with the city replacing Toledo as the main silk-manufacturing centre in Spain.[30] The Palau de Justícia is an example of the affluence manifested in the most prosperous times of Bourbon rule (1758–1802) during the rule of Charles III. The 18th century was the age of the Enlightenment in Europe, and its humanistic ideals influenced such men as Gregory Maians and Pérez Bayer in Valencia, who maintained correspondence with the leading French and German thinkers of the time.

The 19th century began with Spain embroiled in wars with France, Portugal, and England—but the War of Independence most affected the Valencian territories and the capital city. The repercussions of the French Revolution were still felt when Napoleon's armies invaded the Iberian Peninsula. The Valencian people rose up in arms against them on 23 May 1808, inspired by leaders such as Vicent Doménech el Palleter.

The mutineers seized the Citadel, a Supreme Junta government took over, and on 26–28 June, Napoleon's Marshal Moncey attacked the city with a column of 9,000 French imperial troops in the First Battle of Valencia. He failed to take the city in two assaults and retreated to Madrid. Marshal Suchet began a long siege of the city in October 1811, and after intense bombardment forced it to surrender on 8 January 1812. After the capitulation, the French instituted reforms in Valencia, which became the capital of Spain when the Bonapartist pretender to the throne, José I (Joseph Bonaparte, Napoleon's elder brother), moved the Court there in the middle of 1812. The disaster of the Battle of Vitoria on 21 June 1813 obliged Suchet to quit Valencia, and the French troops withdrew in July.

Ferdinand VII became king after the victorious end of the Peninsular War, which freed Spain from Napoleonic domination. When he returned on 24 March 1814 from exile in France, the Cortes requested that he respect the liberal Constitution of 1812, which seriously limited royal powers. Ferdinand refused and went to Valencia instead of Madrid. Here, on 17 April, General Elio invited the King to reclaim his absolute rights and put his troops at the King's disposition. The king abolished the Constitution of 1812 and dissolved the two chambers of the Spanish Parliament on 10 May. Thus began six years (1814–1820) of absolutist rule, but the constitution was reinstated during the Trienio Liberal, a period of three years of liberal government in Spain from 1820 to 1823.

On the death of King Ferdinand VII in 1833, Baldomero Espartero became one of the most ardent defenders of the hereditary rights of the king's daughter, the future Isabella II. During the regency of Maria Cristina, Espartero ruled Spain as its 18th Prime Minister, from 16 September 1840 to 21 May 1841. City life in Valencia carried on in a revolutionary climate, with frequent clashes between liberals and republicans.

The reign of Isabella II as an adult (1843–1868) was a period of relative stability and growth for Valencia. During the second half of the 19th century the bourgeoisie encouraged the development of the city and its environs; landowners were enriched by the introduction of the orange crop and the expansion of vineyards and other crops. This economic boom corresponded with a revival of local traditions and of the Valencian language, which had been ruthlessly suppressed from the time of Philip V.

Works to demolish the walls of the old city started on 20 February 1865.[41] The demolition of the citadel ended after the Glorious Revolution of 1868.[41]

Following the introduction of universal manhood suffrage in the late 19th century, the political landscape in Valencia—until then marked by the bipartisanship characteristic of the early Restoration period—experienced a change, leading to a growth of republican forces gathered around the emerging figure of Vicente Blasco Ibáñez.[42] Not unlike the equally republican Lerrouxism, populist Blasquism [es] came to mobilize the Valencian masses by promoting anticlericalism.[43] Meanwhile, in reaction, the right wing coalesced around several initiatives, such as the Catholic League or the reformulation of Valencian Carlism, and Valencianism did similarly with organizations such as Valencia Nova or the Unió Valencianista.[44]

In the early 20th century Valencia was an industrialised city. The silk industry had disappeared, but there was a large production of hides and skins, wood, metals and foodstuffs, the last with substantial exports, particularly of wine and citrus. Small businesses predominated, but with the rapid mechanisation of industry larger companies were being formed. The best expression of this dynamism was in the regional exhibitions, including that of 1909, held next to the pedestrian avenue L'Albereda (Paseo de la Alameda), which depicted the progress of agriculture and industry. Among the most architecturally successful buildings of the era were those designed in the Art Nouveau style, such as the North Station (Estació del Nord) and the Central and Columbus markets.

World War I (1914–1918) greatly affected the Valencian economy, causing the collapse of its citrus exports. The Second Spanish Republic (1931–1939) opened the way for democratic participation and the increased politicisation of citizens, especially in response to the rise of Conservative Front power in 1933. The march to civil war and the combat in Madrid resulted in the removal of the capital of the Republic to Valencia.

After the repeated failure of the Francoist offensive on besieged Madrid during the Spanish Civil War, Valencia temporarily became the capital of Republican Spain on 6 November 1936. It hosted the government until 31 October 1937.[45]

The city was heavily bombarded by air and sea, mainly by the fascist Italian air force, as well as by the Francoist air force with Nazi German support. By the end of the war the city had survived 442 bombardments, which left 2,831 dead and 847 wounded, although it is estimated that the death toll was higher. The Republican government moved to Barcelona on 31 October 1937. On 30 March 1939, Valencia surrendered and the Nationalist troops entered the city. The postwar years were a time of hardship for Valencians. During Franco's regime speaking or teaching Valencian was prohibited; in a significant reversal it is now compulsory for every schoolchild in Valencia.

The dictatorship of Franco forbade political parties and began a harsh ideological and cultural repression, countenanced and sometimes led by the Catholic Church. Franco's regime also executed some of the main Valencian intellectuals, such as Juan Peset, rector of the University of Valencia. Many others, including Josep Renau and Max Aub, went into exile.

In 1943 Franco decreed that Valencia and Barcelona would have exclusive rights to hold international fairs in Spain.[46] These two cities held this monopoly for more than three decades, until it was abolished in 1979 by the government of Adolfo Suárez.[46]

In October 1957, a major flood of the Turia river (the 1957 Valencia flood) left 81 casualties and caused notable property damage.[47] The disaster led to the remodelling of the city and the creation of a new river bed for the Turia, with the old one becoming one of the city's "green lungs".[47]

The economy began to recover in the early 1960s, and the city experienced explosive population growth through immigration, spurred by the jobs created by major urban projects and infrastructure improvements. With the advent of democracy in Spain, the ancient Kingdom of Valencia was established as a new autonomous entity, the Valencian Community, with the Statute of Autonomy of 1982 designating Valencia as its capital.

Valencia has since experienced a surge in its cultural development, exemplified by exhibitions and performances at such iconic institutions as the Palau de la Música, the Palacio de Congresos, the Metro, the City of Arts and Sciences (Ciutat de les Arts i les Ciències), the Valencian Museum of Enlightenment and Modernity (Museo Valenciano de la Ilustración y la Modernidad), and the Institute of Modern Art (Institut Valencià d'Art Modern). The various works of Santiago Calatrava, a renowned structural engineer, architect, and sculptor, and of the architect Félix Candela have contributed to Valencia's international reputation. These public works and the ongoing rehabilitation of the Old City (Ciutat Vella) have helped improve the city's livability, and tourism is continually increasing.

On 3 July 2006, a major mass transit disaster, the Valencia Metro derailment, left 43 dead and 47 wounded.[48] Days later, on 9 July, the World Day of Families, during Mass at Valencia's Cathedral, Our Lady of the Forsaken Basilica, Pope Benedict XVI used the Sant Calze, a 1st-century Middle Eastern artifact that some Catholics believe is the Holy Grail.[n. 1]

Valencia was selected in 2003 to host the historic America's Cup yacht race, the first European city ever to do so. The America's Cup matches took place from April to July 2007. On 3 July 2007, Alinghi defeated Team New Zealand to retain the America's Cup. Twenty-two days later, on 25 July 2007, the leaders of the Alinghi syndicate, holder of the America's Cup, officially announced that Valencia would be the host city for the 33rd America's Cup, held in June 2009.[50]

The results of the Valencia municipal elections from 1991 to 2011 delivered 24 years of uninterrupted rule (1991–2015) by the People's Party (PP) and Mayor Rita Barberá, who was invested in office thanks to the support of the Valencian Union. Barberá was ousted by left-leaning forces after the 2015 municipal election, with Joan Ribó (Compromís) becoming the new mayor.

Valencia enjoyed strong economic growth before the economic crisis of 2008, much of it spurred by tourism and the construction industry,[citation needed] with concurrent development and expansion of telecommunications and transport. The city's economy is service-oriented, as nearly 84% of the working population is employed in service sector occupations.[citation needed] However, the city still maintains an important industrial base, with 8.5% of the population employed in this sector. Growth has recently improved in the manufacturing sector, mainly automobile assembly (the large Ford Motor Company factory lies in Almussafes, a suburb of the city[51]). Agricultural activities are still carried on in the municipality, though of relatively minor importance, with only 1.9% of the working population and 3,973 ha (9,820 acres) planted mostly in orchards and citrus groves.

Since the onset of the Great Recession (2008), Valencia experienced a growing unemployment rate and increased government debt, and the city government introduced severe spending cuts.

In 2009, Valencia was designated "the 29th fastest-improving European city".[52] Its influence in commerce, education, entertainment, media, fashion, science and the arts contributes to its status as one of the world's "Beta"-rank global cities.[7]

The city is the seat of one of the four stock exchanges in Spain, the Bolsa de Valencia [es], part of Bolsas y Mercados Españoles (BME), owned by SIX Group.[53]

The Valencia metropolitan area had a GDP amounting to $52.7 billion, and $28,141 per capita.[54]

Valencia's port is the biggest on the western coast of the Mediterranean,[55] the first in Spain in container traffic as of 2008[56] and the second in Spain[57] in total traffic, handling 20% of Spain's exports.[58] The main exports are foodstuffs and beverages. Other exports include oranges, furniture, ceramic tiles, fans, textiles and iron products. Valencia's manufacturing sector focuses on metallurgy, chemicals, textiles, shipbuilding and brewing. Small and medium-sized industries are an important part of the local economy, and before the 2008 crisis unemployment was lower than the Spanish average.

Valencia's port underwent radical changes to accommodate the 32nd America's Cup in 2007. It was divided into two parts—one was unchanged while the other section was modified for the America's Cup festivities. The two sections remain divided by a wall that projects far into the water to maintain clean water on the America's Cup side.

Public transport is provided by the Ferrocarrils de la Generalitat Valenciana (FGV), which operates the Metrovalencia and other rail and bus services. The Estació del Nord (North Station) is the main railway terminus in Valencia. A new temporary station, Estació de València-Joaquín Sorolla, has been built on land adjacent to this terminus to accommodate high-speed AVE trains to and from Madrid, Barcelona, Seville and Alicante. Valencia Airport is situated 9 km (5.6 mi) west of Valencia city centre. Alicante Airport is situated about 133 km (83 mi) south of the city centre.

The City of Valencia also makes a bicycle-sharing system named Valenbisi available to both visitors and residents. As of 13 October 2012, the system had 2,750 bikes distributed across 250 stations throughout the city.[59]

The average amount of time people spend commuting with public transit in Valencia, for example to and from work, on a weekday is 44 minutes; 6% of public transit riders ride for more than 2 hours every day. The average amount of time people wait at a stop or station for public transit is 10 minutes, while 9% of riders wait for over 20 minutes on average every day. The average distance people usually ride in a single trip with public transit is 5.9 km (3.7 mi), while 8% travel over 12 km (7.5 mi) in a single direction.[60]

Starting in the mid-1990s, Valencia, formerly an industrial centre, saw rapid development that expanded its cultural and tourism possibilities and transformed it into a newly vibrant city. Many local landmarks were restored, including the ancient towers of the medieval city (Serrans Towers and Quart Towers) and the Sant Miquel dels Reis monastery (Monasterio de San Miguel de los Reyes), which now holds a conservation library. Whole sections of the old city, for example the Carmen Quarter, have been extensively renovated. The Passeig Marítim, a 4 km (2 mi) long palm tree-lined promenade, was constructed along the beaches of the north side of the port (Platja de Les Arenes, Platja del Cabanyal and Platja de la Malva-rosa).

The city has numerous convention centres and venues for trade events, among them the Feria Valencia Convention and Exhibition Centre (Institución Ferial de Valencia) and the Palau de Congres (Conference Palace), and several 5-star hotels to accommodate business travellers.

In its long history, Valencia has acquired many local traditions and festivals, among them the Falles, which were declared Celebrations of International Tourist Interest (Festes de Interés Turístic Internacional) on 25 January 1965 and added to UNESCO's intangible cultural heritage of humanity list on 30 November 2016, and the Water Tribunal of Valencia (Tribunal de les Aigües de València), which was declared an intangible cultural heritage of humanity (Patrimoni Cultural Inmaterial de la Humanitat) in 2009. In addition, Valencia has hosted world-class events that helped shape the city's reputation and put it in the international spotlight, e.g., the Regional Exhibition of 1909, the 32nd and 33rd America's Cup competitions, the European Grand Prix of Formula One auto racing, the Valencia Open 500 tennis tournament, and the Global Champions Tour of equestrian sports. The final round of the MotoGP Championship is held annually at the Circuit de la Comunitat Valenciana.

The 2007 America's Cup yachting races were held at Valencia in June and July 2007 and attracted huge crowds. The Louis Vuitton stage drew 1,044,373 visitors and the America's Cup match drew 466,010 visitors to the event.[61]

Valencia is a municipality, the basic local administrative division in Spain. The Ayuntamiento is the body charged with municipal government and administration.[62] The plenary of the ayuntamiento/ajuntament (known as the Consell Municipal de València in the case of Valencia) is formed by 33 elected municipal councillors, who in turn invest the mayor. The last municipal election took place on 26 May 2019. Joan Ribó (Compromís) has served as Mayor since 2015; he renewed his mandate for a second term following the 2019 election.[63]

The third largest city in Spain and the 24th most populous municipality in the European Union, Valencia has a population of 809,267[64] within its administrative limits on a land area of 134.6 km2 (52 sq mi). The urban area of Valencia extending beyond the administrative city limits has a population of between 1,564,145[65] and 1,595,000.[3] According to the Spanish Ministry of Development, the Greater Urban Area (Spanish: Gran Área Urbana) within the Horta of Valencia has a population of 1,551,585 on an area of 62,881 km2 (24,278 sq mi); in the period 2001–2011 its population increased by 191,842 people, or 14.1%.[6] About 2 million people live in the Valencia metropolitan area. The metropolitan area has a population of 1,770,742 according to CityPopulation.de,[66] 2,300,000 according to the Organization for Economic Cooperation and Development,[67] 2,513,965 according to the World Gazetteer[68] and 2,522,383 according to Eurostat.[2] Between 2007 and 2008 there was a 14% increase in the foreign-born population, with the largest numeric increases by country being from Bolivia, Romania and Italy. This growth in the foreign-born population, which rose from 1.5% in 2000[69] to 9.1% in 2009,[70] has also occurred in the two larger cities of Madrid and Barcelona.[71] The main countries of origin were Romania, the United Kingdom and Bulgaria.[72]

The 10 largest groups of foreign-born people in 2018 were:

Valencia is known internationally for the Falles (Les Falles), a local festival held in March, as well as for paella valenciana, traditional Valencian ceramics, craftsmanship in traditional dress, and the architecture of the City of Arts and Sciences, designed by Santiago Calatrava and Félix Candela.

There are also a number of well-preserved traditional Catholic festivities throughout the year. Holy Week celebrations in Valencia are considered some of the most colourful in Spain.[73]

Valencia was once the site of the Formula One European Grand Prix, first hosting the event on 24 August 2008; the race was dropped from the calendar at the beginning of the 2013 season. The city still holds the annual MotoGP race at the Circuit Ricardo Tormo, usually the last race of the season, in November.

The University of Valencia (officially Universitat de València Estudi General) was founded in 1499, making it one of the oldest surviving universities in Spain and the oldest university in the Valencian Community. It was listed as one of the four leading Spanish universities in the 2011 Shanghai Academic Ranking of World Universities.

In 2012, Boston's Berklee College of Music opened a satellite campus at the Palau de les Arts Reina Sofia, its first and only international campus outside the U.S.[74] Since 2003, Valencia has also hosted the music courses of Musikeon, the leading musical institution in the Spanish-speaking world.

Valencia is known for its gastronomic culture. Paella, a simmered rice dish with meat (usually chicken or rabbit) or seafood, was born in Valencia. Other traditional dishes of Valencian gastronomy include "fideuà", "arròs a banda", "arròs negre" (black rice), "fartons", "bunyols", the Spanish omelette, "pinchos" or "tapas", and "calamares" (squid).

Valencia was also the birthplace of the cold xufa beverage known as orxata, popular in many parts of the world, including the Americas.

Valencian (the way Valencians refer to the Catalan language) and Spanish are the two official languages. Spanish is currently the predominant language in the city proper.[75] Valencia proper and its surrounding metropolitan area are, along with the Alicante area, the traditionally Valencian-speaking territories of the Valencian Community where the Valencian language is least spoken and read.[76] According to a 2019 survey commissioned by the local government, 76% of the population use only Spanish in their daily life, 1.3% use only Valencian, while 17.6% of the population use both languages interchangeably.[77] However, with regard to the education system and according to the 1983 regional Law on the Use and Teaching of the Valencian Language, the municipality of Valencia is included within the territory of Valencian linguistic predominance.[78] In 1993, the municipal government agreed to use Valencian exclusively for the signage of new street plaques.[79]

Every year, the five days and nights from 15 to 19 March, called Falles, are a continual festival in Valencia; beginning on 1 March, the popular pyrotechnic events called mascletàes start every day at 2:00 pm. The Falles (Fallas in Spanish) is an enduring tradition in Valencia and other towns in the Valencian Community,[80] where it has become an important tourist attraction. The festival began in the 18th century,[81] and came to be celebrated on the night of the feast day of Saint Joseph, the patron saint of carpenters, with the burning of waste planks of wood from their workshops, as well as worn-out wooden objects brought by people in the neighborhood.[82]

This tradition continued to evolve, and eventually the parots were dressed with clothing to look like people—these were the first ninots, often with features identifiable as those of a well-known person from the neighborhood. In 1901 the city inaugurated the awarding of prizes for the best Falles monuments,[81] and neighborhood groups still vie with each other to make the most impressive and outrageous creations.[83] Their intricate assemblages, placed on top of pedestals for better visibility, depict famous personalities and topical subjects of the past year, presenting humorous and often satirical commentary on them.

On the night of 19 March, Valencians burn all the Falles in an event called "La Cremà".

The Setmana Santa Marinera [es], as Holy Week is known in the city, was declared a "Festival of National Tourist Interest" in 2012.[84]

Major monuments include Valencia Cathedral, the Torres de Serrans, the Torres de Quart (es:Torres de Quart), the Llotja de la Seda (declared a World Heritage Site by UNESCO in 1996), and the Ciutat de les Arts i les Ciències (City of Arts and Sciences), an entertainment-based cultural and architectural complex designed by Santiago Calatrava and Félix Candela.[85] The Museu de Belles Arts de València houses a large collection of paintings from the 14th to the 18th centuries, including works by Velázquez, El Greco, and Goya, as well as an important series of engravings by Piranesi.[86] The Institut Valencià d'Art Modern (Valencian Institute of Modern Art) houses both permanent collections and temporary exhibitions of contemporary art and photography.[87]

The ancient winding streets of the Barrio del Carmen contain buildings dating to Roman and Arabic times. The Cathedral, built between the 13th and 15th centuries, is primarily of Valencian Gothic style but contains elements of Baroque and Romanesque architecture. Beside the cathedral is the Gothic Basilica of the Virgin (Basílica de la Mare de Déu dels Desamparats). The 15th-century Serrans and Quart towers are part of what was once the wall surrounding the city.

UNESCO has recognised the Silk Exchange market (La Llotja de la Seda), erected in early Valencian Gothic style, as a World Heritage Site.[88] The Central Market (Mercat Central), in Valencian Art Nouveau style, is one of the largest in Europe. The main railway station, Estació del Nord, is built in Valencian Art Nouveau (a Spanish version of Art Nouveau) style.

World-renowned (and city-born) architect Santiago Calatrava produced the futuristic City of Arts and Sciences (Ciutat de les Arts i les Ciències), which contains an opera house/performing arts centre, a science museum, an IMAX cinema/planetarium, an oceanographic park and other structures such as a long covered walkway and restaurants. Calatrava is also responsible for the bridge named after him in the centre of the city. The Palau de la Música de València (Music Palace) is another noteworthy example of modern architecture in Valencia.

Cathedral of Valencia

The Gothic courtyard of the Palace of the Admiral of Aragon (Palau de l'Almirall)

Convento de Santo Domingo (1300–1640)

Llotja de la Seda (Silk Exchange, interior)

Mercat de Colon, in Valencian Art Nouveau style

Palace of the Marqués de Dos Aguas

L'Hemisfèric (IMAX Dome cinema) and Palau de les Arts Reina Sofia

Museu de les Ciències Príncipe Felipe

Assut de l'Or Bridge, with L'Àgora behind

Sant Joan de l'Hospital church, built in 1316 (except for a Baroque chapel)

Mudéjar (Christian) baths, Banys de l'Almirall (1313–1320)

The Museum of Fine Arts of Valencia, which holds the second-largest collection of paintings in Spain,[89][90] after the Prado Museum

Mercat Central (Central Market), in Valencian Art Nouveau style

Real Colegio Seminario del Corpus Christi

One of the few arch-bridges linking the cathedral with neighboring buildings, this one built in 1666

Monastery of San Miguel de los Reyes, built between 1548 and 1763

The Valencia Cathedral was called Iglesia Major in the early days of the Reconquista, then Iglesia de la Seu (Seu is from the Latin sedes, i.e., (archiepiscopal) see), and by virtue of the papal concession of 16 October 1866, it was called the Basilica Metropolitana. It is situated in the centre of the ancient Roman city, where some believe the temple of Diana stood. In Gothic times, it seems to have been dedicated to the Holy Saviour; the Cid dedicated it to the Blessed Virgin; King James I of Aragon did likewise, leaving in the main chapel the image of the Blessed Virgin which he carried with him, reputed to be the one now preserved in the sacristy. The Moorish mosque, which had been converted into a Christian church by the conqueror, was deemed unworthy of the title of cathedral of Valencia, and in 1262 Bishop Andrés de Albalat laid the cornerstone of the new Gothic building, with three naves; these reach only to the choir of the present building. Bishop Vidal de Blanes built the chapter hall, and James I added the tower, called El Micalet because it was blessed on St. Michael's day in 1418. The tower is about 58 metres (190 feet) high and is topped with a belfry (1660–1736).

In the 15th century the dome was added and the naves extended behind the choir, uniting the building to the tower and forming a main entrance. Archbishop Luis Alfonso de los Cameros began the building of the main chapel in 1674; the walls were decorated with marbles and bronzes in the Baroque style of that period. At the beginning of the 18th century the German Conrad Rudolphus built the façade of the main entrance. The other two doors lead into the transept; one, that of the Apostles, in pure pointed Gothic, dates from the 14th century; the other is that of the Palau. The additions made to the back of the cathedral detract from its height. The 18th-century restoration rounded the pointed arches, covered the Gothic columns with Corinthian pillars, and redecorated the walls.

The dome has no lantern, its plain ceiling being pierced by two large side windows. There are four chapels on either side, besides the one at the end and those that open into the choir, the transept, and the sanctuary. The cathedral contains many paintings by eminent artists. A silver reredos, which was behind the altar, was carried away in the war of 1808 and converted into coin to meet the expenses of the campaign. There are two paintings by Francisco de Goya in the San Francesco chapel. Behind the Chapel of the Blessed Sacrament is a small Renaissance chapel built by Calixtus III. Beside the cathedral is the chapel dedicated to Our Lady of the Forsaken (Mare de Déu dels desamparats).

The Tribunal de les Aigües (Water Court), a court dating from Moorish times that hears and mediates in matters relating to irrigation water, sits at noon every Thursday outside the Porta dels Apostols (Portal of the Apostles).[91]

In 1409, a hospital was founded and placed under the patronage of Santa Maria dels Innocents; to it was attached a confraternity devoted to recovering the bodies of the unfriended dead in the city and within a radius of 5 km (3.1 mi) around it. At the end of the 15th century this confraternity separated from the hospital and continued its work under the name of "Cofradia para el ámparo de los desamparados". King Philip IV of Spain and the Duke of Arcos suggested the building of the new chapel, and in 1647 the Viceroy, Conde de Oropesa, who had been preserved from the bubonic plague, insisted on carrying out their project. The Blessed Virgin was proclaimed patroness of the city under the title of Virgen de los desamparados (Virgin of the Forsaken), and Archbishop Pedro de Urbina, on 31 June 1652, laid the cornerstone of the new chapel of this name. The archiepiscopal palace, a grain market in the time of the Moors, is simple in design, with an inside cloister and a chapel. In 1357, the arch that connects it with the cathedral was built. Inside the council chamber are preserved the portraits of all the prelates of Valencia.

Other notable buildings include El Temple (the Temple), the ancient church of the Knights Templar, which passed into the hands of the Order of Montesa and was rebuilt in the reigns of Ferdinand VI and Charles III; the former convent of the Dominicans, at one time the headquarters of the Capitan General, the cloister of which has a Gothic wing and a chapter room with large columns imitating palm trees; the Colegio del Corpus Christi, which is devoted to the Blessed Sacrament and in which perpetual adoration is carried on; the Jesuit college, which was destroyed in 1868 by the revolutionary Committee of the Popular Front but later rebuilt; and the Colegio de San Juan (also of the Society), the former college of the nobles, now a provincial institute for secondary instruction.

The largest plaza in Valencia is the Plaça de l'Ajuntament; it is home to the City Hall (Ajuntament) on its western side and the central post office (Edifici de Correus) on its eastern side, a cinema that shows classic movies, and many restaurants and bars. The plaza is triangular in shape, with a large cement lot at the southern end, normally surrounded by flower vendors. It serves as ground zero during Les Falles, when the fireworks of the Mascletà can be heard every afternoon. There is a large fountain at the northern end.

The Plaça de la Mare de Déu contains the Basilica of the Virgin and the Turia fountain, and is a popular spot for locals and tourists. Around the corner is the Plaça de la Reina, with the cathedral, orange trees, and many bars and restaurants.

The Turia River was diverted in the 1960s, after severe flooding, and the old riverbed is now the Turia gardens, which contain a children's playground, a fountain, and sports fields. The Palau de la Música is adjacent to the Turia gardens and the City of Arts and Sciences lies at one end. The Valencia Bioparc is a zoo, also located in the Turia riverbed.

Other gardens in Valencia include:

Valencia is also internationally famous for its football club, Valencia CF, one of the most successful clubs in Europe and La Liga, which has won the Spanish league six times, including in 2002 and 2004 (the year it also won the UEFA Cup), and was a UEFA Champions League runner-up in 2000 and 2001. The club is currently owned by Peter Lim, a Singaporean businessman who bought it in 2014. The team's stadium is the Mestalla, which can host up to 49,000 fans. The club's city rival, Levante UD, also plays in La Liga; its stadium is the Estadi Ciutat de València.

Valencia is the only city in Spain with two American football teams in LNFA Serie A, the national first division: the Valencia Firebats and the Valencia Giants. The Firebats have been national champions four times and have represented Valencia and Spain in the European playoffs since 2005. Both teams share the Jardín del Turia stadium.

From 2008 to 2012 the European Formula One Grand Prix took place annually on the Valencia Street Circuit. Valencia is, along with Barcelona, Porto and Monte Carlo, one of the only European cities ever to host Formula One World Championship Grands Prix on public roads in the middle of the city. The final race, the 2012 European Grand Prix, saw home driver Fernando Alonso win for Ferrari despite starting halfway down the field. The Valencian Community motorcycle Grand Prix (Gran Premi de la Comunitat Valenciana de motociclisme) is part of the Grand Prix motorcycle racing season at the Circuit Ricardo Tormo (also known as the Circuit de València), held in November in the nearby town of Cheste. Periodically, the Spanish round of the Deutsche Tourenwagen Masters (DTM) touring car championship is held in Valencia.

Valencia is also the home of the Asociación Española de Rugby League, the governing body for rugby league in Spain. The city plays host to a number of clubs playing the sport and to date has hosted all the country's home international matches.[95] In 2015 Valencia hosted its first match in the Rugby League European Federation C competition, a qualifier for the 2017 Rugby League World Cup; Spain won the fixture 40–30.[96]

These towns are administratively within the districts of Valencia.

Valencia is twinned with:[97]

Valencia and Xi'an (China) have signed a statement-of-intent, yet to be ratified.

en/59.html.txt ADDED
@@ -0,0 +1,207 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
An airport is an aerodrome with extended facilities, mostly for commercial air transport.[1][2] Airports often have facilities to store and maintain aircraft, and a control tower. An airport consists of a landing area, which comprises an aerially accessible open space including at least one operationally active surface such as a runway for a plane to take off[3] or a helipad,[4] and often includes adjacent utility buildings such as control towers, hangars[5] and terminals. Larger airports may have airport aprons, taxiway bridges, air traffic control centres, passenger facilities such as restaurants and lounges, and emergency services. In some countries, the US in particular, airports also typically have one or more fixed-base operators, serving general aviation.

An airport solely serving helicopters is called a heliport. An airport for use by seaplanes and amphibious aircraft is called a seaplane base. Such a base typically includes a stretch of open water for takeoffs and landings, and seaplane docks for tying-up.

An international airport has additional facilities for customs and passport control as well as incorporating all the aforementioned elements. Such airports rank among the most complex and largest of all built typologies, with 15 of the top 50 buildings by floor area being airport terminals.[citation needed][6]

The terms aerodrome, airfield, and airstrip also refer to airports, and the terms heliport, seaplane base, and STOLport refer to airports dedicated exclusively to helicopters, seaplanes, and short take-off and landing aircraft.

In colloquial use in certain environments, the terms airport and aerodrome are often interchanged. However, in general, the term airport may imply or confer a certain stature upon the aviation facility that other aerodromes may not have achieved. In some jurisdictions, airport is a legal term of art reserved exclusively for those aerodromes certified or licensed as airports by the relevant national aviation authority after meeting specified certification criteria or regulatory requirements.[7]

That is to say, all airports are aerodromes, but not all aerodromes are airports. In jurisdictions where there is no legal distinction between aerodrome and airport, which term to use in the name of an aerodrome may be a commercial decision. In US technical/legal usage, landing area is used instead of aerodrome, and airport means "a landing area used regularly by aircraft for receiving or discharging passengers or cargo".[8]

Smaller or less-developed airfields, which represent the vast majority, often have a single runway shorter than 1,000 m (3,300 ft). Larger airports for airline flights generally have paved runways of 2,000 m (6,600 ft) or longer. Skyline Airport in Inkom, Idaho has a runway that is only 122 m (400 ft) long.[9]

In the United States, the minimum dimensions for dry, hard landing fields are defined by the FAR Landing And Takeoff Field Lengths. These include considerations for safety margins during landing and takeoff.

The longest public-use runway in the world is at Qamdo Bamda Airport in China. It has a length of 5,500 m (18,045 ft). The world's widest paved runway is at Ulyanovsk Vostochny Airport in Russia and is 105 m (344 ft) wide.

As of 2009, the CIA stated that there were approximately 44,000 "airports or airfields recognizable from the air" around the world, including 15,095 in the US, the most of any country.[10][11]

Most of the world's large airports are owned by local, regional, or national government bodies, which then lease the airport to private corporations that oversee its operation. For example, in the UK the state-owned British Airports Authority originally operated eight of the nation's major commercial airports; it was subsequently privatized in the late 1980s and, following its takeover by the Spanish Ferrovial consortium in 2006, has been further divested and downsized to operating just Heathrow. Germany's Frankfurt Airport is managed by the quasi-private firm Fraport. In India, GMR Group operates, through joint ventures, Indira Gandhi International Airport and Rajiv Gandhi International Airport, while Bengaluru International Airport and Chhatrapati Shivaji International Airport are controlled by GVK Group. The rest of India's airports are managed by the Airports Authority of India. In Pakistan, nearly all civilian airports are owned and operated by the Pakistan Civil Aviation Authority, except for Sialkot International Airport, which has the distinction of being the first privately owned public airport in Pakistan and South Asia.[citation needed]

In the US, commercial airports are generally operated directly by government entities or government-created airport authorities (also known as port authorities), such as the Los Angeles World Airports authority that oversees several airports in the Greater Los Angeles area, including Los Angeles International Airport.[citation needed]

In Canada, the federal authority, Transport Canada, divested itself of all but the remotest airports in 1999/2000. Now most airports in Canada are owned and operated by individual legal authorities or are municipally owned.

Many US airports still lease part or all of their facilities to outside firms, which operate functions such as retail management and parking. All US commercial airport runways are certified by the FAA[12] under the Code of Federal Regulations Title 14 Part 139, "Certification of Commercial Service Airports",[13] but maintained by the local airport under the regulatory authority of the FAA.

Despite the reluctance to privatize airports in the US (even though the FAA has sponsored a privatization program since 1996), the government-owned, contractor-operated (GOCO) arrangement is the standard for the operation of commercial airports in the rest of the world.

+ The Airport & Airway Trust Fund (AATF) was created by the Airport and Airway Development in 1970 which finances aviation programs in the United States.[14] Airport Improvement Program (AIP), Facilities and Equipment (F&E), and Research, Engineering, and Development (RE&D) are the three major accounts of Federal Aviation Administration which are financed by the AATF, as well as pays for the FAA's Operation and Maintenance (O&M) account.[15] The funding of these accounts are dependent on the taxes the airports generate of revenues. Passenger tickets, fuel, and cargo tax are the taxes that are paid by the passengers and airlines help fund these accounts.[16]
34
+
35
+ Airports revenues are divided into three major parts: aeronautical revenue, non-aeronautical revenue, and non-operating revenue. Aeronautical revenue makes up 56%, non-aeronautical revenue makes up 40%, and non-operating revenue makes up 4% of the total revenue of airports.[17]
36
+
37
Aeronautical revenue is generated through airline rents and landing, passenger service, parking, and hangar fees. Landing fees are charged per aircraft for landing an airplane on the airport property.[18] Landing fees are calculated from the landing weight and the size of the aircraft; rates vary, but most airports charge a fixed rate plus an additional charge for extra weight.[19] Passenger service fees are charged per passenger for the facilities used on a flight, such as water, food, Wi-Fi and entertainment, and are paid as part of the airline ticket.[citation needed] Aircraft parking is also a major revenue source for airports: aircraft are parked for a certain amount of time before or after takeoff, and a fee is paid for the time they occupy the space.[20] Every airport sets its own parking rates; John F. Kennedy International Airport in New York City, for example, charges $45 per hour for an aircraft of 100,000 pounds, with the price increasing with weight.[21]

Non-aeronautical revenue is gained through things other than aircraft operations. It includes lease revenue from compatible land-use development, non-aeronautical building leases, retail and concession sales, rental car operations, parking and in-airport advertising.[22] Concession revenue is a big part of the non-aeronautical revenue airports make through duty-free shops, bookstores, restaurants and currency exchange.[20] Car parking is a growing source of revenue for airports, as more people use airport parking facilities; O'Hare International Airport in Chicago, for example, charges $2 per hour for every car.[23]

Airports are divided into landside and airside areas. The landside area is open to the public, while access to the airside area is tightly controlled. The airside area includes all parts of the airport around the aircraft, and the parts of the buildings that are accessible only to passengers and staff. Passengers and staff must be checked by security before being permitted to enter the airside area. Conversely, passengers arriving from an international flight must pass through border control and customs to access the landside area, where they can exit the airport. Many major airports will issue a secure keycard called an airside pass to employees, as some roles require employees to frequently move back and forth between landside and airside as part of their duties.

A terminal is a building with passenger facilities. Small airports have one terminal. Large ones often have multiple terminals, though some large airports like Amsterdam Airport Schiphol still have one terminal. The terminal has a series of gates, which provide passengers with access to the plane.

The following facilities are essential for departing passengers:

The following facilities are essential for arriving passengers:

For both sets of passengers, there must be a link between the passenger facilities and the aircraft, such as jet bridges or airstairs. There also needs to be a baggage handling system, to transport baggage from the baggage drop-off to departing planes, and from arriving planes to the baggage reclaim.

The area where the aircraft park to load passengers and baggage is known as an apron or ramp (or, incorrectly, "the tarmac"[24]).

Airports with international flights have customs and immigration facilities. However, as some countries have agreements that allow travel between them without customs and immigration checks, such facilities are not a definitive requirement for an international airport. International flights often require a higher level of physical security, although in recent years many countries have adopted the same level of security for international and domestic travel.

"Floating airports" are being designed which could be located out at sea and which would use designs such as pneumatic stabilized platform technology.

Airport security normally requires baggage checks, metal screenings of individual persons, and rules against any object that could be used as a weapon. Since the September 11 attacks and the Real ID Act of 2005, airport security has dramatically increased, becoming tighter and stricter than ever before.

Most major airports provide commercial outlets for products and services. Most of these companies, many of which are internationally known brands, are located within the departure areas. These include clothing boutiques and restaurants and in the US amounted to $4.2 billion in 2015.[25] Prices charged for items sold at these outlets are generally higher than those outside the airport. However, some airports now regulate costs to keep them comparable to "street prices". This term is misleading as prices often match the manufacturers' suggested retail price (MSRP) but are almost never discounted.[citation needed]

Apart from major fast food chains, some airport restaurants offer regional cuisine specialties for those in transit so that they may sample local food or culture without leaving the airport.[26]

Some airport structures include on-site hotels built within or attached to a terminal building. Airport hotels have grown popular due to their convenience for transient passengers and easy accessibility to the airport terminal. Many airport hotels also have agreements with airlines to provide overnight lodging for displaced passengers.

Major airports in such countries as Russia and Japan offer miniature sleeping units within the airport that are available for rent by the hour. The smallest type is the capsule hotel popular in Japan. A slightly larger variety is known as a sleep box. An even larger type is provided by the company YOTEL.

Airports may also contain premium and VIP services. The premium and VIP services may include express check-in and dedicated check-in counters. These services are usually reserved for first and business class passengers, premium frequent flyers, and members of the airline's clubs. Premium services may sometimes be open to passengers who are members of a different airline's frequent flyer program. This can sometimes be part of a reciprocal deal, as when multiple airlines are part of the same alliance, or as a ploy to attract premium customers away from rival airlines.

Sometimes these premium services will be offered to a non-premium passenger if the airline has made a mistake in handling of the passenger, such as unreasonable delays or mishandling of checked baggage.

Airline lounges frequently offer free or reduced-cost food, as well as alcoholic and non-alcoholic beverages. Lounges themselves typically have seating, showers, quiet areas, televisions, computers, Wi-Fi and Internet access, and power outlets that passengers may use for their electronic equipment. Some airline lounges employ baristas, bartenders and gourmet chefs.

Airlines sometimes operate multiple lounges within one airport terminal, allowing ultra-premium customers, such as first class passengers, additional services which are not available to other premium customers. Multiple lounges may also prevent overcrowding of the lounge facilities.

+ In addition to people, airports move cargo around the clock. Cargo airlines often have their own on-site and adjacent infrastructure to transfer parcels between ground and air.
77
+
78
+ Cargo terminal facilities are areas where an international airport's export cargo is stored after customs clearance and prior to loading on the aircraft. Similarly, import cargo that is offloaded must be held in bond before the consignee decides to take delivery. Areas must be set aside for examination of export and import cargo by the airport authorities. Designated areas or sheds may be allocated to airlines or freight forwarding agencies.
79
+
80
+ Every cargo terminal has a landside and an airside. The landside is where exporters and importers, either through their agents or by themselves, deliver or collect shipments, while the airside is where loads are moved to or from the aircraft. In addition, cargo terminals are divided into distinct areas: export, import, and interline or transshipment.
81
+
82
+ Airports require parking lots for passengers who may leave their cars at the airport for long periods of time. Large airports will also have car-rental firms, taxi ranks, bus stops and sometimes a train station.
83
+
84
+ Many large airports are located near railway trunk routes for seamless connection of multimodal transport, for instance Frankfurt Airport, Amsterdam Airport Schiphol, London Heathrow Airport, Tokyo Haneda Airport, Tokyo Narita Airport, London Gatwick Airport and London Stansted Airport. It is also common to connect an airport and a city with rapid transit, light rail lines or other non-road public transport systems. Some examples of this would include the AirTrain JFK at John F. Kennedy International Airport in New York, Link Light Rail that runs from the heart of downtown Seattle to Seattle–Tacoma International Airport, and the Silver Line T at Boston's Logan International Airport by the Massachusetts Bay Transportation Authority (MBTA). Such a connection lowers the risk of missed flights due to traffic congestion. Large airports usually also have access through controlled-access highways ('freeways' or 'motorways') from which motor vehicles enter either the departure loop or the arrival loop.
85
+
86
+ The distances passengers need to move within a large airport can be substantial. It is common for airports to provide moving walkways, buses, and rail transport systems. Some airports like Hartsfield–Jackson Atlanta International Airport and London Stansted Airport have a transit system that connects some of the gates to a main terminal. Airports with more than one terminal have a transit system to connect the terminals together, such as John F. Kennedy International Airport, Mexico City International Airport and London Gatwick Airport.
87
+
88
+ There are three types of surface that aircraft operate on:
89
+
90
+ Air traffic control (ATC) is the task of managing aircraft movements and making sure they are safe, orderly and expeditious. At the largest airports, air traffic control is a series of highly complex operations that requires managing frequent traffic that moves in all three dimensions.
91
+
92
+ A "towered" or "controlled" airport has a control tower where the air traffic controllers are based. Pilots are required to maintain two-way radio communication with the controllers, and to acknowledge and comply with their instructions. A "non-towered" airport has no operating control tower and therefore two-way radio communications are not required, though it is good operating practice for pilots to transmit their intentions on the airport's common traffic advisory frequency (CTAF) for the benefit of other aircraft in the area. The CTAF may be a Universal Integrated Community (UNICOM), MULTICOM, Flight Service Station (FSS), or tower frequency.
93
+
94
+ The majority of the world's airports are small facilities without a tower. Not all towered airports have 24/7 ATC operations. In those cases, non-towered procedures apply when the tower is not in use, such as at night. Non-towered airports come under area (en-route) control. Remote and virtual tower (RVT) is a system in which ATC is handled by controllers who are not present at the airport itself.
95
+
96
+ Air traffic control responsibilities at airports are usually divided into at least two main areas: ground and tower, though a single controller may work both stations. The busiest airports may subdivide responsibilities further, with clearance delivery, apron control, and/or other specialized ATC stations.
97
+
98
+ Ground control is responsible for directing all ground traffic in designated "movement areas", except the traffic on runways. This includes planes, baggage trains, snowplows, grass cutters, fuel trucks, stair trucks, airline food trucks, conveyor belt vehicles and other vehicles. Ground control will instruct these vehicles on which taxiways to use, which runway they will use (in the case of planes), where they will park, and when it is safe to cross runways. When a plane is ready to take off, it will be turned over to tower control. Conversely, after a plane has landed, it will depart the runway and be "handed over" from tower control to ground control.
99
+
100
+ Tower control is responsible for aircraft on the runway and in the controlled airspace immediately surrounding the airport. Tower controllers may use radar to locate an aircraft's position in 3D space, or they may rely on pilot position reports and visual observation. They coordinate the sequencing of aircraft in the traffic pattern and direct aircraft on how to safely join and leave the circuit. Aircraft which are only passing through the airspace must also contact tower control to be sure they remain clear of other traffic.
101
+
102
+ All airports can use a traffic pattern (often called a traffic circuit outside the US) to help ensure smooth traffic flow between departing and arriving aircraft. In modern commercial aviation, however, there is no technical need to fly this pattern when there is no queue, and slot times are planned so that landing queues are generally avoided. For instance, if an aircraft approaches runway 17 (which has a heading of approximately 170 degrees) from the north (flying a heading of roughly 180 degrees), it can land directly by turning just 10 degrees to join the final approach and following the glidepath, without orbiting the runway for visual alignment. For smaller piston-engined airplanes at smaller airfields without ILS equipment, however, the situation is quite different.
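+
+ The runway number in this example follows directly from its heading: runways are designated by their magnetic heading rounded to the nearest ten degrees, with the trailing zero dropped. A minimal Python sketch of that convention (the function name is illustrative, not from any aviation library):
+
+ def runway_designator(heading_degrees: float) -> str:
+     """Return the two-digit runway designator for a magnetic heading.
+
+     The heading is rounded to the nearest 10 degrees and the trailing
+     zero is dropped; headings near due north map to runway 36 rather
+     than runway 00.
+     """
+     number = round(heading_degrees / 10) % 36
+     if number == 0:
+         number = 36
+     return f"{number:02d}"
+
+ assert runway_designator(170) == "17"   # the runway 17 example above
+ assert runway_designator(350) == "35"   # the reciprocal end of that strip
+ assert runway_designator(3) == "36"     # near-north headings wrap to 36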
103
+
104
+ Generally, this pattern is a circuit consisting of five "legs" that form a rectangle (two legs and the runway form one side, with the remaining legs forming three more sides). Each leg is named (see diagram), and ATC directs pilots on how to join and leave the circuit. Traffic patterns are flown at one specific altitude, usually 800 or 1,000 ft (244 or 305 m) above ground level (AGL). Standard traffic patterns are left-handed, meaning all turns are made to the left. One of the main reasons for this is that pilots sit on the left side of the airplane, and a left-hand pattern improves their visibility of the airport and pattern. Right-handed patterns do exist, usually because of obstacles such as a mountain, or to reduce noise for local residents. The predetermined circuit helps traffic flow smoothly because all pilots know what to expect, and helps reduce the chance of a mid-air collision.
105
+
106
+ At controlled airports, a circuit can be in place but is not normally used. Rather, aircraft (usually only commercial aircraft on long routes) request approach clearance while they are still hours away from the airport; the destination airport can then plan a queue of arrivals, and planes will be guided into one queue per active runway for a "straight-in" approach. While this system keeps the airspace free and is simpler for pilots, it requires detailed knowledge of how aircraft are planning to use the airport ahead of time and is therefore only possible with large commercial airliners on pre-scheduled flights. The system has recently become so advanced that controllers can predict whether an aircraft will be delayed on landing before it even takes off; that aircraft can then be held on the ground, rather than wasting expensive fuel waiting in the air.
107
+
108
+ There are a number of aids available to pilots, both visual and electronic, though not at all airports. A visual approach slope indicator (VASI) helps pilots fly the approach for landing. Some airports are equipped with a VHF omnidirectional range (VOR) to help pilots find the direction to the airport. VORs are often accompanied by distance measuring equipment (DME) to determine the distance to the VOR. VORs are also located off airports, where they serve to provide airways for aircraft to navigate upon. In poor weather, pilots will use an instrument landing system (ILS) to find the runway and fly the correct approach, even if they cannot see the ground. The number of instrument approaches based on the use of the Global Positioning System (GPS) is rapidly increasing and may eventually become the primary means for instrument landings.
109
+
110
+ Larger airports sometimes offer precision approach radar (PAR), but these systems are more common at military air bases than civilian airports. The aircraft's horizontal and vertical movement is tracked via radar, and the controller tells the pilot the aircraft's position relative to the approach slope. Once the pilots can see the runway lights, they may continue with a visual landing.
111
+
112
+ Airport guidance signs provide direction and information to taxiing aircraft and airport vehicles. Smaller aerodromes may have few or no signs, relying instead on diagrams and charts.
113
+
114
+ Many airports have lighting that helps guide planes using the runways and taxiways at night or in rain or fog.
115
+
116
+ On runways, green lights indicate the beginning of the runway for landing, while red lights indicate the end of the runway. Runway edge lighting consists of white lights spaced out on both sides of the runway, indicating the edges. Some airports have more complicated lighting on the runways including lights that run down the centerline of the runway and lights that help indicate the approach (an approach lighting system, or ALS). Low-traffic airports may use pilot-controlled lighting to save electricity and staffing costs.
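+
+ Pilot-controlled lighting typically works by keying the aircraft's microphone a set number of times within about five seconds on a designated frequency; under the common US scheme, seven, five, or three clicks select high, medium, or low intensity. A toy Python sketch of that click protocol (details vary by installation, so treat the mapping as illustrative):
+
+ def lighting_intensity(clicks: int) -> str:
+     """Map microphone clicks (within ~5 seconds) to lighting intensity.
+
+     Follows the common US pilot-controlled lighting scheme:
+     7 clicks -> high, 5 -> medium, 3 -> low; other counts
+     leave the lights unchanged.
+     """
+     return {7: "high", 5: "medium", 3: "low"}.get(clicks, "no change")
+
+ for clicks in (7, 5, 3, 4):
+     print(clicks, "clicks ->", lighting_intensity(clicks))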
117
+
118
+ Along taxiways, blue lights indicate the taxiway's edge, and some airports have embedded green lights that indicate the centerline.
119
+
120
+ Weather observations at the airport are crucial to safe takeoffs and landings. In the US and Canada, the vast majority of airports, large and small, will have some form of automated airport weather station (an AWOS, ASOS, or AWSS), a human observer, or a combination of the two. These weather observations, predominantly in the METAR format, are available over the radio, through automatic terminal information service (ATIS), via the ATC or the flight service station.
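+
+ A METAR is a terse, fixed-vocabulary string, so its basic fields can be recovered with simple pattern matching. A simplified Python sketch follows; the sample report and the regular expressions are illustrative, and real METARs contain many optional groups this ignores:
+
+ import re
+
+ # Illustrative report: station, day/time, wind, visibility,
+ # cloud, temperature/dew point, altimeter setting.
+ sample = "KJFK 221851Z 18012KT 10SM FEW250 27/14 A3002"
+
+ wind = re.search(r"\b(\d{3})(\d{2,3})KT\b", sample)
+ temp = re.search(r"\b(M?\d{2})/(M?\d{2})\b", sample)
+
+ def metar_temp(group: str) -> int:
+     # METAR prefixes sub-zero temperatures with 'M' for minus.
+     return -int(group[1:]) if group.startswith("M") else int(group)
+
+ if wind:
+     print(f"wind {wind.group(1)} degrees at {int(wind.group(2))} kt")
+ if temp:
+     print(f"temperature {metar_temp(temp.group(1))} C, "
+           f"dew point {metar_temp(temp.group(2))} C")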
121
+
122
+ Planes take off and land into the wind to achieve maximum performance. Because pilots need instantaneous information during landing, a windsock is also kept in view of the runway. Aviation windsocks are made of lightweight material, withstand strong winds, and some are lit after dark or in foggy weather. Because the visibility of a single windsock is limited, multiple glow-orange windsocks are often placed on both sides of the runway.[27]
123
+
124
+ Most airports have groundcrew handling the loading and unloading of passengers, crew, baggage and other services.[citation needed] Some groundcrew are linked to specific airlines operating at the airport.
125
+
126
+ Among the vehicles that serve an airliner on the ground are:
127
+
128
+ The length of time an aircraft remains on the ground between consecutive flights is known as "turnaround time". Airlines pay great attention to minimizing turnaround times in an effort to keep aircraft use (flying time) high, with turnarounds scheduled as low as 25 minutes for narrow-body jets operated by low-cost carriers.
129
+
130
+ Like industrial equipment or facility management, airports require tailor-made maintenance management due to their complexity. With many tangible assets spread over a large area in different environments, airport operators must effectively monitor these assets and store spare parts to maintain them at an optimal level of service.[28]
131
+
132
+ To manage these airport assets, several solutions compete in the market: computerized maintenance management systems (CMMS) predominate, and mainly enable a company's maintenance activity to be monitored, planned, recorded and rationalized.[28]
133
+
134
+ Aviation safety is an important concern in the operation of an airport, and almost every airfield includes equipment and procedures for handling emergency situations. Airport crash tender crews are equipped for dealing with airfield accidents, crew and passenger extractions, and the hazards of highly flammable aviation fuel. The crews are also trained to deal with situations such as bomb threats, hijacking, and terrorist activities.
135
+
136
+ Hazards to aircraft include debris, nesting birds, and reduced friction levels due to environmental conditions such as ice, snow, or rain. Part of runway maintenance is airfield rubber removal which helps maintain friction levels. The fields must be kept clear of debris using cleaning equipment so that loose material does not become a projectile and enter an engine duct (see foreign object damage). In adverse weather conditions, ice and snow clearing equipment can be used to improve traction on the landing strip. For waiting aircraft, equipment is used to spray special deicing fluids on the wings.
137
+
138
+ Many airports are built near open fields or wetlands. These tend to attract bird populations, which can pose a hazard to aircraft in the form of bird strikes. Airport crews often need to discourage birds from taking up residence.
139
+
140
+ Some airports are located next to parks, golf courses, or other low-density uses of land. Other airports are located near densely populated urban or suburban areas.
141
+
142
+ An airport can have areas where collisions between aircraft on the ground tend to occur. Records are kept of any incursions where aircraft or vehicles are in an inappropriate location, allowing these "hot spots" to be identified. These locations then undergo special attention by transportation authorities (such as the FAA in the US) and airport administrators.
143
+
144
+ During the 1980s, a phenomenon known as microburst became a growing concern due to aircraft accidents caused by microburst wind shear, such as Delta Air Lines Flight 191. Microburst radar was developed as an aid to safety during landing, giving two to five minutes' warning to aircraft in the vicinity of the field of a microburst event.
145
+
146
+ Some airfields now have a special surface known as soft concrete at the end of the runway (stopway or blastpad) that behaves somewhat like styrofoam, bringing the plane to a relatively rapid halt as the material disintegrates. These surfaces are useful when the runway is located next to a body of water or other hazard, and prevent the planes from overrunning the end of the field.
147
+
148
+ Airports often have on-site firefighters to respond to emergencies. These use specialized vehicles, known as airport crash tenders.
149
+
150
+ Aircraft noise is a major cause of noise disturbance to residents living near airports. Sleep can be affected if the airports operate night and early morning flights. Aircraft noise occurs not only from take-offs and landings but also from ground operations, including maintenance and testing of aircraft. Noise can have other health effects as well. Other noise and environmental concerns include vehicle traffic causing noise and pollution on roads leading to the airport.[29]
151
+
152
+ The construction of new airports, or the addition of runways to existing airports, is often resisted by local residents because of the effect on the countryside, historical sites, and local flora and fauna. Due to the risk of collision between birds and aircraft, large airports undertake population control programs in which they frighten or shoot birds.[citation needed]
153
+
154
+ The construction of airports has been known to change local weather patterns. For example, because they often flatten out large areas, they can be susceptible to fog in areas where fog rarely forms. In addition, because they generally replace trees and grass with pavement, airports often change drainage patterns in agricultural areas, leading to more flooding, run-off and erosion in the surrounding land.[30][citation needed]
155
+
156
+ Some airport administrations prepare and publish annual environmental reports to show how they consider these environmental concerns in airport management and how they protect the environment from airport operations. These reports cover all environmental protection measures performed by the airport administration in terms of water, air, soil and noise pollution, resource conservation and protection of natural life around the airport.
157
+
158
+ A 2019 report from the Cooperative Research Programs of the US Transportation Research Board showed all airports have a role to play in advancing greenhouse gas (GHG) reduction initiatives. Small airports have demonstrated leadership by using their less complex organizational structure to implement newer technologies and to serve as a proving ground for their feasibility. Large airports have the economic stability and staff resources necessary to grow in-house expertise and fund comprehensive new programs.[31]
159
+
160
+ A growing number of airports are installing solar photovoltaic arrays to offset their electricity use.[32][33] The National Renewable Energy Lab has shown this can be done safely.[34]
161
+
162
+ The world's first airport to be fully powered by solar energy is located at Kochi, India. Another airport known for considering environmental concerns is Seymour Airport in the Galapagos Islands.
163
+
164
+ An airbase, sometimes referred to as an air station or airfield, provides basing and support of military aircraft. Some airbases, known as military airports, provide facilities similar to their civilian counterparts. For example, RAF Brize Norton in the UK has a terminal which caters to passengers for the Royal Air Force's scheduled flights to the Falkland Islands. Some airbases are co-located with civilian airports, sharing the same ATC facilities, runways, taxiways and emergency services, but with separate terminals, parking areas and hangars. Bardufoss Airport, Bardufoss Air Station in Norway and Pune Airport in India are examples of this.
165
+
166
+ An aircraft carrier is a warship that functions as a mobile airbase. Aircraft carriers allow a naval force to project air power without having to depend on local bases for land-based aircraft. After their development in World War I, aircraft carriers replaced the battleship as the centrepiece of a modern fleet during World War II.
167
+
168
+ Most airports in the United States are designated "private-use airports" meaning that, whether publicly- or privately-owned, the airport is not open or available for use by the public (although use of the airport may be made available by invitation of the owner or manager).
169
+
170
+ Airports are uniquely represented by their IATA airport code and ICAO airport code.
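+
+ The two code systems have different shapes: IATA codes are three letters (e.g. LHR), while ICAO codes are four letters whose leading characters encode a region and country (e.g. EGLL). A short Python sketch of a purely structural check; it validates format only, not whether a code has actually been assigned to an airport:
+
+ import re
+
+ IATA_RE = re.compile(r"^[A-Z]{3}$")   # e.g. LHR, JFK
+ ICAO_RE = re.compile(r"^[A-Z]{4}$")   # e.g. EGLL, KJFK
+
+ def classify_airport_code(code: str) -> str:
+     """Classify a string by airport-code shape: IATA, ICAO, or neither."""
+     if IATA_RE.match(code):
+         return "IATA"
+     if ICAO_RE.match(code):
+         return "ICAO"
+     return "neither"
+
+ for code in ("JFK", "KJFK", "jfk", "HEATHROW"):
+     print(code, "->", classify_airport_code(code))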
171
+
172
+ Most airport names include the location. Many airport names honour a public figure, commonly a politician (e.g., Charles de Gaulle Airport, George Bush Intercontinental Airport, O.R. Tambo International Airport), a monarch (e.g. Chhatrapati Shivaji International Airport, King Shaka International Airport), a cultural leader (e.g. Liverpool John Lennon Airport, Leonardo da Vinci-Fiumicino Airport, Louis Armstrong New Orleans International Airport) or a prominent figure in aviation history of the region (e.g. Sydney Kingsford Smith Airport), sometimes even famous writers (e.g. Allama Iqbal International Airport) and explorers (e.g. Venice Marco Polo Airport).
173
+
174
+ Some airports have unofficial names, possibly so widely circulated that their official names are little used or even known.[citation needed]
175
+
176
+ Some airport names include the word "International" to indicate their ability to handle international air traffic. This includes some airports that do not have scheduled international airline services (e.g. Port Elizabeth International Airport).
177
+
178
+ The earliest aircraft takeoff and landing sites were grassy fields.[35] The plane could approach at any angle that provided a favorable wind direction. A slight improvement was the dirt-only field, which eliminated the drag from grass. However, these functioned well only in dry conditions. Later, concrete surfaces would allow landings regardless of meteorological conditions.
179
+
180
+ The title of "world's oldest airport" is disputed. College Park Airport in Maryland, US, established in 1909 by Wilbur Wright, is generally agreed to be the world's oldest continuously operating airfield,[36] although it serves only general aviation traffic.
181
+
182
+ Beijing Nanyuan Airport in China, which was built to accommodate planes in 1904, and airships in 1907, opened in 1910.[37] It was in operation until September 2019. Pearson Field Airport in Vancouver, Washington, United States, was built to accommodate planes in 1905 and airships in 1911, and is still in use as of January 2020.[citation needed]
183
+
184
+ Hamburg Airport opened in January 1911, making it the oldest commercial airport in the world which is still in operation. Bremen Airport opened in 1913 and remains in use, although it served as an American military field between 1945 and 1949. Amsterdam Airport Schiphol opened on September 16, 1916, as a military airfield, but has accepted civil aircraft only since December 17, 1920, allowing Sydney Airport—which started operations in January 1920—to claim to be one of the world's oldest continuously operating commercial airports.[38] Minneapolis-Saint Paul International Airport in the US opened in 1920 and has been in continuous commercial service since. It serves about 35,000,000 passengers each year and continues to expand, recently opening a new 11,000-foot (3,355 m) runway. Of the airports constructed during this early period in aviation, it is one of the largest and busiest that is still currently operating. Rome Ciampino Airport, opened 1916, is also a contender, as well as the Don Mueang International Airport near Bangkok, Thailand, which opened in 1914.
185
+ Increased aircraft traffic during World War I led to the construction of landing fields. Aircraft had to approach these from certain directions and this led to the development of aids for directing the approach and landing slope.
186
+
187
+ Following the war, some of these military airfields added civil facilities for handling passenger traffic. One of the earliest such fields was Paris – Le Bourget Airport at Le Bourget, near Paris. The first airport to operate scheduled international commercial services was Hounslow Heath Aerodrome in August 1919, but it was closed and supplanted by Croydon Airport in March 1920.[39] In 1922, the first permanent airport and commercial terminal solely for commercial aviation was opened at Flughafen Devau near what was then Königsberg, East Prussia. The airports of this era used a paved "apron", which permitted night flying as well as landing heavier aircraft.
188
+
189
+ Airport lighting was first used during the latter part of the 1920s; in the 1930s, approach lighting came into use. These lights indicated the proper direction and angle of descent. The colours and flash intervals of these lights became standardized under the International Civil Aviation Organization (ICAO). In the 1940s, the slope-line approach system was introduced. This consisted of two rows of lights that formed a funnel indicating an aircraft's position on the glideslope. Additional lights indicated incorrect altitude and direction.
190
+
191
+ After World War II, airport design became more sophisticated. Passenger buildings were grouped together in an island arrangement, with runways arranged in groups about the terminal. This arrangement permitted expansion of the facilities, but it also meant that passengers had to travel further to reach their plane.
192
+
193
+ An improvement in the landing field was the introduction of grooves in the concrete surface. These run perpendicular to the direction of the landing aircraft and serve to draw off excess rainwater that could build up in front of the plane's wheels.
194
+
195
+ Airport construction boomed during the 1960s with the increase in jet aircraft traffic. Runways were extended out to 3,000 m (9,800 ft). The fields were constructed out of reinforced concrete using a slip-form machine that produces a continuous slab with no disruptions along the length. The early 1960s also saw the introduction of jet bridge systems to modern airport terminals, an innovation which eliminated outdoor passenger boarding. These systems became commonplace in the United States by the 1970s.[citation needed]
196
+
197
+ The malicious use of UAVs has led to the deployment of counter-unmanned air system (C-UAS) technologies, such as the Aaronia AARTOS, which have been installed at major international airports.[40][41]
198
+
199
+ Airports have played major roles in films and television programs due to their very nature as a transport and international hub, and sometimes because of distinctive architectural features of particular airports. One such example is The Terminal, a film about a man who becomes permanently grounded in an airport terminal and must survive only on the food and shelter provided by the airport. They are also one of the major elements in movies such as The V.I.P.s, Speed, Airplane!, Airport (1970), Die Hard 2, Soul Plane, Jackie Brown, Get Shorty, Home Alone, Liar Liar, Passenger 57, Final Destination (2000), Unaccompanied Minors, Catch Me If You Can, Rendition and The Langoliers. They have also played important parts in television series like Lost, The Amazing Race, and America's Next Top Model (Cycle 10), which have significant parts of their stories set within airports. In other programmes and films, airports are merely indicative of journeys, e.g. Good Will Hunting.
200
+
201
+ Several computer simulation games put the player in charge of an airport. These include the Airport Tycoon series, SimAirport and Airport CEO.
202
+
203
+ Each national aviation authority has a source of information about airports in their country. This will contain information on airport elevation, airport lighting, runway information, communications facilities and frequencies, hours of operation, nearby NAVAIDs and contact information where prior arrangement for landing is necessary.
204
+
205
+ Infraero is responsible for the airports in Brazil.
206
+
207
+ Lists:
en/590.html.txt ADDED
@@ -0,0 +1,71 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+ A boat is a watercraft of a large range of types and sizes, but generally smaller than a ship, which is distinguished by its larger size, shape, cargo or passenger capacity, or its ability to carry boats.
4
+
5
+ Small boats are typically found on inland waterways such as rivers and lakes, or in protected coastal areas. However, some boats, such as the whaleboat, were intended for use in an offshore environment. In modern naval terms, a boat is a vessel small enough to be carried aboard a ship. Anomalous definitions exist, as lake freighters 1,000 feet (300 m) long on the Great Lakes are called "boats".
6
+
7
+ Boats vary in proportion and construction methods with their intended purpose, available materials, or local traditions. Canoes have been used since prehistoric times and remain in use throughout the world for transportation, fishing, and sport. Fishing boats vary widely in style partly to match local conditions. Pleasure craft used in recreational boating include ski boats, pontoon boats, and sailboats. House boats may be used for vacationing or long-term residence. Lighters are used to convey cargo to and from large ships unable to get close to shore. Lifeboats have rescue and safety functions.
8
+
9
+ Boats can be propelled by manpower (e.g. rowboats and paddle boats), wind (e.g. sailboats), and motor (including gasoline, diesel, and electric).
10
+
11
+ Boats have served as transportation since the earliest times.[1] Circumstantial evidence, such as the early settlement of Australia over 40,000 years ago, findings in Crete dated 130,000 years ago,[2] and in Flores dated to 900,000 years ago,[3] suggest that boats have been used since prehistoric times. The earliest boats are thought to have been dugouts,[4] and the oldest boats found by archaeological excavation date from around 7,000–10,000 years ago. The oldest recovered boat in the world, the Pesse canoe, found in the Netherlands, is a dugout made from the hollowed tree trunk of a Pinus sylvestris that was constructed somewhere between 8200 and 7600 BC. This canoe is exhibited in the Drents Museum in Assen, Netherlands.[5][6] Other very old dugout boats have also been recovered.[7][8][9]
12
+ Rafts have operated for at least 8,000 years.[10]
13
+ A 7,000-year-old seagoing reed boat has been found in Kuwait.[11]
14
+ Boats were used between 4000 and 3000 BC in Sumer,[1] ancient Egypt[12] and in the Indian Ocean.[1]
15
+
16
+ Boats played an important role in the commerce between the Indus Valley Civilization and Mesopotamia.[13] Evidence of varying models of boats has also been discovered at various Indus Valley archaeological sites.[14][15]
17
+ Uru craft originate in Beypore, a village in south Calicut, Kerala, in southwestern India. This type of mammoth wooden ship was constructed[when?] solely of teak, with a transport capacity of 400 tonnes. The ancient Arabs and Greeks used such boats as trading vessels.[16]
18
+
19
+ The historians Herodotus, Pliny the Elder and Strabo record the use of boats for commerce, travel, and military purposes.[14]
20
+
21
+ Boats can be categorized into three main types:
22
+
23
+ The hull is the main, and in some cases only, structural component of a boat. It provides both capacity and buoyancy. The keel is a boat's "backbone", a lengthwise structural member to which the perpendicular frames are fixed. On most boats a deck covers the hull, in part or whole. While a ship often has several decks, a boat is unlikely to have more than one. Above the deck are often lifelines connected to stanchions, bulwarks perhaps topped by gunnels, or some combination of the two. A cabin may protrude above the deck forward, aft, along the centerline, or covering much of the length of the boat. Vertical structures dividing the internal spaces are known as bulkheads.
24
+
25
+ The forward end of a boat is called the bow, the aft end the stern. Facing forward the right side is referred to as starboard and the left side as port.
26
+
27
+ Until the mid-19th century most boats were made of natural materials, primarily wood, although reed, bark and animal skins were also used. Early boats include the bound-reed style of boat seen in Ancient Egypt, the birch bark canoe, the animal hide-covered kayak[17] and coracle and the dugout canoe made from a single log.
28
+
29
+ By the mid-19th century, many boats had been built with iron or steel frames but still planked in wood. In 1855 ferro-cement boat construction was patented by the French, who coined the name "ferciment". This is a system by which a steel or iron wire framework is built in the shape of a boat's hull and covered over with cement. Reinforced with bulkheads and other internal structure it is strong but heavy, easily repaired, and, if sealed properly, will not leak or corrode.[18]
30
+
31
+ As the forests of Britain and Europe continued to be over-harvested to supply the keels of larger wooden boats, and the Bessemer process (patented in 1855) cheapened the cost of steel, steel ships and boats began to be more common. By the 1930s boats built entirely of steel from frames to plating were seen replacing wooden boats in many industrial uses and fishing fleets. Private recreational boats of steel remain uncommon. In 1895 WH Mullins produced steel boats of galvanized iron and by 1930 became the world's largest producer of pleasure boats.
32
+
33
+ Mullins also offered boats in aluminum from 1895 through 1899 and once again in the 1920s,[19][1] but it wasn't until the mid-20th century that aluminum gained widespread popularity. Though much more expensive than steel, aluminum alloys exist that do not corrode in salt water, allowing a similar load-carrying capacity to steel at much less weight.
34
+
35
+ Around the mid-1960s, boats made of fiberglass (aka "glassfibre") became popular, especially for recreational boats. Fiberglass is also known as "GRP" (glass-reinforced plastic) in the UK, and "FRP" (for fiber-reinforced plastic) in the US. Fiberglass boats are strong, and do not rust, corrode, or rot. Instead, they are susceptible to structural degradation from sunlight and extremes in temperature over their lifespan. Fiberglass structures can be made stiffer with sandwich panels, where the fiberglass encloses a lightweight core such as balsa[20] or foam.
36
+
37
+ Cold moulding is a modern construction method, using wood as the structural component. In cold moulding very thin strips of wood are layered over a form. Each layer is coated with resin, followed by another directionally alternating layer laid on top. Subsequent layers may be stapled or otherwise mechanically fastened to the previous, or weighted or vacuum bagged to provide compression and stabilization until the resin sets.
38
+
39
+ The most common means of boat propulsion are as follows:
40
+
41
+ A boat displaces its weight in water, regardless of whether it is made of wood, steel, fiberglass, or even concrete. If weight is added to the boat, the volume of the hull drawn below the waterline will increase to keep the balance above and below the surface equal. Boats have a natural or designed level of buoyancy. Exceeding it will cause the boat first to ride lower in the water, second to take on water more readily than when properly loaded, and ultimately, if overloaded by any combination of structure, cargo, and water, to sink.
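+
+ Archimedes' principle makes the first part of this quantitative: a floating hull must displace water whose weight equals the boat's total weight, so added load translates directly into extra submerged volume. A small Python sketch under illustrative assumptions (an 800 kg boat in fresh water at 1,000 kg/m³):
+
+ FRESHWATER_DENSITY = 1000.0  # kg per cubic metre (illustrative value)
+
+ def submerged_volume(total_mass_kg: float,
+                      water_density: float = FRESHWATER_DENSITY) -> float:
+     """Volume of water (m^3) a floating boat displaces: mass / density."""
+     return total_mass_kg / water_density
+
+ empty = submerged_volume(800.0)           # unloaded boat
+ loaded = submerged_volume(800.0 + 200.0)  # with 200 kg of cargo aboard
+ print(f"extra volume below the waterline: {loaded - empty:.2f} m^3")
+ # -> 0.20 m^3: the hull rides lower until it displaces the added weight.
+
+ The same relation underlies the Plimsoll line discussed next: less dense brackish water must be displaced in greater volume to support the same weight, so the hull rides lower.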
42
+
43
+ As commercial vessels must be correctly loaded to be safe, and as the sea becomes less buoyant in brackish areas such as the Baltic, the Plimsoll line was introduced to prevent overloading.
44
+
45
+ Since 1998, all new leisure boats and barges built in Europe between 2.5 m and 24 m must comply with the EU's Recreational Craft Directive (RCD). The Directive establishes four categories defining the allowable wind and wave conditions for vessels in each class:[21]
46
+
47
+ A boat on the Ganges River
48
+
49
+ Babur crossing river Son; folio from an illustrated manuscript of ‘Babur-Namah’, Mughal, Akbar Period, AD 1598
50
+
51
+ A tugboat is used for towing or pushing another larger ship
52
+
53
+ A ship's derelict lifeboat, built of steel, rusting away in the wetlands of Folly Island, South Carolina, United States
54
+
55
+ A boat in an Egyptian tomb, painted around 1450 BC
56
+
57
+ Dugout boats in the courtyard of the Old Military Hospital in the Historic Center of Quito
58
+
59
+ Ming Dynasty Chinese painting of the Wanli Emperor enjoying a boat ride on a river with an entourage of guards and courtiers
60
+
61
+ World's longest dragon boat on display in Phnom Penh, Cambodia
62
+
63
+ At 17 metres long, the Severn-class lifeboats are the largest operational lifeboats in the UK
64
+
65
+ Aluminum flat-bottomed boats ashore for storage
66
+
67
+ A boat shaped like a sauce bottle that was sailed across the Atlantic Ocean by Tom McClean
68
+
69
+ Anchored boats in Portovenere, Italy
70
+
71
+ A boat in Utrecht, Netherlands
en/5900.html.txt ADDED
@@ -0,0 +1,50 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Death Valley is a desert valley in Eastern California, in the northern Mojave Desert, bordering the Great Basin Desert. It is one of the hottest places on Earth, along with deserts in the Middle East and the Sahara.[3]
2
+
3
+ Death Valley's Badwater Basin is the point of lowest elevation in North America, at 282 feet (86 m) below sea level.[1] It is 84.6 miles (136.2 km) east-southeast of Mount Whitney, the highest point in the contiguous United States, with an elevation of 14,505 feet (4,421 m).[4] On the afternoon of July 10, 1913, the United States Weather Bureau recorded a high temperature of 134 °F (56.7 °C) at Furnace Creek in Death Valley,[5] which stands as the highest ambient air temperature ever recorded at the surface of the Earth.[6]
4
+
5
+ Lying mostly in Inyo County, California, near the border of California and Nevada, in the Great Basin, east of the Sierra Nevada mountains, Death Valley constitutes much of Death Valley National Park and is the principal feature of the Mojave and Colorado Deserts Biosphere Reserve. It runs from north to south between the Amargosa Range on the east and the Panamint Range on the west; the Grapevine Mountains and the Owlshead Mountains form its northern and southern boundaries, respectively.[7] It has an area of about 3,000 sq mi (7,800 km2).[8] The highest point in Death Valley is Telescope Peak, in the Panamint Range, which has an elevation of 11,043 feet (3,366 m).[9]
6
+
7
+ Death Valley is a graben—a downdropped block of land between two mountain ranges.[10] It lies at the southern end of a geological trough, Walker Lane, which runs north to Oregon. The valley is bisected by a right lateral strike slip fault system, comprising the Death Valley Fault and the Furnace Creek Fault. The eastern end of the left lateral Garlock Fault intersects the Death Valley Fault. Furnace Creek and the Amargosa River flow through part of the valley and eventually disappear into the sands of the valley floor.
8
+
9
+ Death Valley also contains salt pans. According to current geological consensus, at various times during the middle of the Pleistocene era, which ended roughly 10,000–12,000 years ago, an inland lake, Lake Manly, formed in Death Valley. The lake was nearly 100 miles (160 km) long and 600 feet (180 m) deep, the end-basin in a chain of lakes that began with Mono Lake, in the north, and continued through basins down the Owens River Valley, through Searles and China Lakes and the Panamint Valley, to the immediate west.[11]
10
+
11
+ As the area turned to desert, the water evaporated, leaving an abundance of evaporitic salts, such as common sodium salts and borax, which were later exploited during the modern history of the region, primarily 1883 to 1907.[12]
12
+
13
+ Death Valley has a subtropical, hot desert climate (Köppen: BWh), with long, extremely hot summers; short, mild winters; and little rainfall.
14
+
15
+ The valley is extremely dry because it lies in the rain shadow of four major mountain ranges (including the Sierra Nevada and Panamint Range). Moisture moving inland from the Pacific Ocean must pass eastward over the mountains to reach Death Valley; as air masses are forced upward by each range, they cool and moisture condenses, to fall as rain or snow on the western slopes. When the air masses reach Death Valley, most of the moisture has already been lost and there is little left to fall as precipitation.[13]
16
+
17
+ The extreme heat of Death Valley is attributable to a confluence of geographic and topographic factors. Scientists have identified a number of key contributors:[13]
18
+
19
+ Severe heat and dryness contribute to perpetual drought-like conditions in Death Valley and prevent most cloud formations from passing through the confines of the valley, where precipitation often takes the form of virga.[16]
20
+
21
+ The depth and shape of Death Valley strongly influence its climate. The valley is a long, narrow basin that descends below sea level and is walled by high, steep mountain ranges. The clear, dry air and sparse plant cover allow sunlight to heat the desert surface. Summer nights provide little relief: overnight lows may dip just into the 82 to 98 °F (28 to 37 °C) range. Moving masses of super-heated air blow through the valley, creating extremely high temperatures.[17]
22
+
23
+ The hottest air temperature ever recorded in Death Valley was 134 °F (56.7 °C), on July 10, 1913, at Greenland Ranch (now Furnace Creek),[6] which is the highest atmospheric temperature ever recorded on earth.[5] (A report of a temperature of 58 °C (136.4 °F) in Libya in 1922 was later determined to be inaccurate.)[6] During the heat wave that peaked with that record, five consecutive days reached 129 °F (54 °C) or higher. Some meteorologists dispute the accuracy of the 1913 temperature measurement,[18] and it may ultimately be negated. On June 30, 2013, a verified temperature of 129.2 °F (54.0 °C) was recorded and is tied with Mitribah, Kuwait, for the hottest reliably measured air temperature ever recorded on earth.[19] The valley's lowest temperature, recorded at Greenland Ranch in January 1913, was 15 °F (−9 °C).[20]
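+
+ The paired Fahrenheit and Celsius figures quoted throughout follow the standard conversion C = (F − 32) × 5/9; a quick Python check of the record values (illustrative only):
+
+ def f_to_c(fahrenheit: float) -> float:
+     """Convert degrees Fahrenheit to degrees Celsius."""
+     return (fahrenheit - 32.0) * 5.0 / 9.0
+
+ print(round(f_to_c(134.0), 1))   # 56.7, the 1913 record high
+ print(round(f_to_c(129.2), 1))   # 54.0, the verified 2013 reading
+ print(round(f_to_c(15.0), 1))    # -9.4, matching the -9 C record low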
24
+
25
+ The highest surface temperature ever recorded in Death Valley was 201.0 °F (93.9 °C), on July 15, 1972, at Furnace Creek, which is the highest ground surface temperature ever recorded on earth, as well as the only recorded surface temperature of above 200 °F (93.3 °C).[21]
26
+
27
+ The greatest number of consecutive days with a maximum temperature of at least 100 °F (38 °C) was 154, in the summer of 2001. The summer of 1996 had 40 days over 120 °F (49 °C), and 105 days over 110 °F (43 °C). The summer of 1917 had 52 days when the temperature reached 120 °F (49 °C) or above, 43 of them consecutive.
28
+
29
+ The highest overnight or low temperature recorded in Death Valley is 110 °F (43 °C), recorded on July 5, 1918.[22] However, this value is disputed; a record high low of 107 °F (42 °C) on July 12, 2012, is considered reliable.[23] This is one of the highest values ever recorded.[24] Also on July 12, 2012, the mean 24-hour temperature recorded at Death Valley was 117.5 °F (47.5 °C), which makes it the world's warmest 24-hour temperature on record.[25]
30
+
31
+ Four major mountain ranges lie between Death Valley and the ocean, each one adding to an increasingly drier rain shadow effect, and in 1929, 1953, and 1989, no rain was recorded for the whole year.[17] The period from 1931 to 1934 was the driest stretch on record with only 0.64 inches (16 mm) of rain over a 40-month period.[16]
32
+ The average annual precipitation in Death Valley is 2.36 inches (60 mm), while the Greenland Ranch station averaged 1.58 in (40 mm).[26] The wettest month on record is January 1995, when 2.59 inches (66 mm) fell on Death Valley.[16] The wettest period on record was mid-2004 to mid-2005, in which nearly 6 inches (150 mm) of rain fell in total, leading to ephemeral lakes in the valley and the region and tremendous wildflower blooms.[27] Snow with accumulation has only been recorded in January 1922, while scattered flakes have been recorded on other occasions.
33
+
34
+ In 2005, Death Valley received four times its average annual rainfall of 1.5 inches (38 mm). As it has done before for hundreds of years, the lowest spot in the valley filled with a wide, shallow lake, but the extreme heat and aridity immediately began evaporating the ephemeral lake.
35
+
36
+ The pair of images from NASA's Landsat 5 satellite documents the short history of Death Valley's Lake Badwater: formed in February 2005 and evaporated by February 2007. In 2005, a large pool of greenish water stretched most of the way across the valley floor. By May 2005 the valley floor had resumed its more familiar role as Badwater Basin, a salt-coated salt flat. In time, this freshly dissolved and recrystallized salt will darken.[31]
37
+
38
+ The western margin of Death Valley is traced by alluvial fans. During flash floods, rainfall from the steep mountains to the west pours through narrow canyons, picking up everything from fine clay to large rocks. When these torrents reach the mouths of the canyons, they widen and slow, branching out into distributary channels. The paler the fans, the younger they are.
39
+
40
+ In spite of the overwhelming heat and sparse rainfall, Death Valley exhibits considerable biodiversity. Wildflowers, watered by snowmelt, carpet the desert floor each spring, continuing into June.[27] Bighorn sheep, red-tailed hawks, and wild burros may be seen. Death Valley has over 600 springs and ponds. Salt Creek, a mile-long shallow depression in the center of the valley, supports Death Valley Pupfish.[32] These isolated pupfish populations are remnants of the wetter Pleistocene climate.[32]
41
+
42
+ Darwin Falls, on the western edge of Death Valley Monument, falls 100 feet (30 m) into a large pond surrounded by willows and cottonwood trees. Over 80 species of birds have been spotted around the pond.[33]
43
+
44
+ Death Valley is home to the Timbisha tribe of Native Americans, formerly known as the Panamint Shoshone, who have inhabited the valley for at least the past millennium. The Timbisha name for the valley, tümpisa, means "rock paint" and refers to the red ochre paint that can be made from a type of clay found in the valley. Some families still live in the valley at Furnace Creek. Another village was in Grapevine Canyon near the present site of Scotty's Castle. It was called in the Timbisha language maahunu, whose meaning is uncertain, although it is known that hunu means "canyon".
45
+
46
+ The valley received its English name in 1849 during the California Gold Rush. It was called Death Valley by prospectors[34] and others who sought to cross the valley on their way to the gold fields, after 13 pioneers perished from one early expedition of wagon trains.[35][36] During the 1850s, gold and silver were extracted in the valley. In the 1880s, borax was discovered and extracted by mule-drawn wagons.
47
+
48
+ Death Valley National Monument was proclaimed on February 11, 1933, by President Herbert Hoover, placing the area under federal protection. In 1994, the monument was redesignated as Death Valley National Park, as well as being substantially expanded to include Saline and Eureka Valleys.
49
+
50
+ A number of movies have been filmed in Death Valley, such as the following:[37]
en/5901.html.txt ADDED
@@ -0,0 +1,50 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Death Valley is a desert valley in Eastern California, in the northern Mojave Desert, bordering the Great Basin Desert. It is one of the hottest places on Earth, along with deserts in the Middle East and the Sahara.[3]
2
+
3
+ Death Valley's Badwater Basin is the point of lowest elevation in North America, at 282 feet (86 m) below sea level.[1] It is 84.6 miles (136.2 km) east-southeast of Mount Whitney, the highest point in the contiguous United States, with an elevation of 14,505 feet (4,421 m).[4] On the afternoon of July 10, 1913, the United States Weather Bureau recorded a high temperature of 134 °F (56.7 °C) at Furnace Creek in Death Valley,[5] which stands as the highest ambient air temperature ever recorded at the surface of the Earth.[6]
4
+
5
+ Lying mostly in Inyo County, California, near the border of California and Nevada, in the Great Basin, east of the Sierra Nevada mountains, Death Valley constitutes much of Death Valley National Park and is the principal feature of the Mojave and Colorado Deserts Biosphere Reserve. It runs from north to south between the Amargosa Range on the east and the Panamint Range on the west; the Grapevine Mountains and the Owlshead Mountains form its northern and southern boundaries, respectively.[7] It has an area of about 3,000 sq mi (7,800 km2).[8] The highest point in Death Valley is Telescope Peak, in the Panamint Range, which has an elevation of 11,043 feet (3,366 m).[9]
6
+
7
+ Death Valley is a graben—a downdropped block of land between two mountain ranges.[10] It lies at the southern end of a geological trough, Walker Lane, which runs north to Oregon. The valley is bisected by a right lateral strike slip fault system, comprising the Death Valley Fault and the Furnace Creek Fault. The eastern end of the left lateral Garlock Fault intersects the Death Valley Fault. Furnace Creek and the Amargosa River flow through part of the valley and eventually disappear into the sands of the valley floor.
8
+
9
+ Death Valley also contains salt pans. According to current geological consensus, at various times during the middle of the Pleistocene era, which ended roughly 10,000–12,000 years ago, an inland lake, Lake Manly, formed in Death Valley. The lake was nearly 100 miles (160 km) long and 600 feet (180 m) deep, the end-basin in a chain of lakes that began with Mono Lake, in the north, and continued through basins down the Owens River Valley, through Searles and China Lakes and the Panamint Valley, to the immediate west.[11]
10
+
11
+ As the area turned to desert, the water evaporated, leaving an abundance of evaporitic salts, such as common sodium salts and borax, which were later exploited during the modern history of the region, primarily 1883 to 1907.[12]
12
+
13
+ Death Valley has a subtropical, hot desert climate (Köppen: BWh), with long, extremely hot summers; short, mild winters; and little rainfall.
14
+
15
+ The valley is extremely dry because it lies in the rain shadow of four major mountain ranges (including the Sierra Nevada and Panamint Range). Moisture moving inland from the Pacific Ocean must pass eastward over the mountains to reach Death Valley; as air masses are forced upward by each range, they cool and moisture condenses, to fall as rain or snow on the western slopes. When the air masses reach Death Valley, most of the moisture has already been lost and there is little left to fall as precipitation.[13]
16
+
17
+ The extreme heat of Death Valley is attributable to a confluence of geographic and topographic factors. Scientists have identified a number of key contributors:[13]
18
+
19
+ Severe heat and dryness contribute to perpetual drought-like conditions in Death Valley and prevent most cloud formations from passing through the confines of the valley, where precipitation often takes the form of virga.[16]
20
+
21
+ The depth and shape of Death Valley strongly influence its climate. The valley is a long, narrow basin that descends below sea level and is walled by high, steep mountain ranges. The clear, dry air and sparse plant cover allow sunlight to heat the desert surface. Summer nights provide little relief: overnight lows may dip just into the 82 to 98 °F (28 to 37 °C) range. Moving masses of super-heated air blow through the valley, creating extremely high temperatures.[17]
22
+
23
+ The hottest air temperature ever recorded in Death Valley was 134 °F (56.7 °C), on July 10, 1913, at Greenland Ranch (now Furnace Creek),[6] which is the highest atmospheric temperature ever recorded on earth.[5] (A report of a temperature of 58 °C (136.4 °F) in Libya in 1922 was later determined to be inaccurate.)[6] During the heat wave that peaked with that record, five consecutive days reached 129 °F (54 °C) or higher. Some meteorologists dispute the accuracy of the 1913 temperature measurement,[18] and it may ultimately be negated. On June 30, 2013, a verified temperature of 129.2 °F (54.0 °C) was recorded and is tied with Mitribah, Kuwait, for the hottest reliably measured air temperature ever recorded on earth.[19] The valley's lowest temperature, recorded at Greenland Ranch in January 1913, was 15 °F (−9 °C).[20]
24
+
25
+ The highest surface temperature ever recorded in Death Valley was 201.0 °F (93.9 °C), on July 15, 1972, at Furnace Creek, which is the highest ground surface temperature ever recorded on earth, as well as the only recorded surface temperature of above 200 °F (93.3 °C).[21]
26
+
27
+ The greatest number of consecutive days with a maximum temperature of at least 100 °F (38 °C) was 154, in the summer of 2001. The summer of 1996 had 40 days over 120 °F (49 °C), and 105 days over 110 °F (43 °C). The summer of 1917 had 52 days when the temperature reached 120 °F (49 °C) or above, 43 of them consecutive.
28
+
29
+ The highest overnight or low temperature recorded in Death Valley is 110 °F (43 °C), recorded on July 5, 1918.[22] However, this value is disputed; a record high low of 107 °F (42 °C) on July 12, 2012, is considered reliable.[23] This is one of the highest values ever recorded.[24] Also on July 12, 2012, the mean 24-hour temperature recorded at Death Valley was 117.5 °F (47.5 °C), which makes it the world's warmest 24-hour temperature on record.[25]
30
+
31
+ Four major mountain ranges lie between Death Valley and the ocean, each one adding to an increasingly drier rain shadow effect, and in 1929, 1953, and 1989, no rain was recorded for the whole year.[17] The period from 1931 to 1934 was the driest stretch on record with only 0.64 inches (16 mm) of rain over a 40-month period.[16]
32
+ The average annual precipitation in Death Valley is 2.36 inches (60 mm), while the Greenland Ranch station averaged 1.58 in (40 mm).[26] The wettest month on record is January 1995, when 2.59 inches (66 mm) fell on Death Valley.[16] The wettest period on record was mid-2004 to mid-2005, in which nearly 6 inches (150 mm) of rain fell in total, leading to ephemeral lakes in the valley and the region and tremendous wildflower blooms.[27] Snow with accumulation has only been recorded in January 1922, while scattered flakes have been recorded on other occasions.
33
+
34
+ In 2005, Death Valley received four times its average annual rainfall of 1.5 inches (38 mm). As it has done before for hundreds of years, the lowest spot in the valley filled with a wide, shallow lake, but the extreme heat and aridity immediately began evaporating the ephemeral lake.
35
+
36
+ The pair of images from NASA's Landsat 5 satellite documents the short history of Death Valley's Lake Badwater: formed in February 2005 and evaporated by February 2007. In 2005, a large pool of greenish water stretched most of the way across the valley floor. By May 2005 the valley floor had resumed its more familiar role as Badwater Basin, a salt-coated salt flat. In time, this freshly dissolved and recrystallized salt will darken.[31]
37
+
38
+ The western margin of Death Valley is traced by alluvial fans. During flash floods, rainfall from the steep mountains to the west pours through narrow canyons, picking up everything from fine clay to large rocks. When these torrents reach the mouths of the canyons, they widen and slow, branching out into distributary channels. The paler the fans, the younger they are.
+
+ In spite of the overwhelming heat and sparse rainfall, Death Valley exhibits considerable biodiversity. Wildflowers, watered by snowmelt, carpet the desert floor each spring, continuing into June.[27] Bighorn sheep, red-tailed hawks, and wild burros may be seen. Death Valley has over 600 springs and ponds. Salt Creek, a mile-long shallow depression in the center of the valley, supports the Death Valley pupfish.[32] These isolated pupfish populations are remnants of the wetter Pleistocene climate.[32]
+
+ Darwin Falls, on the western edge of Death Valley Monument, falls 100 feet (30 m) into a large pond surrounded by willows and cottonwood trees. Over 80 species of birds have been spotted around the pond.[33]
+
+ Death Valley is home to the Timbisha tribe of Native Americans, formerly known as the Panamint Shoshone, who have inhabited the valley for at least the past millennium. The Timbisha name for the valley, tümpisa, means "rock paint" and refers to the red ochre paint that can be made from a type of clay found in the valley. Some families still live in the valley at Furnace Creek. Another village was in Grapevine Canyon near the present site of Scotty's Castle. In the Timbisha language it was called maahunu, whose meaning is uncertain, although it is known that hunu means "canyon".
+
+ The valley received its English name in 1849 during the California Gold Rush. It was called Death Valley by prospectors[34] and others who sought to cross the valley on their way to the gold fields, after 13 pioneers from an early wagon-train expedition perished there.[35][36] During the 1850s, gold and silver were extracted in the valley. In the 1880s, borax was discovered and extracted by mule-drawn wagons.
+
+ Death Valley National Monument was proclaimed on February 11, 1933, by President Herbert Hoover, placing the area under federal protection. In 1994, the monument was redesignated as Death Valley National Park and substantially expanded to include the Saline and Eureka Valleys.
+
+ A number of movies have been filmed in Death Valley, such as the following:[37]
en/5902.html.txt ADDED
@@ -0,0 +1,161 @@
+ The Nile (Arabic: النيل‎, romanized: an-Nīl, Arabic pronunciation: [an'niːl], Bohairic Coptic: ⲫⲓⲁⲣⲟ pronounced [pʰjaˈro][4], Nobiin: Áman Dawū[5]) is a major north-flowing river in northeastern Africa. It is the longest river in Africa and the disputed longest river in the world,[6][7] with the Brazilian government maintaining that the Amazon River is longer.[8][9] The Nile is about 6,650 km (4,130 mi)[n 1] long and its drainage basin covers eleven countries: Tanzania, Uganda, Rwanda, Burundi, the Democratic Republic of the Congo, Kenya, Ethiopia, Eritrea, South Sudan, Republic of the Sudan, and Egypt.[11] In particular, the Nile is the primary water source of Egypt and Sudan.[12]
+
+ The Nile has two major tributaries: the White Nile and the Blue Nile. The White Nile is considered the headwaters and primary stream of the Nile itself; the Blue Nile, however, is the source of most of the water, supplying about 80% of the water and silt. The White Nile is longer and rises in the Great Lakes region of central Africa, with the most distant source still undetermined but located in either Rwanda or Burundi. It flows north through Tanzania, Lake Victoria, Uganda and South Sudan. The Blue Nile begins at Lake Tana in Ethiopia[13] and flows into Sudan from the southeast. The two rivers meet just north of the Sudanese capital of Khartoum.[14]
+
+ The northern section of the river flows north almost entirely through the Sudanese desert to Egypt before ending in a large delta that empties into the Mediterranean Sea. Egyptian civilization and Sudanese kingdoms have depended on the river since ancient times. Most of the population and cities of Egypt lie along those parts of the Nile valley north of Aswan, and nearly all the cultural and historical sites of Ancient Egypt are found along its banks.
+
+ The standard English names "White Nile" and "Blue Nile" for the river's two main branches derive from Arabic names formerly applied only to the Sudanese stretches that meet at Khartoum.[15]
+
+ In the ancient Egyptian language, the Nile is called Ḥ'pī (Hapy) or Iteru, meaning "river". In Coptic, the word ⲫⲓⲁⲣⲟ, pronounced piaro (Sahidic) or phiaro (Bohairic), means "the river" (lit. p(h).iar-o "the.canal-great"), and comes from the same ancient name.[16]
+
+ In Nobiin the river is called Áman Dawū, meaning "the great water".[5]
+
+ In Egyptian Arabic, the Nile is called en-Nīl, while in Standard Arabic it is called an-Nīl. In Biblical Hebrew it is called הַיְאוֹר, Ha-Ye'or, or הַשִׁיחוֹר, Ha-Shiḥor.
+
+ The English name Nile and the Arabic names en-Nîl and an-Nîl both derive from the Latin Nilus and the Ancient Greek Νεῖλος.[17][18] Beyond that, however, the etymology is disputed.[18][19] Hesiod, in his Theogony, relates that Nilus (Νεῖλος) was one of the Potamoi (river gods), son of Oceanus and Tethys.[20] Another derivation of Nile might be related to the term Nil (Sanskrit: नील, romanized: nila; Egyptian Arabic: نيلة‎),[16] which refers to Indigofera tinctoria, one of the original sources of indigo dye;[21] or Nymphaea caerulea, known as "The Sacred Blue Lily of the Nile", which was found scattered over Tutankhamen's corpse when it was discovered in 1922.[22]
+
+ Another possible etymology derives it from the Semitic nahal, meaning "river".[23]
+
+ With a total length of about 6,650 km (4,130 mi)[n 1] between the region of Lake Victoria and the Mediterranean Sea, the Nile is the longest river on Earth. The drainage basin of the Nile covers 3,254,555 square kilometers (1,256,591 sq mi), about 10% of the area of Africa.[25] Compared to other major rivers, though, the Nile carries little water (about 5% of the Congo River's discharge, for example).[26] The Nile basin is complex, and because of this, the discharge at any given point along the mainstem depends on many factors, including weather, diversions, evaporation and evapotranspiration, and groundwater flow.
+
+ Above Khartoum, the Nile is also known as the White Nile, a term also used in a limited sense to describe the section between Lake No and Khartoum. At Khartoum the river is joined by the Blue Nile. The White Nile starts in equatorial East Africa, and the Blue Nile begins in Ethiopia. Both branches are on the western flanks of the East African Rift.
+
+ The source of the Nile is sometimes considered to be Lake Victoria, but the lake has feeder rivers of considerable size. The Kagera River, which flows into Lake Victoria near the Tanzanian town of Bukoba, is the longest feeder, although sources do not agree on which is the longest tributary of the Kagera and hence the most distant source of the Nile itself.[27] It is either the Ruvyironza, which emerges in Bururi Province, Burundi,[28] or the Nyabarongo, which flows from Nyungwe Forest in Rwanda.[29] The two feeder rivers meet near Rusumo Falls on the Rwanda-Tanzania border.
+
+ In 2010, an exploration party[30] went to a place described as the source of the Rukarara tributary[31] and, by hacking a path up steep, jungle-choked mountain slopes in the Nyungwe forest, traced (in the dry season) an appreciable incoming surface flow for many kilometres upstream, establishing a new source and giving the Nile a length of 6,758 km (4,199 mi).
+
+ Gish Abay is reportedly the place where the first drops of the Blue Nile, regarded as "holy water", emerge.[32]
+
+ The Nile leaves Lake Nalubaale (Victoria) at Ripon Falls near Jinja, Uganda, as the Victoria Nile. It flows north for some 130 kilometers (81 mi) to Lake Kyoga. The last part of the approximately 200 kilometers (120 mi) river section starts from the western shores of the lake and flows at first to the west until just south of Masindi Port, where the river turns north, then makes a great half circle to the east and north until Karuma Falls. For the remaining part it flows westerly through Murchison Falls until it reaches the northern shores of Lake Albert, where it forms a significant river delta. The lake itself is on the border of DR Congo, but the Nile is not a border river at this point. After leaving Lake Albert, the river continues north through Uganda and is known as the Albert Nile.
+
+ The Nile flows into South Sudan just south of Nimule, where it is known as the Bahr al Jabal ("Mountain River"[33]). Just south of the town it is joined by the Achwa River. The Bahr al Ghazal, itself 716 kilometers (445 mi) long, joins the Bahr al Jabal at a small lagoon called Lake No, after which the Nile becomes known as the Bahr al Abyad, or the White Nile, from the whitish clay suspended in its waters. When the Nile floods, it leaves a rich silty deposit which fertilizes the soil. The Nile no longer floods in Egypt since the completion of the Aswan Dam in 1970. An anabranch river, the Bahr el Zeraf, flows out of the Nile's Bahr al Jabal section and rejoins the White Nile.
+
+ The flow rate of the Bahr al Jabal at Mongalla, South Sudan is almost constant throughout the year and averages 1,048 m3/s (37,000 cu ft/s). After Mongalla, the Bahr al Jabal enters the enormous swamps of the Sudd region of South Sudan. More than half of the Nile's water is lost in this swamp to evaporation and transpiration. The average flow rate of the White Nile at the tails of the swamps is about 510 m3/s (18,000 cu ft/s). From here it soon meets with the Sobat River at Malakal. On an annual basis, the White Nile upstream of Malakal contributes about fifteen percent of the total outflow of the Nile.[34]
+
+ The average flow of the White Nile at Malakal, just below the Sobat River, is 924 m3/s (32,600 cu ft/s); the peak flow is approximately 1,218 m3/s (43,000 cu ft/s) in October and the minimum flow is about 609 m3/s (21,500 cu ft/s) in April. This fluctuation is due to the substantial variation in the flow of the Sobat, which has a minimum flow of about 99 m3/s (3,500 cu ft/s) in March and a peak flow of over 680 m3/s (24,000 cu ft/s) in October.[35] During the dry season (January to June) the White Nile contributes between 70 percent and 90 percent of the total discharge from the Nile.
+
+ Below Renk the White Nile enters Sudan; it flows north to Khartoum, where it meets the Blue Nile.
+
+ The course of the Nile in Sudan is distinctive. It flows over six groups of cataracts, from the sixth at Sabaloka just north of Khartoum northward to Abu Hamed. Due to the tectonic uplift of the Nubian Swell, the river is then diverted to flow for over 300 km south-west, following the structure of the Central African Shear Zone and embracing the Bayuda Desert. At Al Dabbah it resumes its northward course towards the First Cataract at Aswan, forming the 'S'-shaped Great Bend of the Nile[36] already mentioned by Eratosthenes.[37]
+
+ In the north of Sudan the river enters Lake Nasser (known in Sudan as Lake Nubia), the larger part of which is in Egypt.
+
+ Below the Aswan High Dam, at the northern limit of Lake Nasser, the Nile resumes its historic course.
+
+ North of Cairo, the Nile splits into two branches (or distributaries) that feed the Mediterranean: the Rosetta Branch to the west and the Damietta Branch to the east, forming the Nile Delta.
+
+ The annual sediment transport by the Nile in Egypt has been quantified.[38]
+
+ Below the confluence with the Blue Nile, the only major tributary is the Atbara River, roughly halfway to the sea, which originates in Ethiopia north of Lake Tana and is around 800 kilometers (500 mi) long. The Atbara flows only while there is rain in Ethiopia and dries very rapidly. During the dry period of January to June, it typically dries up north of Khartoum.
+
+ The Blue Nile (Amharic: ዓባይ, ʿĀbay[40][41]) springs from Lake Tana in the Ethiopian Highlands. The Blue Nile flows about 1,400 kilometres (870 mi) to Khartoum, where the Blue Nile and White Nile join to form the Nile.[42] Ninety percent of the water and ninety-six percent of the transported sediment carried by the Nile[43] originates in Ethiopia, with fifty-nine percent of the water coming from the Blue Nile (the rest being from the Tekezé, Atbarah, Sobat, and small tributaries). The erosion and transportation of silt occurs only during the Ethiopian rainy season in the summer, however, when rainfall is especially high on the Ethiopian Plateau; the rest of the year, the great rivers draining Ethiopia into the Nile (Sobat, Blue Nile, Tekezé, and Atbarah) have a weaker flow. In harsh and arid seasons and droughts the Blue Nile dries out completely.[44]
+
+ The flow of the Blue Nile varies considerably over its yearly cycle and is the main contributor to the large natural variation of the Nile flow. During the dry season the natural discharge of the Blue Nile can be as low as 113 m3/s (4,000 cu ft/s), although upstream dams regulate the flow of the river. During the wet season the peak flow of the Blue Nile often exceeds 5,663 m3/s (200,000 cu ft/s) in late August (a difference of a factor of 50).
+
+ Before the placement of dams on the river, the yearly discharge varied by a factor of 15 at Aswan. Peak flows of over 8,212 m3/s (290,000 cu ft/s) occurred during late August and early September, and minimum flows of about 552 m3/s (19,500 cu ft/s) occurred during late April and early May.
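+
+ As a quick arithmetic check on the two stated ratios (an illustrative calculation using only the figures quoted above, not additional source data):
+
+   5,663 / 113 ≈ 50   (Blue Nile: wet-season peak vs. dry-season low)
+   8,212 / 552 ≈ 15   (pre-dam Nile at Aswan: peak vs. minimum flow)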
+
+ The Bahr al Ghazal and the Sobat River are the two most important tributaries of the White Nile in terms of discharge.
+
+ The Bahr al Ghazal's drainage basin is the largest of any of the Nile's sub-basins, measuring 520,000 square kilometers (200,000 sq mi) in size, but it contributes a relatively small amount of water, about 2 m3/s (71 cu ft/s) annually, due to tremendous volumes of water being lost in the Sudd wetlands.
+
+ The Sobat River, which joins the Nile a short distance below Lake No, drains about half as much land, 225,000 km2 (86,900 sq mi), but contributes 412 cubic meters per second (14,500 cu ft/s) annually to the Nile.[45] When in flood the Sobat carries a large amount of sediment, adding greatly to the White Nile's color.[46]
+
+ The Yellow Nile is a former tributary that connected the Ouaddaï Highlands of eastern Chad to the Nile River Valley c. 8000 to c. 1000 BCE.[47] Its remains are known as the Wadi Howar. The wadi passes through Gharb Darfur near the northern border with Chad and meets up with the Nile near the southern point of the Great Bend.
+
+ The Nile (iteru in Ancient Egyptian) has been the lifeline of civilization in Egypt since the Stone Age, with most of the population and all of the cities of Egypt resting along those parts of the Nile valley lying north of Aswan. However, the Nile used to run much more westerly, through what is now Wadi Hamim and Wadi al Maqar in Libya, and flow into the Gulf of Sidra.[48] As sea level rose at the end of the most recent ice age, the stream which is now the northern Nile pirated the ancestral Nile near Asyut;[49] this change in climate also led to the creation of the current Sahara desert, around 3400 BC.[50]
+
+ The present Nile is at least the fifth river that has flowed north from the Ethiopian Highlands. Satellite imagery was used to identify dry watercourses in the desert to the west of the Nile. A canyon, now filled by surface drift, represents an ancestral Nile called the Eonile that flowed during the later Miocene (23–5.3 million years before present). The Eonile transported clastic sediments to the Mediterranean; several natural gas fields have been discovered within these sediments.
+
+ During the late-Miocene Messinian salinity crisis, when the Mediterranean Sea was a closed basin and evaporated to the point of being empty or nearly so, the Nile cut its course down to the new base level until it was several hundred metres below world ocean level at Aswan and 2,400 m (7,900 ft) below Cairo.[51][52] This created a very long and deep canyon, which was filled with sediment when the Mediterranean was recreated.[53] At some point the sediments raised the riverbed sufficiently for the river to overflow westward into a depression to create Lake Moeris.
+
+ Lake Tanganyika drained northwards into the Nile until the Virunga Volcanoes blocked its course in Rwanda. The Nile was much longer at that time, with its furthest headwaters in northern Zambia.
+
+ There are two theories about the age of the integrated Nile. One is that the integrated drainage of the Nile is of young age, and that the Nile basin was formerly broken into a series of separate basins, only the most northerly of which fed a river following the present course of the Nile in Egypt and Sudan. Rushdi Said postulated that Egypt itself supplied most of the waters of the Nile during the early part of its history.[54]
+
+ The other theory is that the drainage from Ethiopia, via rivers equivalent to the Blue Nile, the Atbara and the Takazze, has flowed to the Mediterranean via the Egyptian Nile since well back into Tertiary times.[55]
+
+ Salama suggested that during the Paleogene and Neogene Periods (66 million to 2.588 million years ago) a series of separate closed continental basins each occupied one of the major parts of the Sudanese Rift System: the Mellut rift, White Nile rift, Blue Nile rift, Atbara rift and Sag El Naam rift.[56]
+ The Mellut Rift Basin is nearly 12 kilometers (7.5 mi) deep at its central part. This rift is possibly still active, with reported tectonic activity in its northern and southern boundaries. The Sudd swamps which form the central part of the basin may still be subsiding. The White Nile Rift System, although shallower than the Bahr el Arab rift, is about 9 kilometers (5.6 mi) deep. Geophysical exploration of the Blue Nile Rift System estimated the depth of the sediments to be 5–9 kilometers (3.1–5.6 mi). These basins were not interconnected until their subsidence ceased and the rate of sediment deposition was enough to fill and connect them. The Egyptian Nile connected to the Sudanese Nile, which captured the Ethiopian and Equatorial headwaters, during the current stages of tectonic activity in the Eastern, Central and Sudanese Rift Systems.[57] The connection of the different Niles occurred during cyclic wet periods. The River Atbara overflowed its closed basin during the wet periods that occurred about 100,000 to 120,000 years ago. The Blue Nile connected to the main Nile during the 70,000–80,000 years B.P. wet period. The White Nile system in the Bahr El Arab and White Nile Rifts remained a closed lake until the connection of the Victoria Nile to the main system some 12,500 years ago, during the African humid period.
+
+ The Greek historian Herodotus wrote that "Egypt was the gift of the Nile". An unending source of sustenance, it played a crucial role in the development of Egyptian civilization. Because the river overflowed its banks annually and deposited new layers of silt, the surrounding land was very fertile. The Ancient Egyptians cultivated and traded wheat, flax, papyrus and other crops around the Nile. Wheat was a crucial crop in the famine-plagued Middle East. This trading system secured Egypt's diplomatic relationships with other countries, and contributed to economic stability. Far-reaching trade has been carried on along the Nile since ancient times. The Hymn to the Nile was composed and sung by the ancient Egyptian peoples about the flooding of the Nile River and all of the miracles it brought to Ancient Egyptian civilization.[58]
+
+ Water buffalo were introduced from Asia, and the Assyrians introduced camels in the 7th century BC. These animals were killed for meat, and were domesticated and used for ploughing or, in the camels' case, carriage. Water was vital to both people and livestock. The Nile was also a convenient and efficient means of transportation for people and goods.
+
+ The Nile was also an important part of ancient Egyptian spiritual life. Hapi was the god of the annual floods, and both he and the pharaoh were thought to control the flooding. The Nile was considered to be a causeway from life to death and the afterlife. The east was thought of as a place of birth and growth, and the west was considered the place of death, as the god Ra, the Sun, underwent birth, death, and resurrection each day as he crossed the sky. Thus, all tombs were west of the Nile, because the Egyptians believed that in order to enter the afterlife, they had to be buried on the side that symbolized death.
+
+ As the Nile was such an important factor in Egyptian life, the ancient calendar was even based on the three cycles of the Nile. These seasons, each consisting of four months of thirty days each, were called Akhet, Peret, and Shemu. Akhet, which means inundation, was the time of the year when the Nile flooded, leaving several layers of fertile soil behind, aiding in agricultural growth.[59] Peret was the growing season, and Shemu, the last season, was the harvest season when there were no rains.[59]
+
+ Owing to their failure to penetrate the Sudd wetlands of South Sudan, the upper reaches of the White Nile remained largely unknown to the ancient Greeks and Romans. Various expeditions failed to determine the river's source. Agatharchides records that in the time of Ptolemy II Philadelphus, a military expedition had penetrated far enough along the course of the Blue Nile to determine that the summer floods were caused by heavy seasonal rainstorms in the Ethiopian Highlands, but no European of antiquity is known to have reached Lake Tana.
+
+ The Tabula Rogeriana depicted the source as three lakes in 1154.
+
+ Europeans began to learn about the origins of the Nile in the fourteenth century, when the Pope sent monks as emissaries to Mongolia who passed through India, the Middle East and Africa, and described being told of the source of the Nile in Abyssinia (Ethiopia).[61][62] Later in the fifteenth and sixteenth centuries, travelers to Ethiopia visited Lake Tana and the source of the Blue Nile in the mountains south of the lake. Although James Bruce claimed to be the first European to have visited the headwaters,[63] modern writers give the credit to the Jesuit Pedro Páez. Páez's account of the source of the Nile[64] is a long and vivid account of Ethiopia. It was published in full only in the early twentieth century, although it was featured in works of Páez's contemporaries, including Baltazar Téllez,[65] Athanasius Kircher[66] and Johann Michael Vansleb.[67]
+
+ Europeans had been resident in Ethiopia since the late fifteenth century, and one of them may have visited the headwaters even earlier without leaving a written trace. The Portuguese João Bermudes published the first description of the Tis Issat Falls in his 1565 memoirs, comparing them to the Nile Falls alluded to in Cicero's De Republica.[68] Jerónimo Lobo, who visited shortly after Pedro Páez, also described the source of the Blue Nile; Téllez likewise used his account.
+
+ The White Nile was even less understood. The ancients mistakenly believed that the Niger River represented the upper reaches of the White Nile. For example, Pliny the Elder wrote that the Nile had its origins "in a mountain of lower Mauretania", flowed above ground for "many days" distance, then went underground, reappeared as a large lake in the territories of the Masaesyli, then sank again below the desert to flow underground "for a distance of 20 days' journey till it reaches the nearest Ethiopians."[69] A merchant named Diogenes reported that the Nile's water attracted game such as buffalo.
+
+ Lake Victoria was first sighted by Europeans in 1858, when British explorer John Hanning Speke reached its southern shore while traveling with Richard Francis Burton to explore central Africa and locate the great lakes. Believing he had found the source of the Nile on seeing this "vast expanse of open water" for the first time, Speke named the lake after the then Queen of the United Kingdom. Burton, recovering from illness and resting further south on the shores of Lake Tanganyika, was outraged that Speke claimed to have proved his discovery to be the true source of the Nile, which Burton regarded as still unsettled. A very public quarrel ensued, which sparked a great deal of intense debate within the scientific community and interest by other explorers keen to either confirm or refute Speke's discovery. British explorer and missionary David Livingstone pushed too far west and entered the Congo River system instead. It was ultimately Welsh-American explorer Henry Morton Stanley who confirmed Speke's discovery, circumnavigating Lake Victoria and reporting the great outflow at Ripon Falls on the lake's northern shore.
+
+ European involvement in Egypt goes back to the time of Napoleon. Laird Shipyard of Liverpool sent an iron steamer to the Nile in the 1830s. With the completion of the Suez Canal and the British takeover of Egypt in 1882, more British river steamers followed.
+
+ The Nile is the area's natural navigation channel, giving access to Khartoum and Sudan by steamer. The Siege of Khartoum was broken with purpose-built sternwheelers shipped from England and steamed up the river to retake the city. After this came regular steam navigation of the river. With British forces in Egypt in the First World War and the inter-war years, river steamers provided both security and sightseeing to the Pyramids and Thebes. Steam navigation remained integral to the two countries as late as 1962. Sudan steamer traffic was a lifeline, as few railways or roads were built in that country. Most paddle steamers have been retired to shorefront service, but modern diesel tourist boats remain on the river.
+ The Nile has long been used to transport goods along its length. Winter winds blow south, upriver, so ships could sail upriver under sail and return downriver on the flow of the river.
+ While most Egyptians still live in the Nile valley, the 1970 completion of the Aswan High Dam ended the summer floods and their renewal of the fertile soil, fundamentally changing farming practices. The Nile supports much of the population living along its banks, enabling Egyptians to live in otherwise inhospitable regions of the Sahara. The river's flow is disturbed at several points by the Cataracts of the Nile, sections of faster-flowing water with many small islands, shallow water, and rocks that form an obstacle to navigation by boats. The Sudd wetlands in Sudan also form a formidable navigation obstacle and impede water flow, to the extent that Sudan once attempted to dig a canal (the Jonglei Canal) to bypass the swamps.[71][72]
+
+ Nile cities include Khartoum, Aswan, Luxor (Thebes), and the Giza – Cairo conurbation. The first cataract, the closest to the mouth of the river, is at Aswan, north of the Aswan Dam. This part of the river is a regular tourist route, with cruise ships and traditional wooden sailing boats known as feluccas. Many cruise ships ply the route between Luxor and Aswan, stopping at Edfu and Kom Ombo along the way. Security concerns have limited cruising on the northernmost portion for many years.
+
+ A computer simulation study to plan the economic development of the Nile was directed by H.A.W. Morrice and W.N. Allan, for the Ministry of Hydro-power of the Republic of the Sudan, during 1955–1957.[73][74][75] Morrice was their Hydrological Adviser, and Allan his predecessor. M.P. Barnett directed the software development and computer operations. The calculations were enabled by accurate monthly inflow data collected for 50 years. The underlying principle was the use of over-year storage, to conserve water from rainy years for use in dry years. Irrigation, navigation and other needs were considered. Each computer run postulated a set of reservoirs and operating equations for the release of water as a function of the month and the levels upstream. The behavior that would have resulted given the inflow data was modeled. Over 600 models were run. Recommendations were made to the Sudanese authorities. The calculations were run on an IBM 650 computer. Simulation studies to design water resources are discussed further in the article on hydrology transport models, which have been used since the 1980s to analyze water quality.
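+
+ The over-year storage principle lends itself to a compact illustration. The sketch below is not the Morrice–Allan model itself; it is a minimal single-reservoir monthly mass balance, with an invented constant demand and invented inflow figures, showing how surplus carried over from a wet year can cover part of a following dry year:
+
+   # Minimal over-year storage sketch (illustrative; all figures invented).
+   def simulate(inflows, capacity, demand, storage=0.0):
+       """Step one reservoir through monthly inflows (volumes in any
+       consistent unit, e.g. million cubic metres per month)."""
+       shortfalls = []
+       for inflow in inflows:
+           storage += inflow                     # water arriving this month
+           release = min(demand, storage)        # meet demand if possible
+           storage = min(storage - release, capacity)  # excess spills downstream
+           shortfalls.append(demand - release)   # unmet demand, if any
+       return shortfalls
+
+   # Two toy years: a wet year followed by a dry one.
+   wet = [900, 800, 700, 500, 300, 200, 150, 150, 300, 600, 800, 900]
+   dry = [x * 0.4 for x in wet]
+   deficits = simulate(wet + dry, capacity=5000, demand=400)
+   print(sum(1 for d in deficits if d > 0), "months with shortfall")
+
+ In the actual study, the constant demand above would be replaced by operating equations giving the release as a function of the month and the levels upstream, evaluated against the 50 years of recorded monthly inflows.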
+
+ Despite the development of many reservoirs, drought during the 1980s led to widespread starvation in Ethiopia and Sudan, although Egypt was nourished by water impounded in Lake Nasser. Drought has proven to be a major cause of fatality in the Nile river basin. According to a report by the Strategic Foresight Group, around 170 million people have been affected by droughts in the last century, with half a million lives lost.[76] Of the 70 incidents of drought which took place between 1900 and 2012, 55 took place in Ethiopia, Sudan, South Sudan, Kenya and Tanzania.[76]
+
+ The Nile's water has affected the politics of East Africa and the Horn of Africa for many decades. The dispute between Egypt and Ethiopia over the $4.5 billion Grand Ethiopian Renaissance Dam (Africa's largest, with a reservoir about the size of London) has become a national preoccupation in both countries, stoking patriotism, deep-seated fears and even murmurs of war.[77] Countries including Uganda, Sudan, Ethiopia and Kenya have complained about Egyptian domination of the river's water resources. The Nile Basin Initiative promotes peaceful cooperation among those states.[78][79]
+
+ Several attempts have been made to establish agreements between the countries sharing the Nile waters. On 14 May 2010 at Entebbe, Ethiopia, Rwanda, Tanzania and Uganda signed a new agreement on sharing the Nile water, even though this agreement raised strong opposition from Egypt and Sudan. Ideally, such international agreements should promote equitable and efficient usage of the Nile basin's water resources. Without a better understanding of the availability of the future water resources of the Nile, conflicts could arise between the countries relying on the Nile for their water supply and economic and social development.[12]
+
+ In 1951, the American John Goddard, together with two French explorers, became the first to successfully navigate the entire Nile, from its potential headsprings at the Kagera River in Burundi to its mouth on the Mediterranean Sea, a journey of approximately 6,800 km (4,200 mi). Their nine-month journey is described in the book Kayaks down the Nile.[80]
+
+ The White Nile Expedition, led by South African national Hendrik Coetzee, navigated the White Nile's entire length of approximately 5,800 kilometres (3,600 mi). The expedition began at the White Nile's beginning at Lake Victoria in Uganda, on 17 January 2004, and arrived safely at the Mediterranean in Rosetta four and a half months later.[81]
+
+ The Blue Nile Expedition, led by geologist Pasquale Scaturro and his partner, kayaker and documentary filmmaker Gordon Brown, became the first known people to descend the entire Blue Nile, from Lake Tana in Ethiopia to the beaches of Alexandria on the Mediterranean. Their approximately 5,230-kilometre (3,250 mi) journey took 114 days, from 25 December 2003 to 28 April 2004. Though their expedition included others, Brown and Scaturro were the only ones to complete the entire journey.[82] Although they descended whitewater manually, the team used outboard motors for much of their journey.
+
+ On 29 January 2005, Canadian Les Jickling and New Zealander Mark Tanner completed the first human-powered transit of Ethiopia's Blue Nile. Their journey of over 5,000 kilometres (3,100 mi) took five months. They recount that they paddled through two war zones, regions notorious for bandits, and were arrested at gunpoint.[83]
+
+ The following bridges cross the Blue Nile and connect Khartoum to Khartoum North:
+
+ The following bridges cross the White Nile and connect Khartoum to Omdurman:
+
+ The following bridges cross from Omdurman to Khartoum North:
+
+ The following bridges cross to Tuti Island from the three cities of Khartoum state:
+
+ Other bridges
+
+ Riverboat on the Nile, Egypt 1900
+
+ Marsh along the Nile
+
+ A river boat crossing the Nile in Uganda
+
+ Murchison Falls in Uganda, between Lake Victoria and Lake Kyoga
+
+ The Nile in Luxor
+
+ The Nile flows through Cairo, here contrasting ancient customs of daily life with the modern city of today.
+
+ The following is an annotated bibliography of key written documents for the Western exploration of the Nile.
+
+ 17th century
+
+ 18th century
+
+ 1800–1850
+
+ 1850–1900
en/5903.html.txt ADDED
@@ -0,0 +1,204 @@
+ A vampire is a creature from folklore that subsists by feeding on the vital essence (generally in the form of blood) of the living. In European folklore, vampires are undead creatures that often visited loved ones and caused mischief or deaths in the neighborhoods they inhabited while they were alive. They wore shrouds and were often described as bloated and of ruddy or dark countenance, markedly different from today's gaunt, pale vampire which dates from the early 19th century.
+
+ Vampiric entities have been recorded in most cultures; the term vampire was popularized in Western Europe after reports of an 18th-century mass hysteria of a pre-existing folk belief in the Balkans and Eastern Europe that in some cases resulted in corpses being staked and people being accused of vampirism.[1] Local variants in Eastern Europe were also known by different names, such as shtriga in Albania, vrykolakas in Greece and strigoi in Romania.
+
+ In modern times, the vampire is generally held to be a fictitious entity, although belief in similar vampiric creatures such as the chupacabra still persists in some cultures. Early folk belief in vampires has sometimes been ascribed to ignorance of the body's process of decomposition after death and how people in pre-industrial societies tried to rationalize it, creating the figure of the vampire to explain the mysteries of death. Porphyria was linked with legends of vampirism in 1985 and received much media exposure, but the link has since been largely discredited.[2][3]
+
+ The charismatic and sophisticated vampire of modern fiction was born in 1819 with the publication of "The Vampyre" by the English writer John Polidori; the story was highly successful and arguably the most influential vampire work of the early 19th century.[4] Bram Stoker's 1897 novel Dracula is remembered as the quintessential vampire novel and provided the basis of the modern vampire legend, even though it was published after Joseph Sheridan Le Fanu's 1872 novella Carmilla. The success of Stoker's book spawned a distinctive vampire genre, still popular in the 21st century, with books, films, television shows, and video games. The vampire has since become a dominant figure in the horror genre.
+
+ The Oxford English Dictionary dates the first appearance of the English word vampire (as vampyre) to 1734, in a travelogue titled Travels of Three English Gentlemen published in The Harleian Miscellany in 1745.[5] Vampires had already been discussed in French[6] and German literature.[7] After Austria gained control of northern Serbia and Oltenia with the Treaty of Passarowitz in 1718, officials noted the local practice of exhuming bodies and "killing vampires".[7] These reports, prepared between 1725 and 1732, received widespread publicity.[7] The English term was derived (possibly via French vampyre) from the German Vampir, in turn derived in the early 18th century from the Serbian vampir (Serbian Cyrillic: вампир).[8][9][10][11]
+
+ The Serbian form has parallels in virtually all Slavic languages: Bulgarian and Macedonian вампир (vampir), Bosnian: vampir/вампир, Croatian vampir, Czech and Slovak upír, Polish wąpierz, and (perhaps East Slavic-influenced) upiór, Ukrainian упир (upyr), Russian упырь (upyr'), Belarusian упыр (upyr), from Old East Slavic упирь (upir') (many of these languages have also borrowed forms such as "vampir/wampir" subsequently from the West; these are distinct from the original local words for the creature). The exact etymology is unclear.[12] Among the proposed proto-Slavic forms are *ǫpyrь and *ǫpirь.[13]
+
+ Another less widespread theory is that the Slavic languages have borrowed the word from a Turkic term for "witch" (e.g., Tatar ubyr).[13][14] Czech linguist Václav Machek proposes the Slovak verb "vrepiť sa" (stick to, thrust into), or its hypothetical anagram "vperiť sa" (in Czech, the archaic verb "vpeřit" means "to thrust violently"), as an etymological background, and thus translates "upír" as "someone who thrusts, bites".[15] An early use of the Old Russian word is in the anti-pagan treatise "Word of Saint Grigoriy" (Russian Слово святого Григория), dated variously to the 11th–13th centuries, where pagan worship of upyri is reported.[16][17]
+
+ The notion of vampirism has existed for millennia. Cultures such as the Mesopotamians, Hebrews, Ancient Greeks, Manipuri and Romans had tales of demons and spirits which are considered precursors to modern vampires. Despite the occurrence of vampiric creatures in these ancient civilizations, the folklore for the entity known today as the vampire originates almost exclusively from early 18th-century southeastern Europe,[1] when verbal traditions of many ethnic groups of the region were recorded and published. In most cases, vampires are revenants of evil beings, suicide victims, or witches, but they can also be created by a malevolent spirit possessing a corpse or by being bitten by a vampire. Belief in such legends became so pervasive that in some areas it caused mass hysteria and even public executions of people believed to be vampires.[18]
+
+ It is difficult to make a single, definitive description of the folkloric vampire, though there are several elements common to many European legends. Vampires were usually reported as bloated in appearance, and ruddy, purplish, or dark in colour; these characteristics were often attributed to the recent drinking of blood. Blood was often seen seeping from the mouth and nose of a vampire lying in its shroud or coffin, and its left eye was often open.[19] It would be clad in the linen shroud it was buried in, and its teeth, hair, and nails may have grown somewhat, though in general fangs were not a feature.[20] Although vampires were generally described as undead, some folk tales spoke of them as living beings.[21][22]
+
+ The causes of vampiric generation were many and varied in original folklore. In Slavic and Chinese traditions, any corpse that was jumped over by an animal, particularly a dog or a cat, was feared to become one of the undead.[23] A body with a wound that had not been treated with boiling water was also at risk. In Russian folklore, vampires were said to have once been witches or people who had rebelled against the Russian Orthodox Church while they were alive.[24]
+
+ Cultural practices often arose that were intended to prevent a recently deceased loved one from turning into an undead revenant. Burying a corpse upside-down was widespread, as was placing earthly objects, such as scythes or sickles,[25] near the grave to satisfy any demons entering the body or to appease the dead so that they would not wish to arise from their coffins. This method resembles the ancient Greek practice of placing an obolus in the corpse's mouth to pay the toll to cross the River Styx in the underworld. It has been argued that the coin was instead intended to ward off any evil spirits from entering the body, and this may have influenced later vampire folklore. This tradition persisted in modern Greek folklore about the vrykolakas, in which a wax cross and piece of pottery with the inscription "Jesus Christ conquers" were placed on the corpse to prevent the body from becoming a vampire.[26]
+
+ Other methods commonly practised in Europe included severing the tendons at the knees or placing poppy seeds, millet, or sand on the ground at the grave site of a presumed vampire; this was intended to keep the vampire occupied all night counting the fallen grains,[27] indicating an association of vampires with arithmomania. Similar Chinese narratives state that if a vampiric being came across a sack of rice, it would have to count every grain; this is a theme encountered in myths from the Indian subcontinent, as well as in South American tales of witches and other sorts of evil or mischievous spirits or beings.[28]
+
+ In Albanian folklore, the dhampir is the hybrid child of the karkanxholl (a lycanthropic creature with an iron mail shirt) or the lugat (a water-dwelling ghost or monster). The dhampir sprung from a karkanxholl has the unique ability to discern the karkanxholl; from this derives the expression "the dhampir knows the lugat". The lugat cannot be seen, and he can only be killed by the dhampir, who himself is usually the son of a lugat. In different regions, animals can become revenants as lugats, as can living people during their sleep. Dhampiraj is also an Albanian surname.[29]
+
+ Many rituals were used to identify a vampire. One method of finding a vampire's grave involved leading a virgin boy through a graveyard or church grounds on a virgin stallion; the horse would supposedly balk at the grave in question.[24] Generally a black horse was required, though in Albania it should be white.[30] Holes appearing in the earth over a grave were taken as a sign of vampirism.[31]
+
+ Corpses thought to be vampires were generally described as having a healthier appearance than expected, plump and showing little or no signs of decomposition.[32] In some cases, when suspected graves were opened, villagers even described the corpse as having fresh blood from a victim all over its face.[33] Evidence that a vampire was active in a given locality included the death of cattle, sheep, relatives or neighbours. Folkloric vampires could also make their presence felt by engaging in minor poltergeist-styled activity, such as hurling stones on roofs or moving household objects,[34] and by pressing on people in their sleep.[35]
+
+ Apotropaics, items able to ward off revenants, are common in vampire folklore. Garlic is a common example;[36] a branch of wild rose and hawthorn are likewise said to harm vampires, and in Europe, sprinkling mustard seeds on the roof of a house was said to keep them away.[38] Other apotropaics include sacred items, for example a crucifix, rosary, or holy water. Vampires are said to be unable to walk on consecrated ground, such as that of churches or temples, or to cross running water.[37]
+
+ Although not traditionally regarded as an apotropaic, mirrors have been used to ward off vampires when placed, facing outwards, on a door (in some cultures, vampires do not have a reflection and sometimes do not cast a shadow, perhaps as a manifestation of the vampire's lack of a soul).[39] This attribute is not universal (the Greek vrykolakas/tympanios was capable of both reflection and shadow), but it was used by Bram Stoker in Dracula and has remained popular with subsequent authors and filmmakers.[40]
+
+ Some traditions also hold that a vampire cannot enter a house unless invited by the owner; after the first invitation they can come and go as they please.[39] Though folkloric vampires were believed to be more active at night, they were not generally considered vulnerable to sunlight.[40]
+
+ Methods of destroying suspected vampires varied, with staking the most commonly cited method, particularly in southern Slavic cultures.[42] Ash was the preferred wood in Russia and the Baltic states,[43] or hawthorn in Serbia,[44] with a record of oak in Silesia.[45][46] Aspen was also used for stakes, as it was believed that Christ's cross was made from aspen (aspen branches on the graves of purported vampires were also believed to prevent their rising at night).[47] Potential vampires were most often staked through the heart, though the mouth was targeted in Russia and northern Germany[48][49] and the stomach in north-eastern Serbia.[50]
+
+ Piercing the skin of the chest was a way of "deflating" the bloated vampire. This is similar to the practice of "anti-vampire burial": burying sharp objects, such as sickles, with the corpse, so that they may penetrate the skin if the body bloats sufficiently while transforming into a revenant.[51]
+
+ Decapitation was the preferred method in German and western Slavic areas, with the head buried between the feet, behind the buttocks or away from the body.[42] This act was seen as a way of hastening the departure of the soul, which in some cultures was said to linger in the corpse. The vampire's head, body, or clothes could also be spiked and pinned to the earth to prevent rising.[52]
+
+ Romani people drove steel or iron needles into a corpse's heart and placed bits of steel in the mouth, over the eyes and ears, and between the fingers at the time of burial. They also placed hawthorn in the corpse's sock or drove a hawthorn stake through the legs. In a 16th-century burial near Venice, a brick forced into the mouth of a female corpse has been interpreted as a vampire-slaying ritual by the archaeologists who discovered it in 2006.[54] In Bulgaria, over 100 skeletons with metal objects, such as plough bits, embedded in the torso have been discovered.[53]
+
+ Further measures included pouring boiling water over the grave or complete incineration of the body. In the Balkans, a vampire could also be killed by being shot or drowned, by repeating the funeral service, by sprinkling holy water on the body, or by exorcism. In Romania, garlic could be placed in the mouth, and as recently as the 19th century, the precaution of shooting a bullet through the coffin was taken. For resistant cases, the body was dismembered and the pieces burned, mixed with water, and administered to family members as a cure. In Saxon regions of Germany, a lemon was placed in the mouth of suspected vampires.[55]
+
+ Tales of supernatural beings consuming the blood or flesh of the living have been found in nearly every culture around the world for many centuries.[56] The term vampire did not exist in ancient times. Blood drinking and similar activities were attributed to demons or spirits who would eat flesh and drink blood; even the devil was considered synonymous with the vampire.[57]
+
+ Almost every nation has associated blood drinking with some kind of revenant or demon, or in some cases a deity. In India, for example, tales of vetālas, ghoulish beings that inhabit corpses, have been compiled in the Baitāl Pacīsī; a prominent story in the Kathāsaritsāgara tells of King Vikramāditya and his nightly quests to capture an elusive one.[58] Piśāca, the returned spirits of evil-doers or those who died insane, also bear vampiric attributes.[59]
+
+ The Persians were one of the first civilizations to have tales of blood-drinking demons: creatures attempting to drink blood from men were depicted on excavated pottery shards.[60] Ancient Babylonia and Assyria had tales of the mythical Lilitu,[61] synonymous with and giving rise to Lilith (Hebrew לילית) and her daughters the Lilu from Hebrew demonology. Lilitu was considered a demon and was often depicted as subsisting on the blood of babies.[61] Estries, female shapeshifting, blood-drinking demons, were said to roam the night among the population, seeking victims. According to Sefer Hasidim, estries were creatures created in the twilight hours before God rested. An injured estrie could be healed by eating bread and salt given to her by her attacker.[62]
+
+ Greco-Roman mythology described the Empusae,[63] the Lamia,[64] and the striges. Over time the first two terms became general words to describe witches and demons respectively. Empusa was the daughter of the goddess Hecate and was described as a demonic, bronze-footed creature. She transformed into a young woman and seduced men as they slept before feasting on their blood.[63] The Lamia preyed on young children in their beds at night, sucking their blood, as did the gelloudes or Gello.[64] Like the Lamia, the striges feasted on children, but they also preyed on adults. They were described as having the bodies of crows or birds in general, and were later incorporated into Roman mythology as strix, a kind of nocturnal bird that fed on human flesh and blood.[65]
+
+ Many myths surrounding vampires originated during the medieval period. The 12th-century British historians and chroniclers Walter Map and William of Newburgh recorded accounts of revenants,[18][66] though records in English legends of vampiric beings after this date are scant.[67] The Old Norse draugr is another medieval example of an undead creature with similarities to vampires.[68] Vampiric beings were rarely written about in Jewish literature; the 16th-century rabbi David ben Solomon ibn Abi Zimra (Radbaz) wrote of an uncharitable old woman whose body was unguarded and unburied for three days after she died and who rose as a vampiric entity, killing hundreds of people. He linked this event to the lack of a shmirah (guarding) after death, as the corpse could be a vessel for evil spirits.[69]
+
+ Vampires properly originating in folklore were widely reported from Eastern Europe in the late 17th and 18th centuries. These tales formed the basis of the vampire legend that later entered Germany and England, where they were subsequently embellished and popularized. One of the earliest recordings of vampire activity came from the region of Istria in modern Croatia, in 1672.[70] Reports cited the local vampire Jure Grando of the village Kringa as the cause of panic among the villagers.[71] A former peasant, Jure died in 1656. Local villagers claimed he returned from the dead and began drinking blood from the people and sexually harassing his widow. The village leader ordered a stake to be driven through his heart, but when that method failed to kill him, he was subsequently beheaded, with better results.[72]
+
+ During the 18th century, there was a frenzy of vampire sightings in Eastern Europe, with frequent stakings and grave diggings to identify and kill the potential revenants. Even government officials engaged in the hunting and staking of vampires.[73] Despite being called the Age of Enlightenment, during which most folkloric legends were quelled, the belief in vampires increased dramatically, resulting in a mass hysteria throughout most of Europe.[18] The panic began with an outbreak of alleged vampire attacks in East Prussia in 1721 and in the Habsburg Monarchy from 1725 to 1734, which spread to other localities. Two infamous vampire cases, the first to be officially recorded, involved the corpses of Petar Blagojevich and Miloš Čečar from Serbia. Blagojevich was reported to have died at the age of 62, but allegedly returned after his death asking his son for food. When the son refused, he was found dead the following day. Blagojevich supposedly returned and attacked some neighbours, who died from loss of blood.[73]
+
+ In the second case, Miloš, an ex-soldier-turned-farmer who allegedly was attacked by a vampire years before, died while haying. After his death, people began to die in the surrounding area and it was widely believed that Miloš had returned to prey on the neighbours.[74][75] Another infamous Serbian vampire legend recounts the story of a certain Sava Savanović, who lived in a watermill and killed and drank blood from the millers. The character was later used in a story written by Serbian writer Milovan Glišić and in the 1973 Yugoslav horror film Leptirica, inspired by the story.[76]
+
+ The two incidents were well-documented. Government officials examined the bodies, wrote case reports, and published books throughout Europe.[75] The hysteria, commonly referred to as the "18th-Century Vampire Controversy", raged for a generation. The problem was exacerbated by rural epidemics of so-called vampire attacks, fueled by the superstition prevalent in village communities, with locals digging up bodies and, in some cases, staking them.[77]
+
+ In 1597, King James wrote a dissertation on witchcraft titled Daemonologie, in which he set out the belief that demons could possess both the living and the dead. Within his classification of demons, he explained the concept through the notion that incubi and succubae could possess the corpse of the deceased and walk the earth: as a devil borrows a dead body, it would appear visible and natural to any man who converses with it, though any substance within the body would remain intolerably cold to those it abuses.[78]
+
+ In 1645 the Greek librarian of the Vatican, Leo Allatius, produced the first methodological description of the Balkan beliefs in vampires (Greek: vrykolakas) in his work De Graecorum hodie quorundam opinationibus ("On certain modern opinions among the Greeks").[79]
+
+ In 1652, the Wallachian Voivode Matei Basarab passed the first law that mentioned the belief in vampires (in Romanian "Strigoi"), called Îndreptarea legii (The right-making of the law). The relevant paragraph contains the opinion and recommendation of the Patriarch Postnicul over "The deceased, which they will learn to be Strigoi, which is called vrykolakas, what needs to be done". The Patriarch proceeds to describe the belief:[80]
+
+ I've heard in many cities and towns, it's said, some dreadful things being done, which are below praise and great foolishness and lack of knowledge of people over the work of the devil. For that our enemy, the most unclean, the devil where he finds an empty place to dwell and do his will, there he indeed dwells and many times with deceiving apparitions towards lots of [bad] deeds he lures the people and leads them towards his will in order that every wretch people like them to sink and drown in the depth of the damnation of the eternal fire. There are some foolish people that say that many times when people die, they rise and become Strigoi and kill those alive, which death comes in a violent way and quick towards many people.
+
+ The patriarch describes the Strigoi sightings (especially the blood on a long-deceased body) as demonic deception and forbids anyone, especially the clergy, from desecrating the graves or burning the bodies of the dead, calling it a sin for which they would end up in Hell.
+ Even though it was not permitted to desecrate the grave of the dead person in any way or to burn the dead body, the patriarch offers some remedies in the event of such demonic apparitions:
+
+ And then you must know if they will learn about such a [dead] body which is the work of the devil, call the priest to read the Paraklesis of the Theotokos and he shall perform the House blessing service, and shall perform liturgy and make Holy Water in aid of everyone and shall also give Koliva as alms and thereafter he shall say the curse of the devil exorcism Exorcism of St. John Chrysostom. And the both exorcisms performed at Baptism you shall read towards those bones [of the dead]. And then the Holy Water from the House Blessing liturgy you shall splash the people which will happen to be there and then more Holy Water you shall pour over that dead body and with the gift of Christ, the devil shall perish.[81]
+
+ From 1679, Philippe Rohr devoted an essay to the dead who chew their shrouds in their graves, a subject resumed by Otto in 1732, and then by Michael Ranft in 1734. The subject was based on the observation that, when digging up graves, it was discovered that some corpses had at some point either devoured the interior fabric of their coffin or their own limbs.[82] Ranft described in his treatise a tradition in some parts of Germany of placing a mound of dirt under the chin of the deceased, placing a piece of money and a stone in the mouth, or tying a handkerchief tightly around the throat, to prevent the dead from masticating.[83] In 1732 an anonymous writer, writing as "the doctor Weimar", discussed the non-putrefaction of these creatures from a theological point of view.[84] In 1733, Johann Christoph Harenberg wrote a general treatise on vampirism, and the Marquis d'Argens cited local cases. Theologians and clergymen also addressed the topic.[85]
+
+ Some theological disputes arose. The non-decay of vampires' bodies could recall the incorruption of the bodies of the saints of the Catholic Church. A paragraph on vampires was included in the second edition (1749) of De servorum Dei beatificatione et sanctorum canonizatione (On the beatification of the servants of God and on canonization of the blessed), written by Prospero Lambertini (Pope Benedict XIV).[86] In his opinion, while the incorruption of the bodies of saints was the effect of a divine intervention, all the phenomena attributed to vampires were purely natural or the fruit of "imagination, terror and fear". In other words, vampires did not exist.[87]
+
+ Dom Augustine Calmet, a French theologian and scholar, published a comprehensive treatise in 1751 titled Treatise on the Apparitions of Spirits and on Vampires or Revenants, which investigated the existence of vampires, demons, and spectres. Calmet conducted extensive research, amassing judicial reports of vampiric incidents and studying theological and mythological accounts as well, and used the scientific method in his analysis to devise methods for determining the validity of cases of this nature. As he stated in his treatise:[88]
+
+ They see, it is said, men who have been dead for several months, come back to earth, talk, walk, infest villages, ill use both men and beasts, suck the blood of their near relations, make them ill, and finally cause their death; so that people can only save themselves from their dangerous visits and their hauntings by exhuming them, impaling them, cutting off their heads, tearing out the heart, or burning them. These revenants are called by the name of oupires or vampires, that is to say, leeches; and such particulars are related of them, so singular, so detailed, and invested with such probable circumstances and such judicial information, that one can hardly refuse to credit the belief which is held in those countries, that these revenants come out of their tombs and produce those effects which are proclaimed of them.
+
+ Calmet had numerous readers, including both a critical Voltaire and numerous supportive demonologists who interpreted the treatise as claiming that vampires existed.[77] In the Philosophical Dictionary, Voltaire wrote:[89]
+
+ These vampires were corpses, who went out of their graves at night to suck the blood of the living, either at their throats or stomachs, after which they returned to their cemeteries. The persons so sucked waned, grew pale, and fell into consumption; while the sucking corpses grew fat, got rosy, and enjoyed an excellent appetite. It was in Poland, Hungary, Silesia, Moravia, Austria, and Lorraine, that the dead made this good cheer.
+
+ The controversy in Austria only ceased when Empress Maria Theresa of Austria sent her personal physician, Gerard van Swieten, to investigate the claims of vampiric entities. He concluded that vampires did not exist, and the Empress passed laws prohibiting the opening of graves and the desecration of bodies, bringing the vampire epidemics to an end. Other European countries followed suit. Despite this condemnation, the vampire lived on in artistic works and in local folklore.[77]
+
+ Beings having many of the attributes of European vampires appear in the folklore of Africa, Asia, North and South America, and India. Classified as vampires, all share the thirst for blood.[90]
+
+ Various regions of Africa have folktales featuring beings with vampiric abilities: in West Africa the Ashanti people tell of the iron-toothed and tree-dwelling asanbosam,[91] and the Ewe people of the adze, which can take the form of a firefly and hunts children.[92] The eastern Cape region has the impundulu, which can take the form of a large taloned bird and can summon thunder and lightning, and the Betsileo people of Madagascar tell of the ramanga, an outlaw or living vampire who drinks the blood and eats the nail clippings of nobles.[93]
+
+ The Loogaroo is an example of how a vampire belief can result from a combination of beliefs, here a mixture of French and African Vodu or voodoo. The term Loogaroo possibly comes from the French loup-garou (meaning "werewolf") and is common in the culture of Mauritius. The stories of the Loogaroo are widespread through the Caribbean Islands and Louisiana in the United States.[94] Similar female monsters are the Soucouyant of Trinidad, and the Tunda and Patasola of Colombian folklore, while the Mapuche of southern Chile have the bloodsucking snake known as the Peuchen.[95] Aloe vera hung backwards behind or near a door was thought to ward off vampiric beings in South American folklore.[28] Aztec mythology described tales of the Cihuateteo, skull-faced spirits of those who died in childbirth who stole children and entered into sexual liaisons with the living, driving them mad.[24]
+
+ During the late 18th and 19th centuries the belief in vampires was widespread in parts of New England, particularly in Rhode Island and eastern Connecticut. There are many documented cases of families disinterring loved ones and removing their hearts in the belief that the deceased was a vampire who was responsible for sickness and death in the family, although the term "vampire" was never used to describe the dead. The deadly disease tuberculosis, or "consumption" as it was known at the time, was believed to be caused by nightly visitations on the part of a dead family member who had died of consumption themselves.[96] The most famous, and most recently recorded, case of suspected vampirism is that of nineteen-year-old Mercy Brown, who died in Exeter, Rhode Island in 1892. Her father, assisted by the family physician, removed her from her tomb two months after her death, cut out her heart and burned it to ashes.[97]
+
+ Vampires have appeared in Japanese cinema since the late 1950s; the folklore behind them is western in origin.[98] The Nukekubi is a being whose head and neck detach from its body to fly about seeking human prey at night.[99] Legends of female vampiric beings who can detach parts of their upper body also occur in the Philippines, Malaysia and Indonesia. There are two main vampiric creatures in the Philippines: the Tagalog Mandurugo ("blood-sucker") and the Visayan Manananggal ("self-segmenter"). The mandurugo is a variety of the aswang that takes the form of an attractive girl by day, and develops wings and a long, hollow, threadlike tongue by night. The tongue is used to suck up blood from a sleeping victim.[100] The manananggal is described as being an older, beautiful woman capable of severing its upper torso in order to fly into the night with huge batlike wings and prey on unsuspecting, sleeping pregnant women in their homes. They use an elongated proboscislike tongue to suck fetuses from these pregnant women. They also prefer to eat entrails (specifically the heart and the liver) and the phlegm of sick people.[100]
+
+ The Malaysian Penanggalan is a woman who obtained her beauty through the active use of black magic or other unnatural means, and is most commonly described in local folklore to be dark or demonic in nature. She is able to detach her fanged head which flies around in the night looking for blood, typically from pregnant women.[101] Malaysians hung jeruju (thistles) around the doors and windows of houses, hoping the Penanggalan would not enter for fear of catching its intestines on the thorns.[102] The Leyak is a similar being from Balinese folklore of Indonesia.[103] A Kuntilanak or Matianak in Indonesia,[104] or Pontianak or Langsuir in Malaysia,[105] is a woman who died during childbirth and became undead, seeking revenge and terrorising villages. She appeared as an attractive woman with long black hair that covered a hole in the back of her neck, with which she sucked the blood of children. Filling the hole with her hair would drive her off. Corpses had their mouths filled with glass beads, eggs under each armpit, and needles in their palms to prevent them from becoming langsuir. This description would also fit the Sundel Bolongs.[106]
+
+ Jiangshi, sometimes called "Chinese vampires" by Westerners, are reanimated corpses that hop around, killing living creatures to absorb life essence (qì) from their victims. They are said to be created when a person's soul (魄 pò) fails to leave the deceased's body.[107] Jiangshi are usually represented as mindless creatures with no independent thought.[108] This monster has greenish-white furry skin, perhaps derived from fungus or mould growing on corpses.[109] Jiangshi legends have inspired a genre of jiangshi films and literature in Hong Kong and East Asia. Films like Encounters of the Spooky Kind and Mr. Vampire were released during the jiangshi cinematic boom of the 1980s and 1990s.[110][111]
+
+ In modern fiction, the vampire tends to be depicted as a suave, charismatic villain.[20] Despite the general disbelief in vampiric entities, occasional sightings of vampires are reported. Vampire hunting societies still exist, but they are largely formed for social reasons.[18] Allegations of vampire attacks swept through Malawi during late 2002 and early 2003, with mobs stoning one person to death and attacking at least four others, including Governor Eric Chiwaya, based on the belief that the government was colluding with vampires.[112]
+
+ In early 1970 local press spread rumours that a vampire haunted Highgate Cemetery in London. Amateur vampire hunters flocked in large numbers to the cemetery. Several books have been written about the case, notably by Sean Manchester, a local man who was among the first to suggest the existence of the "Highgate Vampire" and who later claimed to have exorcised and destroyed a whole nest of vampires in the area.[113] In January 2005, rumours circulated that an attacker had bitten a number of people in Birmingham, England, fuelling concerns about a vampire roaming the streets. Local police stated that no such crime had been reported and that the case appears to be an urban legend.[114]
+
+ In 2006, a physics professor at the University of Central Florida wrote a paper arguing that it is mathematically impossible for vampires to exist, based on geometric progression. According to the paper, if the first vampire had appeared on 1 January 1600, if it fed once a month (which is less often than what is depicted in films and folklore), and if every victim turned into a vampire, then within two and a half years the entire human population of the time would have become vampires.[115]
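+
+ As a rough illustration of the paper's arithmetic (a minimal sketch, not the paper's actual code), the following Python loop assumes a single vampire on 1 January 1600, one feeding per month, every victim turning, and a world population of roughly 536 million, a commonly cited estimate for 1600 that is an assumption here rather than a figure taken from the paper:
+
+ # Sketch of the geometric-progression argument: the vampire
+ # population doubles every month until no humans remain.
+ world_population_1600 = 536_000_000  # assumed estimate, for illustration only
+ vampires, months = 1, 0
+ humans = world_population_1600 - vampires
+ while humans > 0:
+     bitten = min(vampires, humans)  # each vampire turns one victim per month
+     humans -= bitten
+     vampires += bitten
+     months += 1
+ print(months, "months, about", round(months / 12, 1), "years")
+ # With this population estimate the loop ends after about 29-30 months,
+ # roughly the two and a half years the paper cites.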
+
+ In one of the more notable cases of vampiric entities in the modern age, the chupacabra ("goat-sucker") of Puerto Rico and Mexico is said to be a creature that feeds upon the flesh or drinks the blood of domesticated animals, leading some to consider it a kind of vampire. The "chupacabra hysteria" was frequently associated with deep economic and political crises, particularly during the mid-1990s.[116]
+
+ In Europe, where much of the vampire folklore originates, the vampire is usually considered a fictitious being; many communities may have embraced the revenant for economic purposes. In some cases, especially in small localities, beliefs are still rampant and sightings or claims of vampire attacks occur frequently. In Romania during February 2004, several relatives of Toma Petre feared that he had become a vampire. They dug up his corpse, tore out his heart, burned it, and mixed the ashes with water in order to drink it.[117]
+
+ In September and October 2017, mob violence in Malawi related to a vampire scare killed about six people accused of being vampires.[118] A similar spate of vigilante violence linked to vampire rumours had occurred there in 2002.[119]
+
+ Vampirism and the vampire lifestyle also represent a relevant part of modern-day occultist movements.[120] The mythos of the vampire, its magickal qualities, allure, and predatory archetype express a strong symbolism that can be used in ritual, energy work, and magick, and can even be adopted as a spiritual system.[121] The vampire has been part of occult society in Europe for centuries and has spread into the American subculture as well for more than a decade, strongly influenced by and mixed with neo-gothic aesthetics.[122]
+
+ "Coven" has been used as a collective noun for vampires, possibly based on the Wiccan usage. An alternative collective noun is a "house" of vampires.[123]
+
+ Commentators have offered many theories for the origins of vampire beliefs and related mass hysteria. Everything ranging from premature burial to the early ignorance of the body's decomposition cycle after death has been cited as the cause for the belief in vampires.[124]
+
+ In his book Vampires, Burial and Death, Paul Barber argued that belief in vampires resulted from people of pre-industrial societies attempting to explain the natural, but to them inexplicable, process of death and decomposition.[124]
+
+ People sometimes suspected vampirism when a cadaver did not look as they thought a normal corpse should when disinterred. Rates of decomposition vary depending on temperature and soil composition, and many of the signs are little known. This has led vampire hunters to mistakenly conclude that a dead body had not decomposed at all or, ironically, to interpret signs of decomposition as signs of continued life.[125]
+
+ Corpses swell as gases from decomposition accumulate in the torso and the increased pressure forces blood to ooze from the nose and mouth. This causes the body to look "plump", "well-fed", and "ruddy"—changes that are all the more striking if the person was pale or thin in life. In the Arnold Paole case, an old woman's exhumed corpse was judged by her neighbours to look more plump and healthy than she had ever looked in life.[126] The exuding blood gave the impression that the corpse had recently been engaging in vampiric activity.[33]
+
+ Darkening of the skin is also caused by decomposition.[127] The staking of a swollen, decomposing body could cause the body to bleed and force the accumulated gases to escape the body. This could produce a groan-like sound when the gases moved past the vocal cords, or a sound reminiscent of flatulence when they passed through the anus. The official reporting on the Petar Blagojevich case speaks of "other wild signs which I pass by out of high respect".[128]
+
+ After death, the skin and gums lose fluids and contract, exposing the roots of the hair, nails, and teeth, even teeth that were concealed in the jaw. This can produce the illusion that the hair, nails, and teeth have grown. At a certain stage, the nails fall off and the skin peels away, as reported in the Blagojevich case—the dermis and nail beds emerging underneath were interpreted as "new skin" and "new nails".[128]
+
+ It has also been hypothesized that vampire legends were influenced by individuals being buried alive because of shortcomings in the medical knowledge of the time. In some cases in which sounds were reported emanating from a specific coffin, it was later dug up and fingernail marks were discovered on the inside, left by the victim trying to escape. In other cases the person would strike their head, nose or face, making it appear that they had been "feeding".[129] A problem with this theory is the question of how people presumably buried alive managed to stay alive for any extended period without food, water or fresh air. An alternative explanation for the noises is the bubbling of escaping gases from the natural decomposition of bodies.[130] Another likely cause of disordered tombs is grave robbery.[131]
+
+ Folkloric vampirism has been associated with clusters of deaths from unidentifiable or mysterious illnesses, usually within the same family or the same small community.[96] The epidemic allusion is obvious in the classical cases of Petar Blagojevich and Arnold Paole, and even more so in the case of Mercy Brown and in the vampire beliefs of New England generally, where a specific disease, tuberculosis, was associated with outbreaks of vampirism. As with the pneumonic form of bubonic plague, it was associated with breakdown of lung tissue which would cause blood to appear at the lips.[132]
+
+ In 1985 biochemist David Dolphin proposed a link between the rare blood disorder porphyria and vampire folklore. Noting that the condition is treated by intravenous haem, he suggested that the consumption of large amounts of blood may result in haem being transported somehow across the stomach wall and into the bloodstream. Thus vampires were merely sufferers of porphyria seeking to replace haem and alleviate their symptoms.[133]
+
+ The theory has been rebutted medically: suggestions that porphyria sufferers crave the haem in human blood, or that the consumption of blood might ease the symptoms of porphyria, are based on a misunderstanding of the disease. Furthermore, Dolphin was noted to have confused fictional (bloodsucking) vampires with those of folklore, many of whom were not noted to drink blood.[134] Similarly, a parallel was drawn with sufferers' sensitivity to sunlight, yet this was a trait associated with fictional rather than folkloric vampires. In any case, Dolphin did not go on to publish his work more widely.[135] Despite being dismissed by experts, the link gained media attention[136] and entered popular modern folklore.[137]
+
+ Rabies has been linked with vampire folklore. Dr Juan Gómez-Alonso, a neurologist at Xeral Hospital in Vigo, Spain, examined this possibility in a report in Neurology. The susceptibility to garlic and light could be due to hypersensitivity, which is a symptom of rabies. The disease can also affect portions of the brain that could lead to disturbance of normal sleep patterns (thus becoming nocturnal) and hypersexuality. Legend once said a man was not rabid if he could look at his own reflection (an allusion to the legend that vampires have no reflection). Wolves and bats, which are often associated with vampires, can be carriers of rabies. The disease can also lead to a drive to bite others and to a bloody frothing at the mouth.[138][139]
+
+ In his 1931 treatise On the Nightmare, Welsh psychoanalyst Ernest Jones asserted that vampires are symbolic of several unconscious drives and defence mechanisms. Emotions such as love, guilt, and hate fuel the idea of the return of the dead from the grave. Desiring a reunion with loved ones, mourners may project the idea that the recently dead must in turn yearn for the same. From this arises the belief that folkloric vampires and revenants visit relatives, particularly their spouses, first.[140]
+
+ In cases where there was unconscious guilt associated with the relationship, the wish for reunion may be subverted by anxiety. This may lead to repression, which Sigmund Freud had linked with the development of morbid dread.[141] Jones surmised that in this case the original wish for a (sexual) reunion may be drastically changed: desire is replaced by fear, love is replaced by sadism, and the object or loved one is replaced by an unknown entity. The sexual aspect may or may not be present.[142] Some modern critics have proposed a simpler theory: people identify with immortal vampires because, by doing so, they overcome, or at least temporarily escape from, their fear of dying.[143]
+
+ The innate sexuality of bloodsucking can be seen in its intrinsic connection with cannibalism and in its folkloric link with incubus-like behaviour. Many legends report various beings draining other fluids from victims, an unconscious association with semen being obvious. Finally, Jones notes that when more normal aspects of sexuality are repressed, regressed forms may be expressed, in particular sadism; he felt that oral sadism is integral to vampiric behaviour.[144]
+
+ The reinvention of the vampire myth in the modern era is not without political overtones.[145] The aristocratic Count Dracula, alone in his castle apart from a few demented retainers, appearing only at night to feed on his peasantry, is symbolic of the parasitic ancien régime. In his entry for "Vampires" in the Dictionnaire philosophique (1764), Voltaire notes how the mid-18th century coincided with the decline of the folkloric belief in the existence of vampires, but that now "there were stock-jobbers, brokers, and men of business, who sucked the blood of the people in broad daylight; but they were not dead, though corrupted. These true suckers lived not in cemeteries, but in very agreeable palaces".[146]
+
+ Marx defined capital as "dead labour which, vampire-like, lives only by sucking living labour, and lives the more, the more labour it sucks".[147] Werner Herzog, in his Nosferatu the Vampyre, gives this political interpretation an extra ironic twist when protagonist Jonathan Harker, a middle-class solicitor, becomes the next vampire; in this way the capitalist bourgeois becomes the next parasitic class.[148]
+
+ A number of murderers have performed seemingly vampiric rituals upon their victims. Serial killers Peter Kürten and Richard Trenton Chase were both called "vampires" in the tabloids after they were discovered drinking the blood of the people they murdered. Similarly, in 1932, an unsolved murder case in Stockholm, Sweden was nicknamed the "Vampire murder", because of the circumstances of the victim's death.[149] The late-16th-century Hungarian countess and mass murderess Elizabeth Báthory became particularly infamous in later centuries' works, which depicted her bathing in her victims' blood in order to retain beauty or youth.[150]
+
+ Vampire lifestyle is a term for a contemporary subculture of people, largely within the Goth subculture, who consume the blood of others as a pastime, drawing from the rich recent history of popular culture related to cult symbolism, horror films, the fiction of Anne Rice, and the styles of Victorian England.[151] Active vampirism within the vampire subculture includes both blood-related vampirism, commonly referred to as sanguine vampirism, and psychic vampirism, or supposed feeding from pranic energy.[120][152]
+
+ Although many cultures have stories about them, vampire bats have only recently become an integral part of the traditional vampire lore. Vampire bats were integrated into vampire folklore after they were discovered on the South American mainland in the 16th century.[153] There are no vampire bats in Europe, but bats and owls have long been associated with the supernatural and omens, mainly because of their nocturnal habits,[153][154] and in modern English heraldic tradition, a bat means "Awareness of the powers of darkness and chaos".[155]
+
+ The three species of vampire bats are all endemic to Latin America, and there is no evidence to suggest that they had any Old World relatives within human memory. It is therefore impossible that the folkloric vampire represents a distorted presentation or memory of the vampire bat. The bats were named after the folkloric vampire rather than vice versa; the Oxford English Dictionary records the folkloric use in English from 1734 and the zoological use not until 1774. The vampire bat's bite is usually not harmful to a person, but the bat has been known to actively feed on humans and large prey such as cattle, and it often leaves its trademark two-prong bite mark on its victim's skin.[153]
+
+ The literary Dracula transforms into a bat several times in the novel, and vampire bats themselves are mentioned twice in it. The 1927 stage production of Dracula followed the novel in having Dracula turn into a bat, as did the film, where Béla Lugosi would transform into a bat.[153] The bat transformation scene was used again by Lon Chaney Jr. in 1943's Son of Dracula.[156]
+
+ The vampire is now a fixture in popular fiction. Such fiction began with 18th-century poetry and continued with 19th-century short stories, the first and most influential of which was John Polidori's "The Vampyre" (1819), featuring the vampire Lord Ruthven.[157] Lord Ruthven's exploits were further explored in a series of vampire plays in which he was the antihero. The vampire theme continued in penny dreadful serial publications such as Varney the Vampire (1847) and culminated in the pre-eminent vampire novel in history: Dracula by Bram Stoker, published in 1897.[158]
+
+ Over time, some attributes now regarded as integral became incorporated into the vampire's profile: fangs and vulnerability to sunlight appeared over the course of the 19th century, with Varney the Vampire and Count Dracula both bearing protruding teeth,[159] and Murnau's Nosferatu (1922) fearing daylight.[160] The cloak appeared in stage productions of the 1920s, with a high collar introduced by playwright Hamilton Deane to help Dracula 'vanish' on stage.[161] Lord Ruthven and Varney were able to be healed by moonlight, although no account of this is known in traditional folklore.[162] Implied though not often explicitly documented in folklore, immortality is one attribute which features heavily in vampire film and literature. Much is made of the price of eternal life, namely the incessant need for blood of former equals.[163]
+
+ The vampire or revenant first appeared in poems such as The Vampire (1748) by Heinrich August Ossenfelder, Lenore (1773) by Gottfried August Bürger, Die Braut von Corinth (The Bride of Corinth) (1797) by Johann Wolfgang von Goethe, Robert Southey's Thalaba the Destroyer (1801), John Stagg's "The Vampyre" (1810), Percy Bysshe Shelley's "The Spectral Horseman" (1810) ("Nor a yelling vampire reeking with gore") and "Ballad" in St. Irvyne (1811) about a reanimated corpse, Sister Rosa, Samuel Taylor Coleridge's unfinished Christabel and Lord Byron's The Giaour.[164]
+
+ Byron was also credited with the first prose fiction piece concerned with vampires: "The Vampyre" (1819). This was in reality authored by Byron's personal physician, John Polidori, who adapted an enigmatic fragmentary tale of his illustrious patient, "Fragment of a Novel" (1819), also known as "The Burial: A Fragment".[18][158] Byron's own dominating personality, mediated by his lover Lady Caroline Lamb in her unflattering roman à clef Glenarvon (a Gothic fantasia based on Byron's wild life), was used as a model for Polidori's undead protagonist Lord Ruthven. The Vampyre was highly successful and the most influential vampire work of the early 19th century.[165]
+
+ Varney the Vampire was a popular landmark mid-Victorian era gothic horror story by James Malcolm Rymer and Thomas Peckett Prest, which first appeared from 1845 to 1847 in a series of pamphlets generally referred to as penny dreadfuls because of their inexpensive price and typically gruesome contents.[157] The story was published in book form in 1847 and runs to 868 double-columned pages. It has a distinctly suspenseful style, using vivid imagery to describe the horrifying exploits of Varney.[162] Another important addition to the genre was Sheridan Le Fanu's lesbian vampire story Carmilla (1871). Like Varney before her, the vampiress Carmilla is portrayed in a somewhat sympathetic light as the compulsion of her condition is highlighted.[166]
+
+ No effort to depict vampires in popular fiction was as influential or as definitive as Bram Stoker's Dracula (1897).[167] Its portrayal of vampirism as a disease of contagious demonic possession, with its undertones of sex, blood and death, struck a chord in Victorian Europe where tuberculosis and syphilis were common. The vampiric traits described in Stoker's work merged with and dominated folkloric tradition, eventually evolving into the modern fictional vampire.[157]
+
+ Drawing on past works such as The Vampyre and Carmilla, Stoker began to research his new book in the late 19th century, reading works such as The Land Beyond the Forest (1888) by Emily Gerard and other books about Transylvania and vampires. In London, a colleague mentioned to him the story of Vlad Ţepeş, the "real-life Dracula", and Stoker immediately incorporated this story into his book. The first chapter of the book was omitted when it was published in 1897, but it was released in 1914 as "Dracula's Guest".[168] Many experts believe this deleted opening was based on the Austrian princess Eleonore von Schwarzenberg.[169]
+
+ The latter part of the 20th century saw the rise of multi-volume vampire epics. The first of these was Gothic romance writer Marilyn Ross's Barnabas Collins series (1966–71), loosely based on the contemporary American TV series Dark Shadows. It also set the trend for seeing vampires as poetic tragic heroes rather than as the more traditional embodiment of evil. This formula was followed in novelist Anne Rice's highly popular and influential Vampire Chronicles (1976–2003).[170]
+
+ The 21st century brought more examples of vampire fiction, such as J. R. Ward's Black Dagger Brotherhood series, and other highly popular vampire books which appeal to teenagers and young adults. Such vampiric paranormal romance novels and allied vampiric chick-lit and vampiric occult detective stories are a remarkably popular and ever-expanding contemporary publishing phenomenon.[171] L. A. Banks' The Vampire Huntress Legend Series, Laurell K. Hamilton's erotic Anita Blake: Vampire Hunter series, and Kim Harrison's The Hollows series portray the vampire in a variety of new perspectives, some of them unrelated to the original legends. The vampires of the Twilight series (2005–2008) by Stephenie Meyer are unaffected by garlic and crosses and are not harmed by sunlight, although it does reveal their supernatural status.[172] Richelle Mead further deviates from traditional vampires in her Vampire Academy series (2007–present), basing the novels on Romanian lore with two races of vampires, one good and one evil, as well as half-vampires.[173]
+
+ Considered one of the preeminent figures of the classic horror film, the vampire has proven to be a rich subject for the film and gaming industries. Dracula is a major character in more films than any other but Sherlock Holmes, and many early films were either based on the novel Dracula or closely derived from it. These included the 1922 German silent film Nosferatu, directed by F. W. Murnau and featuring the first film portrayal of Dracula—although names and characters were intended to mimic Dracula's, Murnau could not obtain permission to do so from Stoker's widow, and had to alter many aspects of the story for the film. Universal's Dracula (1931), starring Béla Lugosi as the Count, was the first talking film to portray Dracula. The decade saw several more vampire films, most notably Dracula's Daughter in 1936.[174]
+
+ The legend of the vampire continued through the film industry when Dracula was reincarnated in the Hammer Horror series of films, starring Christopher Lee as the Count. The successful 1958 Dracula starring Lee was followed by seven sequels. Lee returned as Dracula in all but two of these and became well known in the role.[175] By the 1970s, vampires in films had diversified with works such as Count Yorga, Vampire (1970), an African Count in 1972's Blacula, the BBC's Count Dracula featuring French actor Louis Jourdan as Dracula and Frank Finlay as Abraham Van Helsing, a Nosferatu-like vampire in 1979's Salem's Lot, and a remake of Nosferatu itself, Nosferatu the Vampyre (1979), starring Klaus Kinski. Several films featured the characterization of a female, often lesbian, vampire, such as Hammer Horror's The Vampire Lovers (1970), based on Carmilla, though the plotlines still revolved around a central evil vampire character.[175]
+
+ The Gothic soap opera Dark Shadows, on American television from 1966 to 1971 and produced by Dan Curtis, featured the vampire character Barnabas Collins, portrayed by Canadian actor Jonathan Frid, which proved partly responsible for making the series one of the most popular of its type, amassing a total of 1,225 episodes in its nearly five-year run. The pilot for the later Dan Curtis 1972 television series Kolchak: The Night Stalker revolved around reporter Carl Kolchak hunting a vampire on the Las Vegas Strip. Later films showed more diversity in plotline, with some focusing on the vampire-hunter, such as Blade in the Marvel Comics' Blade films and the film Buffy the Vampire Slayer.[157] Buffy, released in 1992, foreshadowed a vampiric presence on television, with its adaptation to a long-running hit series of the same name and its spin-off Angel. Still others showed the vampire as a protagonist, such as 1983's The Hunger, 1994's Interview with the Vampire and its indirect sequel of sorts Queen of the Damned, and the 2007 series Moonlight. The 1992 film Bram Stoker's Dracula became the then-highest grossing vampire film ever.[176]
+
+ In his documentary Vampire Princess (2007), the investigative Austrian author and director Klaus T. Steindl traced the historical inspiration for Bram Stoker's legendary Dracula character (see also the deleted opening "Dracula's Guest", discussed above[168]): "Many experts believe, the deleted opening was actually based on a woman. Archaeologists, historians, and forensic scientists revisit the days of vampire hysteria in the eighteenth-century Czech Republic and re-open the unholy grave of dark princess Eleonore von Schwarzenberg. They uncover her story, once buried and long forgotten, now raised from the dead."[169]
+
+ This increase of interest in vampiric plotlines led to the vampire being depicted in films such as Underworld and Van Helsing, the Russian Night Watch and a TV miniseries remake of Salem's Lot, both from 2004. The series Blood Ties premiered on Lifetime Television in 2007, featuring Henry Fitzroy, an illegitimate son of Henry VIII of England turned vampire, in modern-day Toronto, with a female former Toronto detective in the starring role. A 2008 series from HBO, entitled True Blood, gives a Southern Gothic take on the vampire theme.[172]
+
+ In 2008 the BBC Three series Being Human became popular in Britain. It featured an unconventional trio of a vampire, a werewolf and a ghost sharing a flat in Bristol.[177][178] Another popular vampire-related show is the CW's The Vampire Diaries. The continuing popularity of the vampire theme has been ascribed to a combination of two factors: the representation of sexuality and the perennial dread of mortality.[179]
+
+ The role-playing game Vampire: The Masquerade has been influential upon modern vampire fiction and elements of its terminology, such as embrace and sire, appear in contemporary fiction.[157] Popular video games about vampires include Castlevania, which is an extension of the original Bram Stoker novel Dracula, and Legacy of Kain.[180] The role-playing game Dungeons & Dragons features vampires.[181]
+
+ Notes
+
+ Bibliography