de-francophones committed on
Commit 63e28c0
1 Parent(s): fd65c8a

a5d4a07398ae925224b5d6ad64abf183e1ffa1e90b368e7ca4238419b2f27b8c

en/5319.html.txt ADDED
@@ -0,0 +1,33 @@
+ A bucket is typically a watertight, vertical cylinder, truncated cone, or square prism, with an open top and a flat bottom, attached to a semicircular carrying handle called the bail.[1][2]
+
+ A bucket is usually an open-top container. In contrast, a pail can have a top or lid and is a shipping container. In common usage, the two terms are often used interchangeably.
+
+ There are many types of buckets.
+
+ As a shipping container, the word "pail" is a technical term for a bucket-shaped package with a sealed top or lid which is used as a shipping container for chemicals and industrial products.[4]
+
+ Roman bronze situla from Germany, 2nd–3rd century
+
+ A wooden bucket
+
+ German 19th-century leather fire-buckets. Along with wood, leather was the most common material for buckets before modern times.
+
+ A man carrying two buckets
+
+ A young lady carrying a bucket, drawing by the German artist Heinrich Zille
+
+ A mop bucket with a wringer
+
+ An excavator bucket
+
+ A crusher bucket
+
+ A helicopter bucket
+
+ A plastic yellow bucket
+
+ A metal bucket
+
+ The bucket has been used in many phrases and idioms in the English language.[5]
+
+ As an obsolete unit of measurement, at least one source documents a bucket as being equivalent to 4 imperial gallons (about 18.2 litres).[6]
en/532.html.txt ADDED
@@ -0,0 +1,133 @@
+ The Ballets Russes (French: [balɛ ʁys]) was an itinerant ballet company based in Paris that performed between 1909 and 1929 throughout Europe and on tours to North and South America. The company never performed in Russia, where the Revolution disrupted society. After its initial Paris season, the company had no formal ties there.[1]
+
+ Originally conceived by impresario Sergei Diaghilev, the Ballets Russes is widely regarded as the most influential ballet company of the 20th century,[2] in part because it promoted ground-breaking artistic collaborations among young choreographers, composers, designers, and dancers, all at the forefront of their several fields. Diaghilev commissioned works from composers such as Igor Stravinsky, Claude Debussy, Sergei Prokofiev, Erik Satie, and Maurice Ravel, artists such as Vasily Kandinsky, Alexandre Benois, Pablo Picasso, and Henri Matisse, and costume designers Léon Bakst and Coco Chanel.
+
+ The company's productions created a huge sensation, completely reinvigorating the art of performing dance, bringing many visual artists to public attention, and significantly affecting the course of musical composition. It also introduced European and American audiences to tales, music, and design motifs drawn from Russian folklore. The influence of the Ballets Russes lasts to the present day.
+
+ The French plural form of the name, "Ballets Russes," specifically refers to the company founded by Sergei Diaghilev and active during his lifetime. (In some publicity the company was advertised as Les Ballets Russes de Serge Diaghileff.) In English, the company is now commonly referred to as "the Ballets Russes" (plural, without italics), although in the early part of the 20th century, it was sometimes referred to as "The Russian Ballet" or "Diaghilev's Russian Ballet." To add to the confusion, some publicity material spelled the name in the singular.
+
+ The names "Ballet Russe de Monte-Carlo" and "Original Ballet Russe" (using the singular) refer to companies that formed after Diaghilev's death in 1929.
+
+ Sergei Diaghilev, the company's impresario (or "artistic director" in modern terms), was chiefly responsible for its success. He was uniquely prepared for the role; born into a wealthy Russian family of vodka distillers (though they went bankrupt when he was 18), he was accustomed to moving in the upper-class circles that provided the company's patrons and benefactors.
+
+ In 1890, he enrolled at the Faculty of Law, St. Petersburg, to prepare for a career in the civil service like many Russian young men of his class.[3] There he was introduced (through his cousin Dmitry Filosofov) to a student clique of artists and intellectuals calling themselves The Nevsky Pickwickians, whose most influential member was Alexandre Benois; others included Léon Bakst, Walter Nouvel, and Konstantin Somov.[4] From childhood, Diaghilev had been passionately interested in music. However, his ambition to become a composer was dashed in 1894 when Nikolai Rimsky-Korsakov told him he had no talent.[5]
+
+ In 1898, several members of The Pickwickians founded the journal Mir iskusstva (World of Art) under the editorship of Diaghilev.[6] As early as 1902, Mir iskusstva included reviews of concerts, operas, and ballets in Russia. The latter were chiefly written by Benois, who exerted considerable influence on Diaghilev's thinking.[7] Mir iskusstva also sponsored exhibitions of Russian art in St. Petersburg, culminating in Diaghilev's important 1905 show of Russian portraiture at the Tauride Palace.[8]
+
+ Frustrated by the extreme conservatism of the Russian art world, Diaghilev organized the groundbreaking Exhibition of Russian Art at the Petit Palais in Paris in 1906, the first major showing of Russian art in the West. Its enormous success created a Parisian fascination with all things Russian. Diaghilev organized a 1907 season of Russian music at the Paris Opéra. In 1908, Diaghilev returned to the Paris Opéra with six performances of Modest Mussorgsky's opera Boris Godunov, starring basso Fyodor Chaliapin. This was Nikolai Rimsky-Korsakov's 1908 version (with additional cuts and re-arrangement of the scenes). The performances were a sensation, though the costs of producing grand opera were crippling.
+
+ In 1909, Diaghilev presented his first Paris "Saison Russe" devoted exclusively to ballet (although the company did not use the name "Ballets Russes" until the following year). Most of this original company were resident performers at the Imperial Ballet of Saint Petersburg, hired by Diaghilev to perform in Paris during the Imperial Ballet's summer holidays. The first season's repertory featured a variety of works chiefly choreographed by Michel Fokine, including Le Pavillon d'Armide, the Polovtsian Dances (from Prince Igor), Les Sylphides, and Cléopâtre. The season also included Le Festin, a pastiche set by several choreographers (including Fokine) to music by several Russian composers.
+
+ The principal productions are shown in the table below.
+
+ Léon Bakst (costumes)
+
+ Alexandre Benois (costumes)
+
+ Ivan Bilibin (costumes)
+
+ Edvard Grieg (Småtroll, op. 71/3, from Lyric Pieces, Book X) (orch. Igor Stravinsky for "Variation")
+
+ Michel Fokine
+
+ Léon Bakst (costumes)
+
+ Léon Bakst (costumes)
+
+ Alexander Golovin (sets and costumes)
+
+ Natalia Goncharova (costumes)
+
+ Gabrielle Chanel (costumes)
+
+ Pablo Picasso (sets)
+
+ Joan Miró (sets and costumes)
+
+ Coco Chanel (costumes)
+
+ Juan Gris (costumes)
+
+ When Sergei Diaghilev died of diabetes in Venice on 19 August 1929, the Ballets Russes was left with substantial debts. As the Great Depression began, its property was claimed by its creditors and the company of dancers dispersed.
+
+ In 1931, Colonel Wassily de Basil (a Russian émigré entrepreneur from Paris) and René Blum (ballet director at the Monte Carlo Opera) founded the Ballets Russes de Monte-Carlo, giving its first performances there in 1932.[9] Diaghilev alumni Léonide Massine and George Balanchine worked as choreographers with the company, and Tamara Toumanova was a principal dancer.
+
+ Artistic differences led to a split between Blum and de Basil.[10] Blum retained the name "Ballet Russe de Monte Carlo", while de Basil formed a new company, initially named "Ballets Russes de Colonel W. de Basil".[11] In 1938, he called it "The Covent Garden Russian Ballet"[11] and then renamed it the "Original Ballet Russe" in 1939.[11][12]
+
+ After World War II began, the Ballet Russe de Monte Carlo left Europe and toured extensively in the United States and South America. As dancers retired and left the company, they often founded dance studios in the United States or South America or taught at other former company dancers' studios. With Balanchine's founding of the School of American Ballet, and later the New York City Ballet, many outstanding former Ballet Russe de Monte Carlo dancers went to New York to teach in his school. When the company toured the United States, the film actress and dancer Cyd Charisse was taken into the cast.
+
+ The Original Ballet Russe toured mostly in Europe. Its alumni were influential in teaching classical Russian ballet technique in European schools.
+
+ The successor companies were the subject of the 2005 documentary film Ballets Russes.
+
+ The Ballets Russes was noted for the high standard of its dancers, most of whom had been classically trained at the great Imperial schools in Moscow and St. Petersburg. Their high technical standards contributed a great deal to the company's success in Paris, where dance technique had declined markedly since the 1830s.
+
+ Principal female dancers included Anna Pavlova, Tamara Karsavina, Olga Spessivtseva, Mathilde Kschessinska, Ida Rubinstein, Bronislava Nijinska, Lydia Lopokova, Diana Gould, Sophie Pflanz, and Alicia Markova, among others; many earned international renown with the company, including Ekaterina Galanta and Valentina Kachouba.[13][14] Prima ballerina Xenia Makletzova was dismissed from the company in 1916 and sued by Diaghilev; she countersued for breach of contract and won $4,500 in a Massachusetts court.[15][16]
+
+ The Ballets Russes was even more remarkable for raising the status of the male dancer, largely ignored by choreographers and ballet audiences since the early 19th century. Among the male dancers were Michel Fokine, Serge Lifar, Léonide Massine, Anton Dolin, George Balanchine, Valentin Zeglovsky, Theodore Kosloff, Adolph Bolm, and the legendary Vaslav Nijinsky, considered the most popular and talented dancer in the company's history.
+
+ In later years, younger dancers were drawn from those trained in Paris by former Imperial dancers within the large community of exiles who had fled the Russian Revolution of 1917. Recruits were even accepted from America; these included a young Ruth Page, who joined the troupe in Monte Carlo during 1925.[17][18][19]
+
+ The company featured and premiered now-famous (and sometimes notorious) works by the great choreographers Marius Petipa and Michel Fokine, as well as new works by Vaslav Nijinsky, Bronislava Nijinska, Léonide Massine, and the young George Balanchine at the start of his career.
+
+ The choreography of Michel Fokine was of paramount importance in the initial success of the Ballets Russes. Fokine had graduated from the Imperial Ballet School in Saint Petersburg in 1898 and eventually became First Soloist at the Mariinsky Theater. In 1907, Fokine choreographed his first work for the Imperial Russian Ballet, Le Pavillon d'Armide. In the same year, he created Chopiniana to piano music by the composer Frédéric Chopin as orchestrated by Alexander Glazunov. This was an early example of creating choreography to an existing score rather than to music specifically written for the ballet, a departure from the normal practice at the time.
+
+ Fokine established an international reputation with his works choreographed during the first four seasons (1909–1912) of the Ballets Russes. These included the Polovtsian Dances (from Prince Igor), Le Pavillon d'Armide (a revival of his 1907 production for the Imperial Russian Ballet), Les Sylphides (a reworking of his earlier Chopiniana), The Firebird, Le Spectre de la Rose, Petrushka, and Daphnis and Chloé. After a longstanding tumultuous relationship with Diaghilev, Fokine left the Ballets Russes at the end of the 1912 season.[20]
+
+ Vaslav Nijinsky had attended the Imperial Ballet School in St. Petersburg from the age of eight. He graduated in 1907 and joined the Imperial Ballet, where he immediately began to take starring roles. Diaghilev invited him to join the Ballets Russes for its first Paris season.
+
+ In 1912, Diaghilev gave Nijinsky his first opportunity as a choreographer, for his production of L'Après-midi d'un faune to Claude Debussy's symphonic poem Prélude à l'après-midi d'un faune. Featuring Nijinsky himself as the Faun, the ballet's frankly erotic nature caused a sensation.[citation needed] The following year, Nijinsky choreographed a new work by Debussy composed expressly for the Ballets Russes, Jeux. Indifferently received by the public, Jeux was eclipsed two weeks later by the premiere of Igor Stravinsky's The Rite of Spring (Le Sacre du printemps), also choreographed by Nijinsky.
+
+ Nijinsky eventually retired from dance and choreography after he was diagnosed with schizophrenia in 1919.
+
+ Léonide Massine was born in Moscow,[21] where he studied both acting and dancing at the Imperial School. On the verge of becoming an actor, Massine was invited by Sergei Diaghilev to join the Ballets Russes, as Diaghilev was seeking a replacement for Vaslav Nijinsky. Diaghilev encouraged Massine's creativity and his entry into choreography.
+
+ Massine's most famous creations for the Ballets Russes were Parade, El sombrero de tres picos, and Pulcinella. In all three of these works, he collaborated with Pablo Picasso, who designed the sets and costumes.
+
+ Massine extended Fokine's choreographic innovations, especially those relating to narrative and character. His ballets incorporated both folk dance and demi-caractère dance, a style using classical technique to perform character dance. Massine created contrasts in his choreography, such as synchronized yet individual movement, or small-group dance patterns within the corps de ballet.
+
+ Bronislava Nijinska was the younger sister of Vaslav Nijinsky. She trained at the Imperial Ballet School in St. Petersburg, joining the Imperial Ballet company in 1908. From 1909, she (like her brother) was a member of Diaghilev's Ballets Russes.
+
+ In 1915, Nijinska and her husband fled to Kiev to escape World War I. There, she founded the École de mouvement, where she trained Ukrainian artists in modern dance. Her most prominent pupil was Serge Lifar (who later joined the Ballets Russes in 1923).
+
+ Following the Russian Revolution, Nijinska fled again to Poland, and then, in 1921, re-joined the Ballets Russes in Paris. In 1923, Diaghilev assigned her the choreography of Stravinsky's Les Noces. The result combines elements of her brother's choreography for The Rite of Spring with more traditional aspects of ballet, such as dancing en pointe. The following year, she choreographed three new works for the company: Les biches, Les Fâcheux, and Le train bleu.
+
+ Born Giorgi Melitonovitch Balanchivadze in Saint Petersburg, George Balanchine was trained at the Imperial School of Ballet. His education there was interrupted by the Russian Revolution of 1917. Balanchine graduated in 1921, after the school reopened. He subsequently studied music theory, composition, and advanced piano at the Petrograd Conservatory, graduating in 1923. During this time, he worked with the corps de ballet of the Mariinsky Theater. In 1924, Balanchine (and his first wife, ballerina Tamara Geva) fled to Paris while on a tour of Germany with the Soviet State Dancers. He was invited by Sergei Diaghilev to join the Ballets Russes as a choreographer.[22]
+
+ Diaghilev invited the collaboration of contemporary fine artists in the design of sets and costumes. These included Alexandre Benois, Léon Bakst, Nicholas Roerich, Georges Braque, Natalia Goncharova, Mikhail Larionov, Pablo Picasso, Coco Chanel, Henri Matisse, André Derain, Joan Miró, Giorgio de Chirico, Salvador Dalí, Ivan Bilibin, Pavel Tchelitchev, Maurice Utrillo, and Georges Rouault.
+
+ Their designs contributed to the groundbreaking excitement of the company's productions. The scandal caused by the premiere performance in Paris of Stravinsky's The Rite of Spring has been partly attributed to the provocative aesthetic of the costumes of the Ballets Russes.[23]
+
+ Alexandre Benois had been the most influential member of The Nevsky Pickwickians and was one of the original founders (with Bakst and Diaghilev) of Mir iskusstva. His particular interest in ballet as an art form strongly influenced Diaghilev and was seminal in the formation of the Ballets Russes. In addition, Benois contributed scenic and costume designs to several of the company's earlier productions: Le Pavillon d'Armide, portions of Le Festin, and Giselle. Benois also participated with Igor Stravinsky and Michel Fokine in the creation of Petrushka, to which he contributed much of the scenario as well as the stage sets and costumes.
+
+ Léon Bakst was also an original member of both The Nevsky Pickwickians and Mir iskusstva. He participated as designer in productions of the Ballets Russes from its beginning in 1909 until 1921, creating sets and costumes for Scheherazade, The Firebird, Les Orientales, Le Spectre de la rose, L'Après-midi d'un faune, and Daphnis et Chloé, among other productions.
+
+ In 1917, Pablo Picasso designed sets and costumes in the Cubist style for three Diaghilev ballets, all with choreography by Léonide Massine: Parade, El sombrero de tres picos, and Pulcinella.
+
+ Natalia Goncharova was born in 1881 near Tula, Russia. Her art was inspired by Russian folk art, Fauvism, and Cubism. She began designing for the Ballets Russes in 1921.
+
+ Although the Ballets Russes firmly established the 20th-century tradition of fine art theatre design, the company was not unique in its employment of fine artists. For instance, Savva Mamontov's Private Opera Company had made a policy of employing fine artists, such as Konstantin Korovin and Alexander Golovin, who went on to work for the Ballets Russes.
+
+ For his new productions, Diaghilev commissioned the foremost composers of the 20th century, including Debussy, Milhaud, Poulenc, Prokofiev, Ravel, Satie, Respighi, Stravinsky, de Falla, and Strauss. He was also responsible for commissioning the first two significant British-composed ballets: Romeo and Juliet (composed in 1925 by nineteen-year-old Constant Lambert) and The Triumph of Neptune (composed in 1926 by Lord Berners).
+
+ The impresario also engaged conductors who were or became eminent in their field during the 20th century, including Pierre Monteux (1911–16 and 1924), Ernest Ansermet (1915–23), Edward Clark (1919–20) and Roger Désormière (1925–29).[24]
+
+ Diaghilev hired the young Stravinsky, at a time when he was virtually unknown, to compose the music for The Firebird after the composer Anatoly Lyadov proved unreliable; the commission was instrumental in launching Stravinsky's career in Europe and the United States of America.
+
+ Stravinsky's early ballet scores were the subject of much discussion. The Firebird (1910) was seen as an astonishingly accomplished work for such a young artist (Debussy is said to have remarked drily: "Well, you've got to start somewhere!"). Many contemporary audiences found Petrushka (1911) to be almost unbearably dissonant and confused. The Rite of Spring (1913) nearly caused an audience riot. It stunned people because of its willful rhythms and aggressive dynamics. The audience's negative reaction to it is now regarded as a theatrical scandal as notorious as the failed runs of Richard Wagner's Tannhäuser at Paris in 1861 and Jean-Georges Noverre's Les Fêtes Chinoises in London on the eve of the Seven Years' War. However, Stravinsky's early ballet scores are now widely considered masterpieces of the genre.[25]
+
+ Diaghilev always maintained that no camera could ever do justice to the artistry of his dancers, and it was long believed there was no film legacy of the Ballets Russes. However, in 2011 a 30-second newsreel film of a performance in Montreux, Switzerland, in June 1928 came to light. The ballet was Les Sylphides and the lead dancer was identified as Serge Lifar.[26]
+
+ Paris, 2008: In September 2008, on the eve of the 100th anniversary of the creation of the Ballets Russes, Sotheby's announced the staging of an exceptional exhibition of works lent mainly by French, British and Russian private collectors, museums and foundations. Some 150 paintings, designs, costumes, theatre decors, drawings, sculptures, photographs, manuscripts, and programs were exhibited in Paris, retracing the key moments in the history of the Ballets Russes. On display were costumes designed by André Derain (La Boutique fantasque, 1919), Henri Matisse (Le chant du rossignol, 1920), and Léon Bakst.
+
+ Posters recalling the surge of creativity that surrounded the Ballets Russes included Pablo Picasso's iconic image of the Chinese Conjuror for the audacious production of Parade and Jean Cocteau's poster for Le Spectre de la rose. Costumes and stage designs presented included works by Alexandre Benois, for Le Pavillon d'Armide and Petrushka; Léon Bakst, for La Péri and Le Dieu bleu; Mikhail Larionov, for Le Soleil à Minuit; and Natalia Goncharova, for The Firebird (1925 version). The exhibition also included important contemporary artists whose works reflected the visual heritage of the Ballets Russes – notably an installation made of colorfully painted paper by the renowned Belgian artist Isabelle de Borchgrave, and items from the Imperial Porcelain Factory in St. Petersburg.[27]
+
+ Monte-Carlo, 2009: In May 2009, Monaco issued two postage stamps marking the centenary of Diaghilev's Ballets Russes, created by Georgy Shishkin.
+
+ London, 2010–11: London's Victoria and Albert Museum presented a special exhibition entitled Diaghilev and the Golden Age of the Ballets Russes, 1909–1929 at the V&A South Kensington between 5 September 2010 and 9 January 2011.
+
+ Canberra, 2010–11: An exhibition of the company's costumes held by the National Gallery of Australia ran from 10 December 2010 to 1 May 2011 at the Gallery in Canberra. Entitled Ballets Russes: The Art of Costume, it included 150 costumes and accessories from 34 productions from 1909 to 1939; one third of the costumes had not been seen since they were last worn on stage. Along with costumes by Natalia Goncharova, Pablo Picasso, Henri Matisse, André Derain, Georges Braque, André Masson and Giorgio de Chirico, the exhibition also featured photographs, film, music and artists' drawings.[28]
+
+ Washington, DC, 2013: Diaghilev and the Ballets Russes, 1909–1929: When Art Danced with Music. National Gallery of Art, East Building Mezzanine, 12 May – 2 September 2013. Organized by the Victoria and Albert Museum, London, in collaboration with the National Gallery of Art, Washington.[1]
+
+ Stockholm, 2014–15: Sleeping Beauties – Dreams and Costumes. The Dance Museum in Stockholm owns about 250 original costumes from the Ballets Russes; about fifty of them were shown in this exhibition. (www.dansmuseet.se)
en/5320.html.txt ADDED
@@ -0,0 +1,65 @@
+ The Mesozoic Era (/ˌmɛz.əˈzoʊ.ɪk, ˌmɛz.oʊ-, ˌmɛs-, ˌmiː.zə-, -zoʊ-, ˌmiː.sə-, -soʊ-/ mez-ə-ZOH-ik)[1][2] is an interval of geological time from about 252 to 66 million years ago. It is also called the Age of Reptiles and the Age of Conifers.[3]
+
+ The Mesozoic ("middle life") is one of three geologic eras of the Phanerozoic Eon, preceded by the Paleozoic ("ancient life") and succeeded by the Cenozoic ("new life"). The era is subdivided into three major periods: the Triassic, Jurassic, and Cretaceous, which are further subdivided into a number of epochs and stages.
+
+ The era began in the wake of the Permian–Triassic extinction event, the largest well-documented mass extinction in Earth's history, and ended with the Cretaceous–Paleogene extinction event, another mass extinction whose victims included the non-avian dinosaurs. The Mesozoic was a time of significant tectonic, climate, and evolutionary activity. The era witnessed the gradual rifting of the supercontinent Pangaea into separate landmasses that would move into their current positions during the next era. The climate of the Mesozoic was varied, alternating between warming and cooling periods. Overall, however, the Earth was hotter than it is today. Dinosaurs first appeared in the Mid-Triassic, and became the dominant terrestrial vertebrates in the Late Triassic or Early Jurassic, occupying this position for about 150 or 135 million years until their demise at the end of the Cretaceous. Birds first appeared in the Jurassic (however, true toothless birds appeared first in the Cretaceous), having evolved from a branch of theropod dinosaurs. The first mammals also appeared during the Mesozoic, but would remain small—less than 15 kg (33 lb)—until the Cenozoic. The flowering plants (angiosperms) arose in the Triassic or Jurassic and came to prominence in the late Cretaceous when they replaced the conifers and other gymnosperms as the dominant trees.
+
+ The phrase "Age of Reptiles" was introduced by the 19th-century paleontologist Gideon Mantell, who viewed it as dominated by diapsids such as Iguanodon, Megalosaurus, Plesiosaurus, and Pterodactylus.
+
+ Mesozoic means "middle life", deriving from the Greek prefix meso-/μεσο- for "between" and zōon/ζῷον meaning "animal" or "living being". The name "Mesozoic" was proposed in 1840 by the British geologist John Phillips (1800–1874).[4][5]
+
+ The Mesozoic era was originally described as the "secondary" era, following the primary or Paleozoic, and preceding the Tertiary.[6]
+
+ Following the Paleozoic, the Mesozoic extended roughly 186 million years, from 251.902 to 66 million years ago, when the Cenozoic Era began. This time frame is separated into three geologic periods, from oldest to youngest: the Triassic, the Jurassic, and the Cretaceous.
+
+ The lower boundary of the Mesozoic is set by the Permian–Triassic extinction event, during which approximately 90% to 96% of marine species and 70% of terrestrial vertebrates became extinct.[7] It is also known as the "Great Dying" because it is considered the largest mass extinction in the Earth's history. The upper boundary of the Mesozoic is set at the Cretaceous–Paleogene extinction event (or K–Pg extinction event[8]), which may have been caused by an asteroid impactor that created the Chicxulub Crater on the Yucatán Peninsula. Towards the Late Cretaceous, large volcanic eruptions are also believed to have contributed to the Cretaceous–Paleogene extinction event. Approximately 50% of all genera became extinct, including all of the non-avian dinosaurs.
+
+ The Triassic ranges roughly from 252 million to 201 million years ago, preceding the Jurassic Period. The period is bracketed between the Permian–Triassic extinction event and the Triassic–Jurassic extinction event, two of the "big five", and it is divided into three major epochs: Early, Middle, and Late Triassic.[9]
+
+ The Early Triassic, about 252 to 247 million years ago, was dominated by deserts in the interior of the Pangaea supercontinent. The Earth had just witnessed a massive die-off in which 95% of all life became extinct, and the most common vertebrates on land were Lystrosaurus, labyrinthodonts, and Euparkeria, along with many other creatures that managed to survive the Permian extinction. Temnospondyls flourished during this time and would be the dominant predators for much of the Triassic.[10]
+
+ The Middle Triassic, from 247 to 237 million years ago, featured the beginnings of the breakup of Pangaea and the opening of the Tethys Ocean. Ecosystems had recovered from the Permian extinction. Algae, sponges, corals, and crustaceans all had recovered, and new aquatic reptiles evolved, such as ichthyosaurs and nothosaurs. On land, pine forests flourished, as did groups of insects like mosquitoes and fruit flies. Reptiles began to get bigger and bigger, and the first crocodilians and dinosaurs evolved, sparking competition with the large amphibians that had previously ruled the freshwater world and with the mammal-like reptiles on land.[11]
+
+ Following the bloom of the Middle Triassic, the Late Triassic, from 237 to 201 million years ago, featured frequent heat spells and moderate precipitation (10–20 inches per year). The recent warming led to a boom of dinosaurian evolution on land as the group began to diversify (Nyasasaurus lived from about 243 to 210 million years ago, and around 235–230 million years ago dinosaurs separated into sauropodomorphs, theropods, and herrerasaurids), as well as the appearance of the first pterosaurs. During the Late Triassic, some advanced cynodonts gave rise to the first Mammaliaformes. All this climatic change, however, resulted in a large die-out known as the Triassic–Jurassic extinction event, in which many archosaurs (excluding pterosaurs, dinosaurs and crocodylomorphs), most synapsids, and almost all large amphibians became extinct, as well as 34% of marine life, in the Earth's fourth mass extinction event. The cause is debatable;[12][13] flood basalt eruptions at the Central Atlantic magmatic province are cited as one possible cause.
+
+ The Jurassic ranges from 200 million to 145 million years ago and features three major epochs: the Early Jurassic, the Middle Jurassic, and the Late Jurassic.[14]
+
+ The Early Jurassic spans from 200 to 175 million years ago.[14] The climate was tropical, much more humid than the Triassic. In the oceans, plesiosaurs, ichthyosaurs and ammonites were abundant. On land, dinosaurs and other archosaurs staked their claim as the dominant group, with theropods such as Dilophosaurus at the top of the food chain. The first true crocodiles evolved, pushing the large amphibians to near extinction. All in all, archosaurs rose to rule the world. Meanwhile, the first true mammals evolved, remaining relatively small but spreading widely; the Jurassic Castorocauda, for example, had adaptations for swimming, digging and catching fish. Fruitafossor, from the Late Jurassic about 150 million years ago, was about the size of a chipmunk, and its teeth, forelimbs and back suggest that it dug open the nests of social insects (probably termites, as ants had not yet appeared). The first multituberculates like Rugosodon evolved, while volaticotherians took to the skies.
+
+ The Middle Jurassic spans from 175 to 163 million years ago.[14] During this epoch, dinosaurs flourished as huge herds of sauropods, such as Brachiosaurus and Diplodocus, filled the fern prairies, chased by many new predators such as Allosaurus. Conifer forests made up a large portion of the forests. In the oceans, plesiosaurs were quite common, and ichthyosaurs flourished. This epoch was the peak of the reptiles.[15]
+
+ The Late Jurassic spans from 163 to 145 million years ago.[14] During this epoch, the first avialans, like Archaeopteryx, evolved from small coelurosaurian dinosaurs. The increase in sea levels opened up the Atlantic seaway, which has grown continually larger until today. The divided landmasses gave opportunity for the diversification of new dinosaurs.
+
+ The Cretaceous is the longest period of the Mesozoic, but has only two epochs: Early and Late Cretaceous.[16]
+
+ The Early Cretaceous spans from 145 to 100 million years ago.[16] The Early Cretaceous saw the expansion of seaways, and as a result, the decline and/or extinction of Laurasian sauropods. Some island-hopping dinosaurs, like Eustreptospondylus, evolved to cope with the coastal shallows and small islands of ancient Europe. Other dinosaurs, such as Carcharodontosaurus and Spinosaurus, rose up to fill the empty space that the Jurassic–Cretaceous extinction left behind. Among the most successful was Iguanodon, which spread to every continent. Seasons came back into effect and the poles got seasonally colder, but some dinosaurs still inhabited the polar forests year round, such as Leaellynasaura and Muttaburrasaurus. The poles were too cold for crocodiles, and became the last stronghold for large amphibians like Koolasuchus. Pterosaurs got larger as genera like Tapejara and Ornithocheirus evolved. Mammals continued to expand their range: eutriconodonts produced fairly large, wolverine-like predators like Repenomamus and Gobiconodon, early therians began to expand into metatherians and eutherians, and cimolodont multituberculates went on to become common in the fossil record.
+
+ The Late Cretaceous spans from 100 to 66 million years ago. The Late Cretaceous featured a cooling trend that would continue in the Cenozoic era. Eventually, tropical conditions were restricted to the equator, and areas beyond the tropic lines experienced extreme seasonal changes in weather. Dinosaurs still thrived, as new taxa such as Tyrannosaurus, Ankylosaurus, Triceratops and hadrosaurs dominated the food web. In the oceans, mosasaurs ruled, filling the role of the ichthyosaurs, which, after declining, had disappeared in the Cenomanian–Turonian boundary event. Though pliosaurs had gone extinct in the same event, long-necked plesiosaurs such as Elasmosaurus continued to thrive. Flowering plants, possibly appearing as far back as the Triassic, became truly dominant for the first time. Pterosaurs in the Late Cretaceous declined for poorly understood reasons, though this might be due to biases of the fossil record, as pterosaur diversity seems to have been much higher than previously thought. Birds became increasingly common and diversified into a variety of enantiornithine and ornithurine forms. Though mostly small, marine hesperornithes became relatively large and flightless, adapted to life in the open sea. Metatherians and primitive eutherians also became common and even produced large and specialised genera like Didelphodon and Schowalteria. Still, the dominant mammals were multituberculates: cimolodonts in the north and gondwanatheres in the south. At the end of the Cretaceous, the Deccan Traps and other volcanic eruptions were poisoning the atmosphere. While this continued, a large meteor is thought to have smashed into Earth 66 million years ago, creating the Chicxulub Crater, in an event known as the K–Pg extinction (formerly K–T), the fifth and most recent mass extinction event, in which 75% of life became extinct, including all non-avian dinosaurs.[17] Everything over 10 kilograms became extinct. The age of the dinosaurs was over.[18][19]
+
+ Compared to the vigorous convergent-plate mountain-building of the late Paleozoic, Mesozoic tectonic deformation was comparatively mild. The sole major Mesozoic orogeny occurred in what is now the Arctic, creating the Innuitian orogeny, the Brooks Range, the Verkhoyansk and Cherskiy Ranges in Siberia, and the Khingan Mountains in Manchuria.
+
+ This orogeny was related to the opening of the Arctic Ocean and subduction of the North China and Siberian cratons under the Pacific Ocean.[20] In contrast, the era featured the dramatic rifting of the supercontinent Pangaea, which gradually split into a northern continent, Laurasia, and a southern continent, Gondwana. This created the passive continental margin that characterizes most of the Atlantic coastline (such as along the U.S. East Coast) today.[21]
+
+ By the end of the era, the continents had rifted into nearly their present forms, though not their present positions. Laurasia became North America and Eurasia, while Gondwana split into South America, Africa, Australia, Antarctica and the Indian subcontinent, which collided with the Asian plate during the Cenozoic, giving rise to the Himalayas.
+
+ The Triassic was generally dry, a trend that began in the late Carboniferous, and highly seasonal, especially in the interior of Pangaea. Low sea levels may have also exacerbated temperature extremes. With its high specific heat capacity, water acts as a temperature-stabilizing heat reservoir, and land areas near large bodies of water—especially oceans—experience less variation in temperature. Because much of Pangaea's land was distant from its shores, temperatures fluctuated greatly, and the interior probably included expansive deserts. Abundant red beds and evaporites such as halite support these conclusions, but some evidence suggests the generally dry climate was punctuated by episodes of increased rainfall.[22] The most important humid episodes were the Carnian Pluvial Event and one in the Rhaetian, a few million years before the Triassic–Jurassic extinction event.
+
+ Sea levels began to rise during the Jurassic, probably caused by an increase in seafloor spreading. The formation of new crust beneath the surface displaced ocean waters by as much as 200 m (656 ft) above today's sea level, flooding coastal areas. Furthermore, Pangaea began to rift into smaller divisions, creating new shoreline around the Tethys Ocean. Temperatures continued to increase, then began to stabilize. Humidity also increased with the proximity of water, and deserts retreated.
+
+ The climate of the Cretaceous is less certain and more widely disputed. Higher levels of carbon dioxide in the atmosphere are thought to have almost eliminated the north–south temperature gradient: temperatures were about the same across the planet, and about 10°C higher than today. The circulation of oxygen to the deep ocean may also have been disrupted,[16] preventing the decomposition of large volumes of organic matter, which was eventually deposited as "black shale".
+
+ Not all data support these hypotheses, however. Even with the overall warmth, temperature fluctuations should have been sufficient for the presence of polar ice caps and glaciers, but there is no evidence of either. Quantitative models have also been unable to recreate the flatness of the Cretaceous temperature gradient.[citation needed]
+
+ Different studies have come to different conclusions about the amount of oxygen in the atmosphere during different parts of the Mesozoic, with some concluding oxygen levels were lower than the current level (about 21%) throughout the Mesozoic,[23][24] some concluding they were lower in the Triassic and part of the Jurassic but higher in the Cretaceous,[25][26][27] and some concluding they were higher throughout most or all of the Triassic, Jurassic and Cretaceous.[28][29]
+
+ The dominant land plant species of the time were gymnosperms, which are vascular, cone-bearing, non-flowering plants such as conifers that produce seeds without a coating. This is opposed to the earth's current flora, in which the dominant land plants in terms of number of species are angiosperms. One particular plant genus, Ginkgo, is thought to have evolved at this time and is represented today by a single species, Ginkgo biloba. In addition, the extant genus Sequoia is believed to have evolved in the Mesozoic.[30]
+
+ Flowering plants radiated sometime in the early Cretaceous, first in the tropics, but the even temperature gradient allowed them to spread toward the poles throughout the period. By the end of the Cretaceous, angiosperms dominated tree floras in many areas, although some evidence suggests that biomass was still dominated by cycads and ferns until after the Cretaceous–Paleogene extinction. Some plant species had distributions that were markedly different from succeeding periods; for example, the Schizaeales, a fern order, were skewed to the Northern Hemisphere in the Mesozoic, but are now better represented in the Southern Hemisphere.[31]
+
+ The extinction of nearly all animal species at the end of the Permian Period allowed for the radiation of many new lifeforms. In particular, the extinction of the large herbivorous pareiasaurs and carnivorous gorgonopsians left those ecological niches empty. Some were filled by the surviving cynodonts and dicynodonts, the latter of which subsequently became extinct.
+
+ Recent research indicates that the reestablishment of complex ecosystems, with high biodiversity, complex food webs, and specialized animals in a variety of niches, took much longer, beginning in the mid-Triassic, 4 to 6 million years after the extinction,[32] and not fully proliferating until 30 million years after the extinction.[33] Animal life was then dominated by various archosaurs: dinosaurs, pterosaurs, and aquatic reptiles such as ichthyosaurs, plesiosaurs, and mosasaurs.
+
+ The climatic changes of the late Jurassic and Cretaceous favored further adaptive radiation. The Jurassic was the height of archosaur diversity, and the first birds and eutherian mammals also appeared. Some have argued that insects diversified in symbiosis with angiosperms, because insect anatomy, especially the mouth parts, seems particularly well-suited for flowering plants. However, all major insect mouth parts preceded angiosperms, and insect diversification actually slowed when they arrived, so their anatomy originally must have been suited for some other purpose.
en/5321.html.txt ADDED
@@ -0,0 +1,259 @@
+ World War II (WWII or WW2), also known as the Second World War, was a global war that lasted from 1939 to 1945. It involved the vast majority of the world's countries—including all the great powers—forming two opposing military alliances: the Allies and the Axis. In a state of total war, directly involving more than 100 million people from more than 30 countries, the major participants threw their entire economic, industrial, and scientific capabilities behind the war effort, blurring the distinction between civilian and military resources. World War II was the deadliest conflict in human history, marked by 70 to 85 million fatalities. Tens of millions of people died due to genocides (including the Holocaust), premeditated death from starvation, massacres, and disease. Aircraft played a major role in the conflict, including in the use of strategic bombing of population centres, and the only uses of nuclear weapons in war.
+
+ World War II is generally considered to have begun on 1 September 1939, with the invasion of Poland by Germany and subsequent declarations of war on Germany by France and the United Kingdom. From late 1939 to early 1941, in a series of campaigns and treaties, Germany conquered or controlled much of continental Europe, and formed the Axis alliance with Italy and Japan. Under the Molotov–Ribbentrop Pact of August 1939, Germany and the Soviet Union partitioned and annexed territories of their European neighbours: Poland, Finland, Romania and the Baltic states. Following the onset of campaigns in North Africa and East Africa, and the Fall of France in mid-1940, the war continued primarily between the European Axis powers and the British Empire, with war in the Balkans, the aerial Battle of Britain, the Blitz, and the Battle of the Atlantic. On 22 June 1941, Germany led the European Axis powers in an invasion of the Soviet Union, opening the largest land theatre of war in history and trapping the Axis, crucially the German Wehrmacht, in a war of attrition.
+
+ Japan, which aimed to dominate Asia and the Pacific, was at war with the Republic of China by 1937. In December 1941, Japan launched a surprise attack on the United States as well as European colonies in East Asia and the Pacific. Following an immediate US declaration of war against Japan, supported by one from the UK, the European Axis powers declared war on the United States in solidarity with their ally. Japan soon captured much of the Western Pacific, but its advances were halted in 1942 after Japan lost the critical Battle of Midway; later, Germany and Italy were defeated in North Africa and at Stalingrad in the Soviet Union. Key setbacks in 1943—which included a series of German defeats on the Eastern Front, the Allied invasions of Sicily and Italy, and Allied offensives in the Pacific—cost the Axis its initiative and forced it into strategic retreat on all fronts. In 1944, the Western Allies invaded German-occupied France, while the Soviet Union regained its territorial losses and turned towards Germany and its allies. During 1944 and 1945, the Japanese suffered reversals in mainland Asia, while the Allies crippled the Japanese Navy and captured key Western Pacific islands.
+
+ The war in Europe concluded with an invasion of Germany by the Western Allies and the Soviet Union, culminating in the capture of Berlin by Soviet troops, the suicide of Adolf Hitler and the German unconditional surrender on 8 May 1945. Following the Potsdam Declaration by the Allies on 26 July 1945 and the refusal of Japan to surrender on its terms, the United States dropped atomic bombs on the Japanese cities of Hiroshima and Nagasaki on 6 and 9 August, respectively. Faced with an imminent invasion of the Japanese archipelago, the possibility of additional atomic bombings, and the Soviet entry into the war against Japan and its invasion of Manchuria on 9 August, Japan announced its intention to surrender on 15 August 1945, cementing total victory in Asia for the Allies. In the wake of the war, Germany and Japan were occupied and war crimes tribunals were conducted against German and Japanese leaders.
+
+ World War II changed the political alignment and social structure of the globe. The United Nations (UN) was established to foster international co-operation and prevent future conflicts, and the victorious great powers—China, France, the Soviet Union, the United Kingdom, and the United States—became the permanent members of its Security Council. The Soviet Union and the United States emerged as rival superpowers, setting the stage for the nearly half-century-long Cold War. In the wake of European devastation, the influence of its great powers waned, triggering the decolonisation of Africa and Asia. Most countries whose industries had been damaged moved towards economic recovery and expansion. Political integration, especially in Europe, began as an effort to forestall future hostilities, end pre-war enmities and forge a sense of common identity.
+
+ The start of the war in Europe is generally held to be 1 September 1939,[1][2] beginning with the German invasion of Poland; the United Kingdom and France declared war on Germany two days later. The dates for the beginning of war in the Pacific include the start of the Second Sino-Japanese War on 7 July 1937,[3][4] or even the Japanese invasion of Manchuria on 19 September 1931.[5][6][7]
+
+ Others follow the British historian A. J. P. Taylor, who held that the Sino-Japanese War and war in Europe and its colonies occurred simultaneously, and the two wars merged in 1941. This article uses the conventional dating. Other starting dates sometimes used for World War II include the Italian invasion of Abyssinia on 3 October 1935.[8] The British historian Antony Beevor views the beginning of World War II as the Battles of Khalkhin Gol fought between Japan and the forces of Mongolia and the Soviet Union from May to September 1939.[9]
+
+ The exact date of the war's end is also not universally agreed upon. It was generally accepted at the time that the war ended with the armistice of 14 August 1945 (V-J Day), rather than with the formal surrender of Japan on 2 September 1945, which officially ended the war in Asia. A peace treaty with Japan was signed in 1951.[10] A treaty regarding Germany's future allowed the reunification of East and West Germany to take place in 1990 and resolved most post-World War II issues.[11] No formal peace treaty between Japan and the Soviet Union was ever signed.[12]
+
+ World War I had radically altered the political European map, with the defeat of the Central Powers—including Austria-Hungary, Germany, Bulgaria and the Ottoman Empire—and the 1917 Bolshevik seizure of power in Russia, which eventually led to the founding of the Soviet Union. Meanwhile, the victorious Allies of World War I, such as France, Belgium, Italy, Romania, and Greece, gained territory, and new nation-states were created out of the collapse of Austria-Hungary and the Ottoman and Russian Empires.
+
+ To prevent a future world war, the League of Nations was created during the 1919 Paris Peace Conference. The organisation's primary goals were to prevent armed conflict through collective security, military and naval disarmament, and settling international disputes through peaceful negotiations and arbitration.
+
+ Despite strong pacifist sentiment after World War I,[13] its aftermath still caused irredentist and revanchist nationalism in several European states. These sentiments were especially marked in Germany because of the significant territorial, colonial, and financial losses incurred by the Treaty of Versailles. Under the treaty, Germany lost around 13 percent of its home territory and all its overseas possessions, while German annexation of other states was prohibited, reparations were imposed, and limits were placed on the size and capability of the country's armed forces.[14]
+
+ The German Empire was dissolved in the German Revolution of 1918–1919, and a democratic government, later known as the Weimar Republic, was created. The interwar period saw strife between supporters of the new republic and hardline opponents on both the right and left. Italy, as an Entente ally, had made some post-war territorial gains; however, Italian nationalists were angered that the promises made by the United Kingdom and France to secure Italian entrance into the war were not fulfilled in the peace settlement. From 1922 to 1925, the Fascist movement led by Benito Mussolini seized power in Italy with a nationalist, totalitarian, and class collaborationist agenda that abolished representative democracy, repressed socialist, left-wing and liberal forces, and pursued an aggressive expansionist foreign policy aimed at making Italy a world power, promising the creation of a "New Roman Empire".[15]
+
+ Adolf Hitler, after an unsuccessful attempt to overthrow the German government in 1923, eventually became the Chancellor of Germany in 1933. He abolished democracy, espousing a radical, racially motivated revision of the world order, and soon began a massive rearmament campaign.[16] Meanwhile, France, to secure its alliance, allowed Italy a free hand in Ethiopia, which Italy desired as a colonial possession. The situation was aggravated in early 1935 when the Territory of the Saar Basin was legally reunited with Germany and Hitler repudiated the Treaty of Versailles, accelerated his rearmament programme, and introduced conscription.[17]
+
+ The United Kingdom, France and Italy formed the Stresa Front in April 1935 in order to contain Germany; however, that June, the United Kingdom made an independent naval agreement with Germany, easing prior restrictions. The Soviet Union, concerned by Germany's goals of capturing vast areas of Eastern Europe, drafted a treaty of mutual assistance with France. Before taking effect, though, the Franco-Soviet pact was required to go through the bureaucracy of the League of Nations, which rendered it essentially toothless.[18] The United States, concerned with events in Europe and Asia, passed the Neutrality Act in August of the same year.[19]
+
+ Hitler defied the Versailles and Locarno treaties by remilitarising the Rhineland in March 1936, encountering little opposition due to appeasement.[20] In October 1936, Germany and Italy formed the Rome–Berlin Axis. A month later, Germany and Japan signed the Anti-Comintern Pact, which Italy joined the following year.[21]
+
+ The Kuomintang (KMT) party in China launched a unification campaign against regional warlords and nominally unified China in the mid-1920s, but was soon embroiled in a civil war against its former Chinese Communist Party allies[22] and new regional warlords. In 1931, an increasingly militaristic Empire of Japan, which had long sought influence in China[23] as the first step of what its government saw as the country's right to rule Asia, staged the Mukden Incident as a pretext to invade Manchuria and establish the puppet state of Manchukuo.[24]
+
+ China appealed to the League of Nations to stop the Japanese invasion of Manchuria. Japan withdrew from the League of Nations after being condemned for its incursion into Manchuria. The two nations then fought several battles in Shanghai, Rehe and Hebei, until the Tanggu Truce was signed in 1933. Thereafter, Chinese volunteer forces continued the resistance to Japanese aggression in Manchuria, and in Chahar and Suiyuan.[25] After the 1936 Xi'an Incident, the Kuomintang and communist forces agreed on a ceasefire to present a united front to oppose Japan.[26]
+
+ The Second Italo-Ethiopian War was a brief colonial war that began in October 1935 and ended in May 1936. The war began with the invasion of the Ethiopian Empire (also known as Abyssinia) by the armed forces of the Kingdom of Italy (Regno d'Italia), which was launched from Italian Somaliland and Eritrea.[27] The war resulted in the military occupation of Ethiopia and its annexation into the newly created colony of Italian East Africa (Africa Orientale Italiana, or AOI); in addition, it exposed the weakness of the League of Nations as a force to preserve peace. Both Italy and Ethiopia were member nations, but the League did little when the former clearly violated Article X of the League's Covenant.[28] The United Kingdom and France supported imposing sanctions on Italy for the invasion, but the sanctions were not fully enforced and failed to end the Italian invasion.[29] Italy subsequently dropped its objections to Germany's goal of absorbing Austria.[30]
48
+
49
+ When civil war broke out in Spain, Hitler and Mussolini lent military support to the Nationalist rebels, led by General Francisco Franco. Italy supported the Nationalists to a greater extent than the Nazis did: altogether Mussolini sent to Spain more than 70,000 ground troops and 6,000 aviation personnel, as well as about 720 aircraft.[31] The Soviet Union supported the existing government, the Spanish Republic. More than 30,000 foreign volunteers, known as the International Brigades, also fought against the Nationalists. Both Germany and the Soviet Union used this proxy war as an opportunity to test in combat their most advanced weapons and tactics. The Nationalists won the civil war in April 1939; Franco, now dictator, remained officially neutral during World War II but generally favoured the Axis.[32] His greatest collaboration with Germany was the sending of volunteers to fight on the Eastern Front.[33]
50
+
51
+ In July 1937, Japan captured the former Chinese imperial capital of Peking after instigating the Marco Polo Bridge Incident, which culminated in the Japanese campaign to invade all of China.[34] The Soviets quickly signed a non-aggression pact with China to lend materiel support, effectively ending China's prior co-operation with Germany. From September to November, the Japanese attacked Taiyuan, engaged the Kuomintang Army around Xinkou,[35] and fought Communist forces in Pingxingguan.[36][37] Generalissimo Chiang Kai-shek deployed his best army to defend Shanghai, but, after three months of fighting, Shanghai fell. The Japanese continued to push the Chinese forces back, capturing the capital Nanking in December 1937. After the fall of Nanking, tens of thousands if not hundreds of thousands of Chinese civilians and disarmed combatants were murdered by the Japanese.[38][39]
52
+
53
+ In March 1938, Nationalist Chinese forces won their first major victory at Taierzhuang, but then the city of Xuzhou was taken by the Japanese in May.[40] In June 1938, Chinese forces stalled the Japanese advance by flooding the Yellow River; this manoeuvre bought time for the Chinese to prepare their defences at Wuhan, but the city was taken by October.[41] Japanese military victories did not bring about the collapse of Chinese resistance that Japan had hoped to achieve; instead, the Chinese government relocated inland to Chongqing and continued the war.[42][43]
54
+
55
+ In the mid-to-late 1930s, Japanese forces in Manchukuo had sporadic border clashes with the Soviet Union and Mongolia. The Japanese doctrine of Hokushin-ron, which emphasised Japan's expansion northward, was favoured by the Imperial Army during this time. With the Japanese defeat at Khalkhin Gol in 1939, the ongoing Second Sino-Japanese War,[44] and ally Nazi Germany pursuing neutrality with the Soviets, this policy would prove difficult to maintain. Japan and the Soviet Union eventually signed a Neutrality Pact in April 1941, and Japan adopted the doctrine of Nanshin-ron, promoted by the Navy, which took its focus southward and eventually led to its war with the United States and the Western Allies.[45][46]
56
+
57
+ In Europe, Germany and Italy were becoming more aggressive. In March 1938, Germany annexed Austria, again provoking little response from other European powers.[47] Encouraged, Hitler began pressing German claims on the Sudetenland, an area of Czechoslovakia with a predominantly ethnic German population. Soon the United Kingdom and France followed the appeasement policy of British Prime Minister Neville Chamberlain and conceded this territory to Germany in the Munich Agreement, which was made against the wishes of the Czechoslovak government, in exchange for a promise of no further territorial demands.[48] Soon afterwards, Germany and Italy forced Czechoslovakia to cede additional territory to Hungary, and Poland annexed Czechoslovakia's Zaolzie region.[49]
58
+
59
+ Although all of Germany's stated demands had been satisfied by the agreement, privately Hitler was furious that British interference had prevented him from seizing all of Czechoslovakia in one operation. In subsequent speeches Hitler attacked British and Jewish "war-mongers" and in January 1939 secretly ordered a major build-up of the German navy to challenge British naval supremacy. In March 1939, Germany invaded the remainder of Czechoslovakia and subsequently split it into the German Protectorate of Bohemia and Moravia and a pro-German client state, the Slovak Republic.[50] Hitler also delivered an ultimatum to Lithuania on 20 March 1939, forcing the concession of the Klaipėda Region, formerly the German Memelland.[51]
60
+
61
+ Greatly alarmed and with Hitler making further demands on the Free City of Danzig, the United Kingdom and France guaranteed their support for Polish independence; when Italy conquered Albania in April 1939, the same guarantee was extended to Romania and Greece.[52] Shortly after the Franco-British pledge to Poland, Germany and Italy formalised their own alliance with the Pact of Steel.[53] Hitler accused the United Kingdom and Poland of trying to "encircle" Germany and renounced the Anglo-German Naval Agreement and the German–Polish Non-Aggression Pact.[54]
62
+
63
+ The situation reached a general crisis in late August as German troops continued to mobilise against the Polish border. On 23 August, when tripartite negotiations about a military alliance between France, the United Kingdom and the Soviet Union stalled,[55] the Soviet Union signed a non-aggression pact with Germany.[56] This pact had a secret protocol that defined German and Soviet "spheres of influence" (western Poland and Lithuania for Germany; eastern Poland, Finland, Estonia, Latvia and Bessarabia for the Soviet Union), and raised the question of continuing Polish independence.[57] The pact neutralised the possibility of Soviet opposition to a campaign against Poland and assured that Germany would not have to face the prospect of a two-front war, as it had in World War I. Immediately afterwards, Hitler ordered the attack to proceed on 26 August, but upon hearing that the United Kingdom had concluded a formal mutual assistance pact with Poland and that Italy would maintain neutrality, he decided to delay it.[58]
64
+
65
+ In response to British requests for direct negotiations to avoid war, Germany made demands on Poland, which only served as a pretext to worsen relations.[59] On 29 August, Hitler demanded that a Polish plenipotentiary immediately travel to Berlin to negotiate the handover of Danzig, and to allow a plebiscite in the Polish Corridor in which the German minority would vote on secession.[59] The Poles refused to comply with the German demands, and on the night of 30–31 August in a stormy meeting with the British ambassador Neville Henderson, Ribbentrop declared that Germany considered its claims rejected.[60]
66
+
67
+ On 1 September 1939, Germany invaded Poland after having staged several false flag border incidents as a pretext to initiate the invasion.[61] The first German attack of the war came against the Polish defences at Westerplatte.[62] The United Kingdom responded with an ultimatum to Germany to cease military operations, and on 3 September, after the ultimatum was ignored, France and Britain declared war on Germany, followed by Australia, New Zealand, South Africa and Canada. The alliance provided no direct military support to Poland, outside of a cautious French probe into the Saarland.[63] The Western Allies also began a naval blockade of Germany, which aimed to damage the country's economy and its war effort.[64] Germany responded by ordering U-boat warfare against Allied merchant and warships, which would later escalate into the Battle of the Atlantic.[65]
68
+
69
+ On 8 September, German troops reached the suburbs of Warsaw. The Polish counter-offensive to the west halted the German advance for several days, but it was outflanked and encircled by the Wehrmacht. Remnants of the Polish army broke through to besieged Warsaw. On 17 September 1939, after signing a cease-fire with Japan, the Soviets invaded Eastern Poland[66] on the pretext that the Polish state had ceased to exist.[67] On 27 September, the Warsaw garrison surrendered to the Germans, and the last large operational unit of the Polish Army surrendered on 6 October. Despite the military defeat, Poland never surrendered; instead it formed the Polish government-in-exile and a clandestine state apparatus remained in occupied Poland.[68] A significant part of Polish military personnel evacuated to Romania and the Baltic countries; many of them would fight against the Axis in other theatres of the war.[69]
70
+
71
+ Germany annexed western Poland and occupied central Poland, while the Soviet Union annexed eastern Poland; small shares of Polish territory were transferred to Lithuania and Slovakia. On 6 October, Hitler made a public peace overture to the United Kingdom and France but said that the future of Poland was to be determined exclusively by Germany and the Soviet Union. The proposal was rejected,[60] and Hitler ordered an immediate offensive against France,[70] which would be postponed until the spring of 1940 due to bad weather.[71][72][73]
72
+
73
+ The Soviet Union forced the Baltic countries—Estonia, Latvia and Lithuania, the states that were in the Soviet "sphere of influence" under the Molotov–Ribbentrop Pact—to sign "mutual assistance pacts" that stipulated the stationing of Soviet troops in these countries. Soon after, significant Soviet military contingents were moved there.[74][75][76] Finland refused to sign a similar pact and rejected ceding part of its territory to the Soviet Union. The Soviet Union invaded Finland in November 1939[77] and was subsequently expelled from the League of Nations.[78] Despite overwhelming numerical superiority, Soviet military success was modest, and the Finno-Soviet war ended in March 1940 with minimal Finnish concessions.[79]
74
+
75
+ In June 1940, the Soviet Union forcibly annexed Estonia, Latvia and Lithuania,[75] and the disputed Romanian regions of Bessarabia, northern Bukovina and Hertza. Meanwhile, Nazi-Soviet political rapprochement and economic co-operation[80][81] gradually stalled,[82][83] and both states began preparations for war.[84]
76
+
77
+ In April 1940, Germany invaded Denmark and Norway to protect shipments of iron ore from Sweden, which the Allies were attempting to cut off.[85] Denmark capitulated after a few hours, and Norway was conquered within two months[86] despite Allied support. British discontent over the Norwegian campaign led to the appointment of Winston Churchill as Prime Minister on 10 May 1940.[87]
78
+
79
+ On the same day, Germany launched an offensive against France. To circumvent the strong Maginot Line fortifications on the Franco-German border, Germany directed its attack at the neutral nations of Belgium, the Netherlands, and Luxembourg.[88] The Germans carried out a flanking manoeuvre through the Ardennes region,[89] which the Allies had mistakenly perceived as an impenetrable natural barrier against armoured vehicles.[90][91] By successfully implementing new blitzkrieg tactics, the Wehrmacht rapidly advanced to the Channel and cut off the Allied forces in Belgium, trapping the bulk of the Allied armies in a cauldron on the Franco-Belgian border near Lille. The United Kingdom was able to evacuate a significant number of Allied troops from the continent by early June, although they had to abandon almost all of their equipment.[92]
80
+
81
+ On 10 June, Italy invaded France, declaring war on both France and the United Kingdom.[93] The Germans turned south against the weakened French army, and Paris fell to them on 14 June. Eight days later France signed an armistice with Germany; it was divided into German and Italian occupation zones,[94] and an unoccupied rump state under the Vichy Regime, which, though officially neutral, was generally aligned with Germany. France kept its fleet, which the United Kingdom attacked on 3 July in an attempt to prevent its seizure by Germany.[95]
82
+
83
+ The Battle of Britain[96] began in early July with Luftwaffe attacks on shipping and harbours.[97] The United Kingdom rejected Hitler's ultimatum,[98] and the German air superiority campaign started in August but failed to defeat RAF Fighter Command, forcing the indefinite postponement of the proposed German invasion of Britain. The German strategic bombing offensive intensified with night attacks on London and other cities in the Blitz, but failed to significantly disrupt the British war effort[97] and largely ended in May 1941.[99]
84
+
85
+ Using newly captured French ports, the German Navy enjoyed success against an over-extended Royal Navy, using U-boats against British shipping in the Atlantic.[100] The British Home Fleet scored a significant victory on 27 May 1941 by sinking the German battleship Bismarck.[101]
86
+
87
+ In November 1939, the United States was taking measures to assist China and the Western Allies, and amended the Neutrality Act to allow "cash and carry" purchases by the Allies.[102] In 1940, following the German capture of Paris, the size of the United States Navy was significantly increased. In September the United States further agreed to a trade of American destroyers for British bases.[103] Still, a large majority of the American public continued to oppose any direct military intervention in the conflict well into 1941.[104] In December 1940 Roosevelt accused Hitler of planning world conquest and ruled out any negotiations as useless, calling for the United States to become an "arsenal of democracy" and promoting Lend-Lease programmes of aid to support the British war effort.[98] The United States started strategic planning to prepare for a full-scale offensive against Germany.[105]
88
+
89
+ At the end of September 1940, the Tripartite Pact formally united Japan, Italy, and Germany as the Axis Powers. The Tripartite Pact stipulated that any country, with the exception of the Soviet Union, that attacked any Axis power would be forced to go to war against all three.[106] The Axis expanded in November 1940 when Hungary, Slovakia and Romania joined.[107] Romania and Hungary would make major contributions to the Axis war against the Soviet Union, in Romania's case partially to recapture territory ceded to the Soviet Union.[108]
90
+
91
+ In early June 1940 the Italian Regia Aeronautica attacked and besieged Malta, a British possession. From late summer to early autumn Italy conquered British Somaliland and made an incursion into British-held Egypt. In October Italy attacked Greece, but the attack was repulsed with heavy Italian casualties; the campaign ended within months with minor territorial changes.[109] Germany started preparations for an invasion of the Balkans to assist Italy, to prevent the British from gaining a foothold there, which would have posed a potential threat to the Romanian oil fields, and to strike at British dominance of the Mediterranean.[110]
92
+
93
+ In December 1940, British Empire forces began counter-offensives against Italian forces in Egypt and Italian East Africa.[111] The offensives were highly successful; by early February 1941 Italy had lost control of eastern Libya, and large numbers of Italian troops had been taken prisoner. The Italian Navy also suffered significant defeats, with the Royal Navy putting three Italian battleships out of commission by a carrier attack at Taranto and neutralising several more warships at the Battle of Cape Matapan.[112]
94
+
95
+ Italian defeats prompted Germany to deploy an expeditionary force to North Africa, and at the end of March 1941 Rommel's Afrika Korps launched an offensive which drove back the Commonwealth forces.[113] In under a month, Axis forces advanced to western Egypt and besieged the port of Tobruk.[114]
96
+
97
+ By late March 1941 Bulgaria and Yugoslavia signed the Tripartite Pact; however, the Yugoslav government was overthrown two days later by pro-British nationalists. Germany responded with simultaneous invasions of both Yugoslavia and Greece, commencing on 6 April 1941; both nations were forced to surrender within the month.[115] The airborne invasion of the Greek island of Crete at the end of May completed the German conquest of the Balkans.[116] Although the Axis victory was swift, a bitter, large-scale partisan war subsequently broke out against the Axis occupation of Yugoslavia and continued until the end of the war.[117]
98
+
99
+ In the Middle East, in May Commonwealth forces quashed an uprising in Iraq which had been supported by German aircraft from bases within Vichy-controlled Syria.[118] Between June and July they invaded and occupied the French possessions of Syria and Lebanon, with the assistance of the Free French.[119]
100
+
101
+ With the situation in Europe and Asia relatively stable, Germany, Japan, and the Soviet Union made preparations. With the Soviets wary of mounting tensions with Germany and the Japanese planning to take advantage of the European War by seizing resource-rich European possessions in Southeast Asia, the two powers signed the Soviet–Japanese Neutrality Pact in April 1941.[120] By contrast, the Germans were steadily making preparations for an attack on the Soviet Union, massing forces on the Soviet border.[121]
102
+
103
+ Hitler believed that the United Kingdom's refusal to end the war was based on the hope that the United States and the Soviet Union would enter the war against Germany sooner or later.[122] He therefore decided to try to strengthen Germany's relations with the Soviets or, failing that, to attack and eliminate them as a factor. In November 1940, negotiations took place to determine if the Soviet Union would join the Tripartite Pact. The Soviets showed some interest but asked for concessions from Finland, Bulgaria, Turkey, and Japan that Germany considered unacceptable. On 18 December 1940, Hitler issued the directive to prepare for an invasion of the Soviet Union.[123]
104
+
105
+ On 22 June 1941, Germany, supported by Italy and Romania, invaded the Soviet Union in Operation Barbarossa, with Germany accusing the Soviets of plotting against them. They were joined shortly by Finland and Hungary.[124] The primary targets of this surprise offensive[125] were the Baltic region, Moscow and Ukraine, with the ultimate goal of ending the 1941 campaign near the Arkhangelsk–Astrakhan line, running from the Caspian Sea to the White Sea. Hitler's objectives were to eliminate the Soviet Union as a military power, exterminate Communism, generate Lebensraum ("living space")[126] by dispossessing the native population,[127] and guarantee access to the strategic resources needed to defeat Germany's remaining rivals.[128]
106
+
107
+ Although the Red Army was preparing for strategic counter-offensives before the war,[129] Barbarossa forced the Soviet supreme command to adopt a strategic defence. During the summer, the Axis made significant gains into Soviet territory, inflicting immense losses in both personnel and materiel. By mid-August, however, the German Army High Command decided to suspend the offensive of a considerably depleted Army Group Centre, and to divert the 2nd Panzer Group to reinforce troops advancing towards central Ukraine and Leningrad.[130] The Kiev offensive was overwhelmingly successful, resulting in the encirclement and elimination of four Soviet armies, and made possible the further advance into Crimea and industrially developed Eastern Ukraine (the First Battle of Kharkov).[131]
108
+
109
+ The diversion of three quarters of the Axis troops and the majority of their air forces from France and the central Mediterranean to the Eastern Front[132] prompted the United Kingdom to reconsider its grand strategy.[133] In July, the UK and the Soviet Union formed a military alliance against Germany[134] and in August, the United Kingdom and the United States jointly issued the Atlantic Charter, which outlined British and American goals for the postwar world.[135] In late August the British and Soviets invaded neutral Iran to secure the Persian Corridor, Iran's oil fields, and preempt any Axis advances through Iran toward the Baku oil fields or British India.[136]
110
+
111
+ By October Axis operational objectives in Ukraine and the Baltic region were achieved, with only the sieges of Leningrad[137] and Sevastopol continuing.[138] A major offensive against Moscow was renewed; after two months of fierce battles in increasingly harsh weather, the German army almost reached the outer suburbs of Moscow, where the exhausted troops[139] were forced to suspend their offensive.[140] Large territorial gains were made by Axis forces, but their campaign had failed to achieve its main objectives: two key cities remained in Soviet hands, the Soviet capability to resist was not broken, and the Soviet Union retained a considerable part of its military potential. The blitzkrieg phase of the war in Europe had ended.[141]
112
+
113
+ By early December, freshly mobilised reserves[142] allowed the Soviets to achieve numerical parity with Axis troops.[143] This, as well as intelligence data which established that a minimal number of Soviet troops in the East would be sufficient to deter any attack by the Japanese Kwantung Army,[144] allowed the Soviets to begin a massive counter-offensive that started on 5 December all along the front and pushed German troops 100–250 kilometres (62–155 mi) west.[145]
114
+
115
+ Following the Japanese false flag Mukden Incident in 1931, the Japanese shelling of the American gunboat USS Panay in 1937, and the 1937–38 Nanjing Massacre, Japanese–American relations deteriorated. In 1939, the United States notified Japan that it would not be extending its trade treaty, and American public opinion opposing Japanese expansionism led to a series of economic sanctions, the Export Control Acts, which banned U.S. exports of chemicals, minerals and military parts to Japan and increased economic pressure on the Japanese regime.[98][146][147] During 1939 Japan launched its first attack against Changsha, a strategically important Chinese city, but was repulsed by late September.[148] Despite several offensives by both sides, the war between China and Japan was stalemated by 1940. To increase pressure on China by blocking supply routes, and to better position Japanese forces in the event of a war with the Western powers, Japan invaded and occupied northern Indochina in September 1940.[149]
116
+
117
+ Chinese nationalist forces launched a large-scale counter-offensive in early 1940. In August, Chinese communists launched an offensive in Central China; in retaliation, Japan instituted harsh measures in occupied areas to reduce human and material resources for the communists.[150] Continued antipathy between Chinese communist and nationalist forces culminated in armed clashes in January 1941, effectively ending their co-operation.[151] In March, the Japanese 11th Army attacked the headquarters of the Chinese 19th Army but was repulsed during the Battle of Shanggao.[152] In September, Japan attempted to take the city of Changsha again and clashed with Chinese nationalist forces.[153]
118
+
119
+ German successes in Europe encouraged Japan to increase pressure on European governments in Southeast Asia. The Dutch government agreed to provide Japan with some oil supplies from the Dutch East Indies, but negotiations for additional access to their resources ended in failure in June 1941.[154] In July 1941 Japan sent troops to southern Indochina, thus threatening British and Dutch possessions in the Far East. The United States, United Kingdom, and other Western governments reacted to this move with a freeze on Japanese assets and a total oil embargo.[155][156] At the same time, Japan was planning an invasion of the Soviet Far East, intending to capitalise on the German invasion in the west, but abandoned the operation after the sanctions.[157]
120
+
121
+ Since early 1941 the United States and Japan had been engaged in negotiations in an attempt to improve their strained relations and end the war in China. During these negotiations, Japan advanced a number of proposals which were dismissed by the Americans as inadequate.[158] At the same time the United States, the United Kingdom, and the Netherlands engaged in secret discussions for the joint defence of their territories, in the event of a Japanese attack against any of them.[159] Roosevelt reinforced the Philippines (an American protectorate scheduled for independence in 1946) and warned Japan that the United States would react to Japanese attacks against any "neighboring countries".[159]
122
+
123
+ Frustrated at the lack of progress and feeling the pinch of the American–British–Dutch sanctions, Japan prepared for war. On 20 November, a new government under Hideki Tojo presented an interim proposal as its final offer. It called for the end of American aid to China and for lifting the embargo on the supply of oil and other resources to Japan. In exchange, Japan promised not to launch any attacks in Southeast Asia and to withdraw its forces from southern Indochina.[158] The American counter-proposal of 26 November required that Japan evacuate all of China without conditions and conclude non-aggression pacts with all Pacific powers.[160] That meant Japan was essentially forced to choose between abandoning its ambitions in China, or seizing the natural resources it needed in the Dutch East Indies by force;[161][162] the Japanese military did not consider the former an option, and many officers considered the oil embargo an unspoken declaration of war.[163]
124
+
125
+ Japan planned to rapidly seize European colonies in Asia to create a large defensive perimeter stretching into the Central Pacific. The Japanese would then be free to exploit the resources of Southeast Asia while exhausting the over-stretched Allies by fighting a defensive war.[164][165] To prevent American intervention while securing the perimeter, it was further planned to neutralise the United States Pacific Fleet and the American military presence in the Philippines from the outset.[166] On 7 December 1941 (8 December in Asian time zones), Japan attacked British and American holdings with near-simultaneous offensives against Southeast Asia and the Central Pacific.[167] These included an attack on the American fleets at Pearl Harbor and the Philippines, landings in Malaya and Thailand,[167] and the Battle of Hong Kong.[168]
126
+
127
+ The Japanese invasion of Thailand led to Thailand's decision to ally itself with Japan and the other Japanese attacks led the United States, United Kingdom, China, Australia, and several other states to formally declare war on Japan, whereas the Soviet Union, being heavily involved in large-scale hostilities with European Axis countries, maintained its neutrality agreement with Japan.[169] Germany, followed by the other Axis states, declared war on the United States[170] in solidarity with Japan, citing as justification the American attacks on German war vessels that had been ordered by Roosevelt.[124][171]
128
+
129
+ On 1 January 1942, the Allied Big Four[172]—the Soviet Union, China, the United Kingdom and the United States—and 22 smaller or exiled governments issued the Declaration by United Nations, thereby affirming the Atlantic Charter,[173] and agreeing not to sign a separate peace with the Axis powers.[174]
130
+
131
+ During 1942, Allied officials debated the appropriate grand strategy to pursue. All agreed that defeating Germany was the primary objective. The Americans favoured a straightforward, large-scale attack on Germany through France. The Soviets were also demanding a second front. The British, on the other hand, argued that military operations should target peripheral areas to wear out German strength, leading to increasing demoralisation, and bolster resistance forces. Germany itself would be subject to a heavy bombing campaign. An offensive against Germany would then be launched primarily by Allied armour without using large-scale armies.[175] Eventually, the British persuaded the Americans that a landing in France was infeasible in 1942 and that they should instead focus on driving the Axis out of North Africa.[176]
132
+
133
+ At the Casablanca Conference in early 1943, the Allies reiterated the statements issued in the 1942 Declaration, and demanded the unconditional surrender of their enemies. The British and Americans agreed to continue to press the initiative in the Mediterranean by invading Sicily to fully secure the Mediterranean supply routes.[177] Although the British argued for further operations in the Balkans to bring Turkey into the war, in May 1943, the Americans extracted a British commitment to limit Allied operations in the Mediterranean to an invasion of the Italian mainland and to invade France in 1944.[178]
134
+
135
+ By the end of April 1942, Japan and its ally Thailand had almost fully conquered Burma, Malaya, the Dutch East Indies, Singapore, and Rabaul, inflicting severe losses on Allied troops and taking a large number of prisoners.[179] Despite stubborn resistance by Filipino and US forces, the Philippine Commonwealth was eventually captured in May 1942, forcing its government into exile.[180] On 16 April, in Burma, 7,000 British soldiers were encircled by the Japanese 33rd Division during the Battle of Yenangyaung and rescued by the Chinese 38th Division.[181] Japanese forces also achieved naval victories in the South China Sea, Java Sea and Indian Ocean,[182] and bombed the Allied naval base at Darwin, Australia. In January 1942, the only Allied success against Japan was a Chinese victory at Changsha.[183] These easy victories over unprepared US and European opponents left Japan overconfident, as well as overextended.[184]
136
+
137
+ In early May 1942, Japan initiated operations to capture Port Moresby by amphibious assault and thus sever communications and supply lines between the United States and Australia. The planned invasion was thwarted when an Allied task force, centred on two American fleet carriers, fought Japanese naval forces to a draw in the Battle of the Coral Sea.[185] Japan's next plan, motivated by the earlier Doolittle Raid, was to seize Midway Atoll and lure American carriers into battle to be eliminated; as a diversion, Japan would also send forces to occupy the Aleutian Islands in Alaska.[186] In mid-May, Japan started the Zhejiang-Jiangxi Campaign in China, with the goal of inflicting retribution on the Chinese who aided the surviving American airmen in the Doolittle Raid by destroying air bases and fighting against the Chinese 23rd and 32nd Army Groups.[187][188] In early June, Japan put its operations into action, but the Americans, having broken Japanese naval codes in late May, were fully aware of the plans and order of battle, and used this knowledge to achieve a decisive victory at Midway over the Imperial Japanese Navy.[189]
138
+
139
+ With its capacity for aggressive action greatly diminished as a result of the Midway battle, Japan chose to focus on a belated attempt to capture Port Moresby by an overland campaign in the Territory of Papua.[190] The Americans planned a counter-attack against Japanese positions in the southern Solomon Islands, primarily Guadalcanal, as a first step towards capturing Rabaul, the main Japanese base in Southeast Asia.[191]
140
+
141
+ Both plans started in July, but by mid-September, the Battle for Guadalcanal took priority for the Japanese, and troops in New Guinea were ordered to withdraw from the Port Moresby area to the northern part of the island, where they faced Australian and United States troops in the Battle of Buna-Gona.[192] Guadalcanal soon became a focal point for both sides, with heavy commitments of troops and ships. By the start of 1943, the Japanese were defeated on the island and withdrew their troops.[193] In Burma, Commonwealth forces mounted two operations. The first, an offensive into the Arakan region in late 1942, went disastrously, forcing a retreat back to India by May 1943.[194] The second was the insertion of irregular forces behind Japanese front-lines in February which, by the end of April, had achieved mixed results.[195]
142
+
143
+ Despite considerable losses, in early 1942 Germany and its allies stopped a major Soviet offensive in central and southern Russia, keeping most of the territorial gains they had achieved during the previous year.[196] In May the Germans defeated Soviet offensives in the Kerch Peninsula and at Kharkov,[197] and then launched their main summer offensive against southern Russia in June 1942, to seize the oil fields of the Caucasus and occupy the Kuban steppe, while maintaining positions on the northern and central areas of the front. The Germans split Army Group South into two groups: Army Group A advanced to the lower Don River and struck south-east to the Caucasus, while Army Group B headed towards the Volga River. The Soviets decided to make their stand at Stalingrad on the Volga.[198]
144
+
145
+ By mid-November, the Germans had nearly taken Stalingrad in bitter street fighting. The Soviets began their second winter counter-offensive, starting with an encirclement of German forces at Stalingrad,[199] and an assault on the Rzhev salient near Moscow, though the latter failed disastrously.[200] By early February 1943, the German Army had taken tremendous losses; German troops at Stalingrad had been defeated,[201] and the front-line had been pushed back beyond its position before the summer offensive. In mid-February, after the Soviet push had tapered off, the Germans launched another attack on Kharkov, creating a salient in their front line around the Soviet city of Kursk.[202]
146
+
147
+ Exploiting poor American naval command decisions, the German navy ravaged Allied shipping off the American Atlantic coast.[203] By November 1941, Commonwealth forces had launched a counter-offensive, Operation Crusader, in North Africa, and reclaimed all the gains the Germans and Italians had made.[204] In North Africa, the Germans launched an offensive in January, pushing the British back to positions at the Gazala Line by early February,[205] followed by a temporary lull in combat which Germany used to prepare for their upcoming offensives.[206] Concerns that the Japanese might use bases in Vichy-held Madagascar caused the British to invade the island in early May 1942.[207] An Axis offensive in Libya forced an Allied retreat deep inside Egypt until Axis forces were stopped at El Alamein.[208] On the Continent, raids by Allied commandos on strategic targets, culminating in the disastrous Dieppe Raid,[209] demonstrated the Western Allies' inability to launch an invasion of continental Europe without much better preparation, equipment, and operational security.[210]
148
+
149
+ In August 1942, the Allies succeeded in repelling a second attack against El Alamein[211] and, at a high cost, managed to deliver desperately needed supplies to the besieged Malta.[212] A few months later, the Allies commenced an attack of their own in Egypt, dislodging the Axis forces and beginning a drive west across Libya.[213] This attack was followed up shortly after by Anglo-American landings in French North Africa, which resulted in the region joining the Allies.[214] Hitler responded to the French colony's defection by ordering the occupation of Vichy France;[214] although Vichy forces did not resist this violation of the armistice, they managed to scuttle their fleet to prevent its capture by German forces.[214][215] The Axis forces in Africa withdrew into Tunisia, which was conquered by the Allies in May 1943.[214][216]
150
+
151
+ In June 1943 the British and Americans began a strategic bombing campaign against Germany with the goals of disrupting the war economy, reducing morale, and "de-housing" the civilian population.[217] The firebombing of Hamburg was among the first attacks in this campaign, inflicting significant casualties and considerable losses on the infrastructure of this important industrial centre.[218]
152
+
153
+ After the Guadalcanal Campaign, the Allies initiated several operations against Japan in the Pacific. In May 1943, Canadian and US forces were sent to eliminate Japanese forces from the Aleutians.[219] Soon after, the United States, with support from Australia, New Zealand and Pacific Islander forces, began major ground, sea and air operations to isolate Rabaul by capturing surrounding islands, and breach the Japanese Central Pacific perimeter at the Gilbert and Marshall Islands.[220] By the end of March 1944, the Allies had completed both of these objectives and had also neutralised the major Japanese base at Truk in the Caroline Islands. In April, the Allies launched an operation to retake Western New Guinea.[221]
154
+
155
+ In the Soviet Union, both the Germans and the Soviets spent the spring and early summer of 1943 preparing for large offensives in central Russia. On 4 July 1943, Germany attacked Soviet forces around the Kursk Bulge. Within a week, German forces had exhausted themselves against the Soviets' deeply echeloned and well-constructed defences,[222] and for the first time in the war Hitler cancelled the operation before it had achieved tactical or operational success.[223] This decision was partially affected by the Western Allies' invasion of Sicily launched on 9 July, which, combined with previous Italian failures, resulted in the ousting and arrest of Mussolini later that month.[224]
156
+
157
+ On 12 July 1943, the Soviets launched their own counter-offensives, thereby dispelling any chance of German victory or even stalemate in the east. The Soviet victory at Kursk marked the end of German superiority,[225] giving the Soviet Union the initiative on the Eastern Front.[226][227] The Germans tried to stabilise their eastern front along the hastily fortified Panther–Wotan line, but the Soviets broke through it at Smolensk and by the Lower Dnieper Offensives.[228]
158
+
159
+ On 3 September 1943, the Western Allies invaded the Italian mainland, following Italy's armistice with the Allies.[229] Germany, with the help of Italian fascists, responded by disarming Italian forces, which in many places had been left without orders from their superiors, seizing military control of Italian areas,[230] and creating a series of defensive lines.[231] German special forces then rescued Mussolini, who soon established a new client state in German-occupied Italy named the Italian Social Republic,[232] precipitating an Italian civil war. The Western Allies fought through several lines until reaching the main German defensive line in mid-November.[233]
160
+
161
+ German operations in the Atlantic also suffered. By May 1943, as Allied counter-measures became increasingly effective, the resulting sizeable German submarine losses forced a temporary halt of the German Atlantic naval campaign.[234] In November 1943, Franklin D. Roosevelt and Winston Churchill met with Chiang Kai-shek in Cairo and then with Joseph Stalin in Tehran.[235] The former conference determined the post-war return of Japanese territory[236] and the military planning for the Burma Campaign,[237] while the latter included agreement that the Western Allies would invade Europe in 1944 and that the Soviet Union would declare war on Japan within three months of Germany's defeat.[238]
162
+
163
+ From November 1943, during the seven-week Battle of Changde, the Chinese forced Japan to fight a costly war of attrition, while awaiting Allied relief.[239][240][241] In January 1944, the Allies launched a series of attacks in Italy against the line at Monte Cassino and tried to outflank it with landings at Anzio.[242]
164
+
165
+ On 27 January 1944, Soviet troops launched a major offensive that expelled German forces from the Leningrad region, thereby ending the most lethal siege in history.[243] The subsequent Soviet offensive was halted on the pre-war Estonian border by the German Army Group North, aided by Estonians hoping to re-establish national independence. This delay slowed subsequent Soviet operations in the Baltic Sea region.[244] By late May 1944, the Soviets had liberated Crimea, largely expelled Axis forces from Ukraine, and made incursions into Romania, which were repulsed by Axis troops.[245] The Allied offensives in Italy had succeeded and, at the expense of allowing several German divisions to retreat, Rome was captured on 4 June.[246]
166
+
167
+ The Allies had mixed success in mainland Asia. In March 1944, the Japanese launched the first of two invasions, an operation against British positions in Assam, India,[247] and soon besieged Commonwealth positions at Imphal and Kohima.[248] In May 1944, British forces mounted a counter-offensive that drove Japanese troops back to Burma by July,[248] and Chinese forces that had invaded northern Burma in late 1943 besieged Japanese troops in Myitkyina.[249] The second Japanese invasion of China aimed to destroy China's main fighting forces, secure railways between Japanese-held territory and capture Allied airfields.[250] By June, the Japanese had conquered the province of Henan and begun a new attack on Changsha in Hunan province.[251]
168
+
169
+ On 6 June 1944 (known as D-Day), after three years of Soviet pressure,[252] the Western Allies invaded northern France. After reassigning several Allied divisions from Italy, they also attacked southern France.[253] These landings were successful and led to the defeat of the German Army units in France. Paris was liberated on 25 August by the local resistance assisted by the Free French Forces, both led by General Charles de Gaulle,[254] and the Western Allies continued to push back German forces in western Europe during the latter part of the year. An attempt to advance into northern Germany, spearheaded by a major airborne operation in the Netherlands, failed.[255] After that, the Western Allies slowly pushed into Germany, but failed to cross the Rur river in a large offensive. In Italy, the Allied advance also slowed due to the last major German defensive line.[256]
170
+
171
+ On 22 June, the Soviets launched a strategic offensive in Belarus ("Operation Bagration") that destroyed the German Army Group Centre almost completely.[257] Soon after that, another Soviet strategic offensive forced German troops from Western Ukraine and Eastern Poland. The Soviets formed the Polish Committee of National Liberation to control territory in Poland and combat the Polish Armia Krajowa; the Soviet Red Army remained in the Praga district on the other side of the Vistula and watched passively as the Germans quelled the Warsaw Uprising initiated by the Armia Krajowa.[258] The national uprising in Slovakia was also quelled by the Germans.[259] The Soviet Red Army's strategic offensive in eastern Romania cut off and destroyed the considerable German forces there and triggered a successful coup d'état in Romania and in Bulgaria, followed by those countries' shift to the Allied side.[260]
172
+
173
+ In September 1944, Soviet troops advanced into Yugoslavia and forced the rapid withdrawal of German Army Groups E and F in Greece, Albania and Yugoslavia to rescue them from being cut off.[261] By this point, the Communist-led Partisans under Marshal Josip Broz Tito, who had led an increasingly successful guerrilla campaign against the occupation since 1941, controlled much of the territory of Yugoslavia and engaged in delaying efforts against German forces further south. In northern Serbia, the Soviet Red Army, with limited support from Bulgarian forces, assisted the Partisans in a joint liberation of the capital city of Belgrade on 20 October. A few days later, the Soviets launched a massive assault against German-occupied Hungary that lasted until the fall of Budapest in February 1945.[262] Unlike the impressive Soviet victories in the Balkans, bitter Finnish resistance to the Soviet offensive in the Karelian Isthmus denied the Soviets occupation of Finland and led to a Soviet–Finnish armistice on relatively mild terms,[263] although Finland was subsequently forced to fight its former ally Germany.[264]
174
+
175
+ By the start of July 1944, Commonwealth forces in Southeast Asia had repelled the Japanese sieges in Assam, pushing the Japanese back to the Chindwin River[265] while the Chinese captured Myitkyina. In September 1944, Chinese forces captured Mount Song and reopened the Burma Road.[266] In China, the Japanese had more successes, having finally captured Changsha in mid-June and the city of Hengyang by early August.[267] Soon after, they invaded the province of Guangxi, winning major engagements against Chinese forces at Guilin and Liuzhou by the end of November[268] and successfully linking up their forces in China and Indochina by mid-December.[269]
176
+
177
+ In the Pacific, US forces continued to press back the Japanese perimeter. In mid-June 1944, they began their offensive against the Mariana and Palau islands, and decisively defeated Japanese forces in the Battle of the Philippine Sea. These defeats led to the resignation of the Japanese Prime Minister, Hideki Tojo, and provided the United States with air bases from which to launch intensive heavy bomber attacks on the Japanese home islands. In late October, American forces invaded the Philippine island of Leyte; soon after, Allied naval forces scored another large victory in the Battle of Leyte Gulf, one of the largest naval battles in history.[270]
178
+
179
+ On 16 December 1944, Germany made a last attempt on the Western Front by using most of its remaining reserves to launch a massive counter-offensive in the Ardennes and along the French–German border, aiming to split the Western Allies, encircle large portions of Western Allied troops and capture their primary supply port at Antwerp in order to prompt a political settlement.[271] By January, the offensive had been repulsed with no strategic objectives fulfilled.[271] In Italy, the Western Allies remained stalemated at the German defensive line. In mid-January 1945, the Soviets and Poles attacked in Poland, pushing from the Vistula to the Oder river in Germany, and overran East Prussia.[272] On 4 February Soviet, British, and US leaders met for the Yalta Conference. They agreed on the occupation of post-war Germany, and on when the Soviet Union would join the war against Japan.[273]
180
+
181
+ In February, the Soviets entered Silesia and Pomerania, while the Western Allies entered western Germany and closed to the Rhine river. By March, the Western Allies had crossed the Rhine north and south of the Ruhr, encircling the German Army Group B.[274] In early March, in an attempt to protect its last oil reserves in Hungary and to retake Budapest, Germany launched its last major offensive against Soviet troops near Lake Balaton. Within two weeks, the offensive had been repulsed, and the Soviets advanced to Vienna and captured the city. In early April, Soviet troops captured Königsberg, while the Western Allies finally pushed forward in Italy and swept across western Germany, capturing Hamburg and Nuremberg. American and Soviet forces met at the Elbe river on 25 April, leaving several unoccupied pockets in southern Germany and around Berlin.
182
+
183
+ Soviet and Polish forces stormed and captured Berlin in late April. In Italy, German forces surrendered on 29 April. On 30 April, the Reichstag was captured, signalling the military defeat of Nazi Germany,[275] and the Berlin garrison surrendered on 2 May.
184
+
185
+ Several changes in leadership occurred during this period. On 12 April, President Roosevelt died and was succeeded by Harry S. Truman. Benito Mussolini was killed by Italian partisans on 28 April.[276] Two days later, Hitler committed suicide in besieged Berlin, and he was succeeded by Grand Admiral Karl Dönitz.[277]
186
+ Germany's total and unconditional surrender in Europe was signed on 7 and 8 May, to take effect by the end of 8 May.[278] German Army Group Centre resisted in Prague until 11 May.[279]
187
+
188
+ In the Pacific theatre, American forces accompanied by the forces of the Philippine Commonwealth advanced in the Philippines, clearing Leyte by the end of April 1945. They landed on Luzon in January 1945 and recaptured Manila in March. Fighting continued on Luzon, Mindanao, and other islands of the Philippines until the end of the war.[280] Meanwhile, the United States Army Air Forces launched a massive firebombing campaign of strategic cities in Japan in an effort to destroy Japanese war industry and civilian morale. A devastating bombing raid on Tokyo of 9–10 March was the deadliest conventional bombing raid in history.[281]
189
+
190
+ In May 1945, Australian troops landed in Borneo, over-running the oilfields there. British, American, and Chinese forces defeated the Japanese in northern Burma in March, and the British pushed on to reach Rangoon by 3 May.[282] Chinese forces started a counterattack in the Battle of West Hunan that occurred between 6 April and 7 June 1945. American naval and amphibious forces also moved towards Japan, taking Iwo Jima by March, and Okinawa by the end of June.[283] At the same time, American submarines cut off Japanese imports, drastically reducing Japan's ability to supply its overseas forces.[284]
191
+
192
+ On 11 July, Allied leaders met in Potsdam, Germany. They confirmed earlier agreements about Germany,[285] and the American, British and Chinese governments reiterated the demand for unconditional surrender of Japan, specifically stating that "the alternative for Japan is prompt and utter destruction".[286] During this conference, the United Kingdom held its general election, and Clement Attlee replaced Churchill as Prime Minister.[287]
193
+
194
+ The call for unconditional surrender was rejected by the Japanese government, which believed it would be capable of negotiating for more favourable surrender terms.[288] In early August, the United States dropped atomic bombs on the Japanese cities of Hiroshima and Nagasaki. Between the two bombings, the Soviets, pursuant to the Yalta agreement, invaded Japanese-held Manchuria and quickly defeated the Kwantung Army, which was the largest Japanese fighting force.[289] These two events persuaded previously adamant Imperial Army leaders to accept surrender terms.[290] The Red Army also captured the southern part of Sakhalin Island and the Kuril Islands. On 15 August 1945, Japan surrendered, with the surrender documents finally signed at Tokyo Bay on the deck of the American battleship USS Missouri on 2 September 1945, ending the war.[291]
195
+
196
+ The Allies established occupation administrations in Austria and Germany. The former became a neutral state, non-aligned with any political bloc. The latter was divided into western and eastern occupation zones controlled by the Western Allies and the Soviet Union. A denazification programme in Germany led to the prosecution of Nazi war criminals in the Nuremberg trials and the removal of ex-Nazis from power, although this policy moved towards amnesty and re-integration of ex-Nazis into West German society.[292]
197
+
198
+ Germany lost a quarter of its pre-war (1937) territory. Among the eastern territories, Silesia, Neumark and most of Pomerania were taken over by Poland,[293] and East Prussia was divided between Poland and the Soviet Union, followed by the expulsion to Germany of the nine million Germans from these provinces,[294][295] as well as three million Germans from the Sudetenland in Czechoslovakia. By the 1950s, one-fifth of West Germans were refugees from the east. The Soviet Union also took over the Polish provinces east of the Curzon line,[296] from which 2 million Poles were expelled;[295][297] north-east Romania,[298][299] parts of eastern Finland,[300] and the three Baltic states were incorporated into the Soviet Union.[301][302]
199
+
200
+ In an effort to maintain world peace,[303] the Allies formed the United Nations, which officially came into existence on 24 October 1945,[304] and adopted the Universal Declaration of Human Rights in 1948 as a common standard for all member nations.[305] The great powers that were the victors of the war—France, China, the United Kingdom, the Soviet Union and the United States—became the permanent members of the UN's Security Council.[306] The five permanent members remain so to the present, although there have been two seat changes, between the Republic of China and the People's Republic of China in 1971, and between the Soviet Union and its successor state, the Russian Federation, following the dissolution of the Soviet Union in 1991. The alliance between the Western Allies and the Soviet Union had begun to deteriorate even before the war was over.[307]
201
+
202
+ Germany had been de facto divided, and two independent states, the Federal Republic of Germany (West Germany) and the German Democratic Republic (East Germany),[308] were created within the borders of the Allied and Soviet occupation zones. The rest of Europe was also divided into Western and Soviet spheres of influence.[309] Most eastern and central European countries fell into the Soviet sphere, which led to the establishment of Communist-led regimes, with full or partial support of the Soviet occupation authorities. As a result, East Germany,[310] Poland, Hungary, Romania, Czechoslovakia, and Albania[311] became Soviet satellite states. Communist Yugoslavia conducted a fully independent policy, causing tension with the Soviet Union.[312]
203
+
204
+ Post-war division of the world was formalised by two international military alliances, the United States-led NATO and the Soviet-led Warsaw Pact.[313] The long period of political tensions and military competition between them, the Cold War, would be accompanied by an unprecedented arms race and proxy wars.[314]
205
+
206
+ In Asia, the United States led the occupation of Japan and administered Japan's former islands in the Western Pacific, while the Soviets annexed South Sakhalin and the Kuril Islands.[315] Korea, formerly under Japanese rule, was divided and occupied by the Soviet Union in the North and the United States in the South between 1945 and 1948. Separate republics emerged on both sides of the 38th parallel in 1948, each claiming to be the legitimate government for all of Korea, which led ultimately to the Korean War.[316]
207
+
208
+ In China, nationalist and communist forces resumed the civil war in June 1946. Communist forces were victorious and established the People's Republic of China on the mainland, while nationalist forces retreated to Taiwan in 1949.[317] In the Middle East, the Arab rejection of the United Nations Partition Plan for Palestine and the creation of Israel marked the escalation of the Arab–Israeli conflict. While European powers attempted to retain some or all of their colonial empires, their losses of prestige and resources during the war rendered this unsuccessful, leading to decolonisation.[318][319]
209
+
210
+ The global economy suffered heavily from the war, although participating nations were affected differently. The United States emerged much richer than any other nation, leading to a baby boom, and by 1950 its gross domestic product per person was much higher than that of any of the other powers, and it dominated the world economy.[320] The UK and US pursued a policy of industrial disarmament in Western Germany in the years 1945–1948.[321] Because of international trade interdependencies this led to European economic stagnation and delayed European recovery for several years.[322][323]
211
+
212
+ Recovery began with the mid-1948 currency reform in Western Germany, and was sped up by the liberalisation of European economic policy that the Marshall Plan (1948–1951) both directly and indirectly caused.[324][325] The post-1948 West German recovery has been called the German economic miracle.[326] Italy also experienced an economic boom[327] and the French economy rebounded.[328] By contrast, the United Kingdom was in a state of economic ruin,[329] and although receiving a quarter of the total Marshall Plan assistance, more than any other European country,[330] it continued in relative economic decline for decades.[331]
213
+
214
+ The Soviet Union, despite enormous human and material losses, also experienced rapid increase in production in the immediate post-war era.[332] Japan recovered much later.[333] China returned to its pre-war industrial production by 1952.[334]
215
+
216
+ Estimates for the total number of casualties in the war vary, because many deaths went unrecorded.[335] Most suggest that some 60 million people died in the war, including about 20 million military personnel and 40 million civilians.[336][337][338]
217
+ Many of the civilians died because of deliberate genocide, massacres, mass bombings, disease, and starvation.
218
+
219
+ The Soviet Union alone lost around 27 million people during the war,[339] including 8.7 million military and 19 million civilian deaths.[340] A quarter of the people in the Soviet Union were wounded or killed.[341] Germany sustained 5.3 million military losses, mostly on the Eastern Front and during the final battles in Germany.[342]
220
+
221
+ An estimated 11[343] to 17 million[344] civilians died as a direct or indirect result of Nazi racist policies, including the mass killing of around 6 million Jews, along with Roma, homosexuals, at least 1.9 million ethnic Poles[345][346] and millions of other Slavs (including Russians, Ukrainians and Belarusians), and other ethnic and minority groups.[347][344] Between 1941 and 1945, more than 200,000 ethnic Serbs, along with gypsies and Jews, were persecuted and murdered by the Axis-aligned Croatian Ustaše in Yugoslavia.[348] Also, more than 100,000 Poles were massacred by the Ukrainian Insurgent Army in the Volhynia massacres between 1943 and 1945.[349] At the same time, about 10,000–15,000 Ukrainians were killed by the Polish Home Army and other Polish units in reprisal attacks.[350]
222
+
223
+ In Asia and the Pacific, between 3 million and more than 10 million civilians, mostly Chinese (estimated at 7.5 million[351]), were killed by the Japanese occupation forces.[352] The most infamous Japanese atrocity was the Nanking Massacre, in which fifty to three hundred thousand Chinese civilians were raped and murdered.[353] Mitsuyoshi Himeta reported that 2.7 million casualties occurred during the Sankō Sakusen. General Yasuji Okamura implemented the policy in Hebei and Shandong.[354]
224
+
225
+ Axis forces employed biological and chemical weapons. The Imperial Japanese Army used a variety of such weapons during its invasion and occupation of China (see Unit 731)[355][356] and in early conflicts against the Soviets.[357] Both the Germans and the Japanese tested such weapons against civilians,[358] and sometimes on prisoners of war.[359]
+
+ The Soviet Union was responsible for the Katyn massacre of 22,000 Polish officers,[360] and the imprisonment or execution of thousands of political prisoners by the NKVD, along with mass civilian deportations to Siberia, in the Baltic states and eastern Poland annexed by the Red Army.[361]
+
+ The mass bombing of cities in Europe and Asia has often been called a war crime, although no positive or specific customary international humanitarian law with respect to aerial warfare existed before or during World War II.[362] The USAAF firebombed a total of 67 Japanese cities, killing 393,000 civilians and destroying 65% of built-up areas.[363]
+
+ Nazi Germany was responsible for the Holocaust (which killed approximately 6 million Jews) as well as for killing 2.7 million ethnic Poles[364] and 4 million others who were deemed "unworthy of life" (including the disabled and mentally ill, Soviet prisoners of war, Romani, homosexuals, Freemasons, and Jehovah's Witnesses) as part of a programme of deliberate extermination, in effect becoming a "genocidal state".[365] Soviet POWs were kept in especially unbearable conditions, and 3.6 million of the 5.7 million Soviet POWs died in Nazi camps during the war.[366][367] In addition to concentration camps, death camps were created in Nazi Germany to exterminate people on an industrial scale. Nazi Germany extensively used forced labourers; about 12 million Europeans from German-occupied countries were abducted and used as a slave work force in German industry, agriculture and the war economy.[368]
+
+ The Soviet Gulag became a de facto system of deadly camps during 1942–43, when wartime privation and hunger caused numerous deaths of inmates,[369] including foreign citizens of Poland and other countries occupied in 1939–40 by the Soviet Union, as well as Axis POWs.[370] By the end of the war, most Soviet POWs liberated from Nazi camps and many repatriated civilians were detained in special filtration camps where they were subjected to NKVD evaluation, and 226,127 were sent to the Gulag as real or perceived Nazi collaborators.[371]
+
+ Japanese prisoner-of-war camps, many of which were used as labour camps, also had high death rates. The International Military Tribunal for the Far East found the death rate of Western prisoners was 27 per cent (for American POWs, 37 per cent),[372] seven times that of POWs under the Germans and Italians.[373] While 37,583 prisoners from the UK, 28,500 from the Netherlands, and 14,473 from the United States were released after the surrender of Japan, the number of Chinese released was only 56.[374]
+
+ At least five million Chinese civilians from northern China and Manchukuo were enslaved between 1935 and 1941 by the East Asia Development Board, or Kōain, for work in mines and war industries. After 1942, the number reached 10 million.[375] In Java, between 4 and 10 million rōmusha (Japanese: "manual labourers") were forced to work by the Japanese military. About 270,000 of these Javanese labourers were sent to other Japanese-held areas in South East Asia, and only 52,000 were repatriated to Java.[376]
+
+ In Europe, occupation came under two forms. In Western, Northern, and Central Europe (France, Norway, Denmark, the Low Countries, and the annexed portions of Czechoslovakia) Germany established economic policies through which it collected roughly 69.5 billion Reichsmarks (27.8 billion US dollars) by the end of the war; this figure does not include the sizeable plunder of industrial products, military equipment, raw materials and other goods.[377] Thus, the income from occupied nations was over 40 per cent of the income Germany collected from taxation, a figure which increased to nearly 40 per cent of total German income as the war went on.[378]
+
+ In the East, the intended gains of Lebensraum were never attained as fluctuating front-lines and Soviet scorched earth policies denied resources to the German invaders.[379] Unlike in the West, the Nazi racial policy encouraged extreme brutality against what it considered to be the "inferior people" of Slavic descent; most German advances were thus followed by mass executions.[380] Although resistance groups formed in most occupied territories, they did not significantly hamper German operations in either the East[381] or the West[382] until late 1943.
+
+ In Asia, Japan termed nations under its occupation as being part of the Greater East Asia Co-Prosperity Sphere, essentially a Japanese hegemony which it claimed was for purposes of liberating colonised peoples.[383] Although Japanese forces were sometimes welcomed as liberators from European domination, Japanese war crimes frequently turned local public opinion against them.[384] During Japan's initial conquest it captured 4,000,000 barrels (640,000 m³) of oil (~5.5×10^5 t) left behind by retreating Allied forces, and by 1943 was able to get production in the Dutch East Indies up to 50 million barrels (~6.8×10^6 t), 76 per cent of its 1940 output rate.[384]
+
+ In Europe, before the outbreak of the war, the Allies had significant advantages in both population and economics. In 1938, the Western Allies (United Kingdom, France, Poland and the British Dominions) had a 30 per cent larger population and a 30 per cent higher gross domestic product than the European Axis powers (Germany and Italy); if colonies are included, the Allies had more than a 5:1 advantage in population and a nearly 2:1 advantage in GDP.[385] In Asia at the same time, China had roughly six times the population of Japan but only an 89 per cent higher GDP; this is reduced to three times the population and only a 38 per cent higher GDP if Japanese colonies are included.[385]
+
+ The United States produced about two-thirds of all the munitions used by the Allies in WWII, including warships, transports, warplanes, artillery, tanks, trucks, and ammunition.[386]
+ Though the Allies' economic and population advantages were largely mitigated during the initial rapid blitzkrieg attacks of Germany and Japan, they became the decisive factor by 1942, after the United States and Soviet Union joined the Allies, as the war largely settled into one of attrition.[387] While the Allies' ability to out-produce the Axis is often attributed to the Allies having more access to natural resources, other factors, such as Germany and Japan's reluctance to employ women in the labour force,[388] Allied strategic bombing,[389] and Germany's late shift to a war economy[390] contributed significantly. Additionally, neither Germany nor Japan planned to fight a protracted war, and had not equipped themselves to do so.[391] To improve their production, Germany and Japan used millions of slave labourers;[392] Germany used about 12 million people, mostly from Eastern Europe,[368] while Japan used more than 18 million people in Far East Asia.[375][376]
+
+ Aircraft were used for reconnaissance, as fighters, bombers, and ground-support, and each role was advanced considerably. Innovations included airlift (the capability to quickly move limited high-priority supplies, equipment, and personnel);[393] and strategic bombing (the bombing of enemy industrial and population centres to destroy the enemy's ability to wage war).[394] Anti-aircraft weaponry also advanced, including defences such as radar and surface-to-air artillery. The use of the jet aircraft was pioneered and, though late introduction meant it had little impact, it led to jets becoming standard in air forces worldwide.[395] Although guided missiles were being developed, they were not advanced enough to reliably target aircraft until some years after the war.
+
+ Advances were made in nearly every aspect of naval warfare, most notably with aircraft carriers and submarines. Although aeronautical warfare had relatively little success at the start of the war, actions at Taranto, Pearl Harbor, and the Coral Sea established the carrier as the dominant capital ship in place of the battleship.[396][397][398] In the Atlantic, escort carriers proved to be a vital part of Allied convoys, increasing the effective protection radius and helping to close the Mid-Atlantic gap.[399] Carriers were also more economical than battleships because of the relatively low cost of aircraft[400] and because they did not need to be as heavily armoured.[401] Submarines, which had proved to be an effective weapon during the First World War,[402] were anticipated by all sides to be important in the second. The British focused development on anti-submarine weaponry and tactics, such as sonar and convoys, while Germany focused on improving its offensive capability, with designs such as the Type VII submarine and wolfpack tactics.[403] Gradually, improving Allied technologies such as the Leigh light, hedgehog, squid, and homing torpedoes proved victorious over the German submarines.
+
+ Land warfare changed from the static front lines of trench warfare of World War I, which had relied on improved artillery that outmatched the speed of both infantry and cavalry, to increased mobility and combined arms. The tank, which had been used predominantly for infantry support in the First World War, had evolved into the primary weapon.[404] In the late 1930s, tank design was considerably more advanced than it had been during World War I,[405] and advances continued throughout the war with increases in speed, armour and firepower. At the start of the war, most commanders thought enemy tanks should be met by tanks with superior specifications.[406] This idea was challenged by the poor performance of the relatively light early tank guns against armour, and by German doctrine of avoiding tank-versus-tank combat. This, along with Germany's use of combined arms, was among the key elements of their highly successful blitzkrieg tactics across Poland and France.[404] Many means of destroying tanks, including indirect artillery, anti-tank guns (both towed and self-propelled), mines, short-ranged infantry antitank weapons, and other tanks were used.[406] Even with large-scale mechanisation, infantry remained the backbone of all forces,[407] and throughout the war, most infantry were equipped similarly to World War I.[408] The portable machine gun spread, a notable example being the German MG34, as did various submachine guns, which were suited to close combat in urban and jungle settings.[408] The assault rifle, a late-war development incorporating many features of the rifle and submachine gun, became the standard postwar infantry weapon for most armed forces.[409]
+
+ Most major belligerents attempted to solve the problems of complexity and security involved in using large codebooks for cryptography by designing ciphering machines, the best known being the German Enigma machine.[410] Development of SIGINT (signals intelligence) and cryptanalysis enabled the countering process of decryption. Notable examples were the Allied decryption of Japanese naval codes[411] and British Ultra, a pioneering method for decoding Enigma that benefited from information given to the United Kingdom by the Polish Cipher Bureau, which had been decoding early versions of Enigma before the war.[412] Another aspect of military intelligence was the use of deception, which the Allies used to great effect, such as in operations Mincemeat and Bodyguard.[411][413]
+
+ Other technological and engineering feats achieved during, or as a result of, the war include the world's first programmable computers (Z3, Colossus, and ENIAC), guided missiles and modern rockets, the Manhattan Project's development of nuclear weapons, operations research, and the development of artificial harbours and oil pipelines under the English Channel. Penicillin was first mass-produced and used during the war (see Stabilization and mass production of penicillin).[414]
+
en/5322.html.txt ADDED
@@ -0,0 +1,259 @@
+ World War II (WWII or WW2), also known as the Second World War, was a global war that lasted from 1939 to 1945. It involved the vast majority of the world's countries—including all the great powers—forming two opposing military alliances: the Allies and the Axis. In a state of total war, directly involving more than 100 million people from more than 30 countries, the major participants threw their entire economic, industrial, and scientific capabilities behind the war effort, blurring the distinction between civilian and military resources. World War II was the deadliest conflict in human history, marked by 70 to 85 million fatalities. Tens of millions of people died due to genocides (including the Holocaust), premeditated death from starvation, massacres, and disease. Aircraft played a major role in the conflict, including in the strategic bombing of population centres and the only uses of nuclear weapons in war.
+
+ World War II is generally considered to have begun on 1 September 1939, with the invasion of Poland by Germany and subsequent declarations of war on Germany by France and the United Kingdom. From late 1939 to early 1941, in a series of campaigns and treaties, Germany conquered or controlled much of continental Europe, and formed the Axis alliance with Italy and Japan. Under the Molotov–Ribbentrop Pact of August 1939, Germany and the Soviet Union partitioned and annexed territories of their European neighbours: Poland, Finland, Romania and the Baltic states. Following the onset of campaigns in North Africa and East Africa, and the Fall of France in mid-1940, the war continued primarily between the European Axis powers and the British Empire, with war in the Balkans, the aerial Battle of Britain, the Blitz, and the Battle of the Atlantic. On 22 June 1941, Germany led the European Axis powers in an invasion of the Soviet Union, opening the largest land theatre of war in history and trapping the Axis, crucially the German Wehrmacht, in a war of attrition.
+
+ Japan, which aimed to dominate Asia and the Pacific, was at war with the Republic of China by 1937. In December 1941, Japan launched a surprise attack on the United States as well as European colonies in East Asia and the Pacific. Following an immediate US declaration of war against Japan, supported by one from the UK, the European Axis powers declared war on the United States in solidarity with their ally. Japan soon captured much of the Western Pacific, but its advances were halted in 1942 after Japan lost the critical Battle of Midway; later, Germany and Italy were defeated in North Africa and at Stalingrad in the Soviet Union. Key setbacks in 1943—which included a series of German defeats on the Eastern Front, the Allied invasions of Sicily and Italy, and Allied offensives in the Pacific—cost the Axis its initiative and forced it into strategic retreat on all fronts. In 1944, the Western Allies invaded German-occupied France, while the Soviet Union regained its territorial losses and turned towards Germany and its allies. During 1944 and 1945, the Japanese suffered reversals in mainland Asia, while the Allies crippled the Japanese Navy and captured key Western Pacific islands.
+
+ The war in Europe concluded with an invasion of Germany by the Western Allies and the Soviet Union, culminating in the capture of Berlin by Soviet troops, the suicide of Adolf Hitler and the German unconditional surrender on 8 May 1945. Following the Potsdam Declaration by the Allies on 26 July 1945 and the refusal of Japan to surrender on its terms, the United States dropped atomic bombs on the Japanese cities of Hiroshima and Nagasaki on 6 and 9 August, respectively. Faced with an imminent invasion of the Japanese archipelago, the possibility of additional atomic bombings, and the Soviet entry into the war against Japan and its invasion of Manchuria on 9 August, Japan announced its intention to surrender on 15 August 1945, cementing total victory in Asia for the Allies. In the wake of the war, Germany and Japan were occupied and war crimes tribunals were conducted against German and Japanese leaders.
+
+ World War II changed the political alignment and social structure of the globe. The United Nations (UN) was established to foster international co-operation and prevent future conflicts, and the victorious great powers—China, France, the Soviet Union, the United Kingdom, and the United States—became the permanent members of its Security Council. The Soviet Union and the United States emerged as rival superpowers, setting the stage for the nearly half-century-long Cold War. In the wake of European devastation, the influence of its great powers waned, triggering the decolonisation of Africa and Asia. Most countries whose industries had been damaged moved towards economic recovery and expansion. Political integration, especially in Europe, began as an effort to forestall future hostilities, end pre-war enmities and forge a sense of common identity.
+
+ The start of the war in Europe is generally held to be 1 September 1939,[1][2] beginning with the German invasion of Poland; the United Kingdom and France declared war on Germany two days later. The dates for the beginning of war in the Pacific include the start of the Second Sino-Japanese War on 7 July 1937,[3][4] or even the Japanese invasion of Manchuria on 19 September 1931.[5][6][7]
+
+ Others follow the British historian A. J. P. Taylor, who held that the Sino-Japanese War and war in Europe and its colonies occurred simultaneously, and the two wars merged in 1941. This article uses the conventional dating. Other starting dates sometimes used for World War II include the Italian invasion of Abyssinia on 3 October 1935.[8] The British historian Antony Beevor views the beginning of World War II as the Battles of Khalkhin Gol fought between Japan and the forces of Mongolia and the Soviet Union from May to September 1939.[9]
+
+ The exact date of the war's end is also not universally agreed upon. It was generally accepted at the time that the war ended with the armistice of 14 August 1945 (V-J Day), rather than with the formal surrender of Japan on 2 September 1945, which officially ended the war in Asia. A peace treaty with Japan was signed in 1951.[10] A treaty regarding Germany's future allowed the reunification of East and West Germany to take place in 1990 and resolved most post-World War II issues.[11] No formal peace treaty between Japan and the Soviet Union was ever signed.[12]
+
+ World War I had radically altered the political European map, with the defeat of the Central Powers—including Austria-Hungary, Germany, Bulgaria and the Ottoman Empire—and the 1917 Bolshevik seizure of power in Russia, which eventually led to the founding of the Soviet Union. Meanwhile, the victorious Allies of World War I, such as France, Belgium, Italy, Romania, and Greece, gained territory, and new nation-states were created out of the collapse of Austria-Hungary and the Ottoman and Russian Empires.
+
+ To prevent a future world war, the League of Nations was created during the 1919 Paris Peace Conference. The organisation's primary goals were to prevent armed conflict through collective security, military and naval disarmament, and settling international disputes through peaceful negotiations and arbitration.
+
+ Despite strong pacifist sentiment after World War I,[13] its aftermath still caused irredentist and revanchist nationalism in several European states. These sentiments were especially marked in Germany because of the significant territorial, colonial, and financial losses incurred by the Treaty of Versailles. Under the treaty, Germany lost around 13 percent of its home territory and all its overseas possessions, while German annexation of other states was prohibited, reparations were imposed, and limits were placed on the size and capability of the country's armed forces.[14]
+
+ The German Empire was dissolved in the German Revolution of 1918–1919, and a democratic government, later known as the Weimar Republic, was created. The interwar period saw strife between supporters of the new republic and hardline opponents on both the right and left. Italy, as an Entente ally, had made some post-war territorial gains; however, Italian nationalists were angered that the promises made by the United Kingdom and France to secure Italian entrance into the war were not fulfilled in the peace settlement. From 1922 to 1925, the Fascist movement led by Benito Mussolini seized power in Italy with a nationalist, totalitarian, and class collaborationist agenda that abolished representative democracy, repressed socialist, left-wing and liberal forces, and pursued an aggressive expansionist foreign policy aimed at making Italy a world power, promising the creation of a "New Roman Empire".[15]
+
+ Adolf Hitler, after an unsuccessful attempt to overthrow the German government in 1923, eventually became the Chancellor of Germany in 1933. He abolished democracy, espousing a radical, racially motivated revision of the world order, and soon began a massive rearmament campaign.[16] Meanwhile, France, to secure its alliance, allowed Italy a free hand in Ethiopia, which Italy desired as a colonial possession. The situation was aggravated in early 1935 when the Territory of the Saar Basin was legally reunited with Germany and Hitler repudiated the Treaty of Versailles, accelerated his rearmament programme, and introduced conscription.[17]
+
+ The United Kingdom, France and Italy formed the Stresa Front in April 1935 in order to contain Germany, a key step towards military globalisation; however, that June, the United Kingdom made an independent naval agreement with Germany, easing prior restrictions. The Soviet Union, concerned by Germany's goals of capturing vast areas of Eastern Europe, drafted a treaty of mutual assistance with France. Before taking effect, though, the Franco-Soviet pact was required to go through the bureaucracy of the League of Nations, which rendered it essentially toothless.[18] The United States, concerned with events in Europe and Asia, passed the Neutrality Act in August of the same year.[19]
+
+ Hitler defied the Versailles and Locarno treaties by remilitarising the Rhineland in March 1936, encountering little opposition due to appeasement.[20] In October 1936, Germany and Italy formed the Rome–Berlin Axis. A month later, Germany and Japan signed the Anti-Comintern Pact, which Italy joined the following year.[21]
+
+ The Kuomintang (KMT) party in China launched a unification campaign against regional warlords and nominally unified China in the mid-1920s, but was soon embroiled in a civil war against its former Chinese Communist Party allies[22] and new regional warlords. In 1931, an increasingly militaristic Empire of Japan, which had long sought influence in China[23] as the first step of what its government saw as the country's right to rule Asia, staged the Mukden Incident as a pretext to invade Manchuria and establish the puppet state of Manchukuo.[24]
+
+ China appealed to the League of Nations to stop the Japanese invasion of Manchuria. Japan withdrew from the League of Nations after being condemned for its incursion into Manchuria. The two nations then fought several battles, in Shanghai, Rehe and Hebei, until the Tanggu Truce was signed in 1933. Thereafter, Chinese volunteer forces continued the resistance to Japanese aggression in Manchuria, and Chahar and Suiyuan.[25] After the 1936 Xi'an Incident, the Kuomintang and communist forces agreed on a ceasefire to present a united front to oppose Japan.[26]
+
+ The Second Italo–Ethiopian War was a brief colonial war that began in October 1935 and ended in May 1936. The war began with the invasion of the Ethiopian Empire (also known as Abyssinia) by the armed forces of the Kingdom of Italy (Regno d'Italia), which was launched from Italian Somaliland and Eritrea.[27] The war resulted in the military occupation of Ethiopia and its annexation into the newly created colony of Italian East Africa (Africa Orientale Italiana, or AOI); in addition it exposed the weakness of the League of Nations as a force to preserve peace. Both Italy and Ethiopia were member nations, but the League did little when the former clearly violated Article X of the League's Covenant.[28] The United Kingdom and France supported imposing sanctions on Italy for the invasion, but they were not fully enforced and failed to end the Italian invasion.[29] Italy subsequently dropped its objections to Germany's goal of absorbing Austria.[30]
+
+ When civil war broke out in Spain, Hitler and Mussolini lent military support to the Nationalist rebels, led by General Francisco Franco. Italy supported the Nationalists to a greater extent than the Nazis did: altogether Mussolini sent to Spain more than 70,000 ground troops and 6,000 aviation personnel, as well as about 720 aircraft.[31] The Soviet Union supported the existing government, the Spanish Republic. More than 30,000 foreign volunteers, known as the International Brigades, also fought against the Nationalists. Both Germany and the Soviet Union used this proxy war as an opportunity to test in combat their most advanced weapons and tactics. The Nationalists won the civil war in April 1939; Franco, now dictator, remained officially neutral during World War II but generally favoured the Axis.[32] His greatest collaboration with Germany was the sending of volunteers to fight on the Eastern Front.[33]
+
+ In July 1937, Japan captured the former Chinese imperial capital of Peking after instigating the Marco Polo Bridge Incident, which culminated in the Japanese campaign to invade all of China.[34] The Soviets quickly signed a non-aggression pact with China to lend materiel support, effectively ending China's prior co-operation with Germany. From September to November, the Japanese attacked Taiyuan, engaged the Kuomintang Army around Xinkou,[35] and fought Communist forces in Pingxingguan.[36][37] Generalissimo Chiang Kai-shek deployed his best army to defend Shanghai, but, after three months of fighting, Shanghai fell. The Japanese continued to push the Chinese forces back, capturing the capital Nanking in December 1937. After the fall of Nanking, tens of thousands if not hundreds of thousands of Chinese civilians and disarmed combatants were murdered by the Japanese.[38][39]
+
+ In March 1938, Nationalist Chinese forces won their first major victory at Taierzhuang, but then the city of Xuzhou was taken by the Japanese in May.[40] In June 1938, Chinese forces stalled the Japanese advance by flooding the Yellow River; this manoeuvre bought time for the Chinese to prepare their defences at Wuhan, but the city was taken by October.[41] Japanese military victories did not bring about the collapse of Chinese resistance that Japan had hoped to achieve; instead, the Chinese government relocated inland to Chongqing and continued the war.[42][43]
+
+ In the mid-to-late 1930s, Japanese forces in Manchukuo had sporadic border clashes with the Soviet Union and Mongolia. The Japanese doctrine of Hokushin-ron, which emphasised Japan's expansion northward, was favoured by the Imperial Army during this time. With the Japanese defeat at Khalkhin Gol in 1939, the ongoing Second Sino-Japanese War[44] and ally Nazi Germany pursuing neutrality with the Soviets, this policy would prove difficult to maintain. Japan and the Soviet Union eventually signed a Neutrality Pact in April 1941, and Japan adopted the doctrine of Nanshin-ron, promoted by the Navy, which took its focus southward, eventually leading to its war with the United States and the Western Allies.[45][46]
+
+ In Europe, Germany and Italy were becoming more aggressive. In March 1938, Germany annexed Austria, again provoking little response from other European powers.[47] Encouraged, Hitler began pressing German claims on the Sudetenland, an area of Czechoslovakia with a predominantly ethnic German population. Soon the United Kingdom and France followed the appeasement policy of British Prime Minister Neville Chamberlain and conceded this territory to Germany in the Munich Agreement, which was made against the wishes of the Czechoslovak government, in exchange for a promise of no further territorial demands.[48] Soon afterwards, Germany and Italy forced Czechoslovakia to cede additional territory to Hungary, and Poland annexed Czechoslovakia's Zaolzie region.[49]
+
+ Although all of Germany's stated demands had been satisfied by the agreement, privately Hitler was furious that British interference had prevented him from seizing all of Czechoslovakia in one operation. In subsequent speeches Hitler attacked British and Jewish "war-mongers" and in January 1939 secretly ordered a major build-up of the German navy to challenge British naval supremacy. In March 1939, Germany invaded the remainder of Czechoslovakia and subsequently split it into the German Protectorate of Bohemia and Moravia and a pro-German client state, the Slovak Republic.[50] Hitler also delivered an ultimatum to Lithuania on 20 March 1939, forcing the concession of the Klaipėda Region, formerly the German Memelland.[51]
+
+ Greatly alarmed and with Hitler making further demands on the Free City of Danzig, the United Kingdom and France guaranteed their support for Polish independence; when Italy conquered Albania in April 1939, the same guarantee was extended to Romania and Greece.[52] Shortly after the Franco-British pledge to Poland, Germany and Italy formalised their own alliance with the Pact of Steel.[53] Hitler accused the United Kingdom and Poland of trying to "encircle" Germany and renounced the Anglo-German Naval Agreement and the German–Polish Non-Aggression Pact.[54]
+
+ The situation reached a general crisis in late August as German troops continued to mobilise against the Polish border. On 23 August, when tripartite negotiations about a military alliance between France, the United Kingdom and Soviet Union stalled,[55] the Soviet Union signed a non-aggression pact with Germany.[56] This pact had a secret protocol that defined German and Soviet "spheres of influence" (western Poland and Lithuania for Germany; eastern Poland, Finland, Estonia, Latvia and Bessarabia for the Soviet Union), and raised the question of continuing Polish independence.[57] The pact neutralised the possibility of Soviet opposition to a campaign against Poland and assured that Germany would not have to face the prospect of a two-front war, as it had in World War I. Immediately after that, Hitler ordered the attack to proceed on 26 August, but upon hearing that the United Kingdom had concluded a formal mutual assistance pact with Poland, and that Italy would maintain neutrality, he decided to delay it.[58]
+
+ In response to British requests for direct negotiations to avoid war, Germany made demands on Poland, which only served as a pretext to worsen relations.[59] On 29 August, Hitler demanded that a Polish plenipotentiary immediately travel to Berlin to negotiate the handover of Danzig, and to allow a plebiscite in the Polish Corridor in which the German minority would vote on secession.[59] The Poles refused to comply with the German demands, and on the night of 30–31 August in a stormy meeting with the British ambassador Neville Henderson, Ribbentrop declared that Germany considered its claims rejected.[60]
+
+ On 1 September 1939, Germany invaded Poland after having staged several false flag border incidents as a pretext to initiate the invasion.[61] The first German attack of the war came against the Polish defenses at Westerplatte.[62] The United Kingdom responded with an ultimatum to Germany to cease military operations, and on 3 September, after the ultimatum was ignored, France and Britain declared war on Germany, followed by Australia, New Zealand, South Africa and Canada. The alliance provided no direct military support to Poland, outside of a cautious French probe into the Saarland.[63] The Western Allies also began a naval blockade of Germany, which aimed to damage the country's economy and the war effort.[64] Germany responded by ordering U-boat warfare against Allied merchant and warships, which would later escalate into the Battle of the Atlantic.[65]
+
+ On 8 September, German troops reached the suburbs of Warsaw. The Polish counter-offensive to the west halted the German advance for several days, but it was outflanked and encircled by the Wehrmacht. Remnants of the Polish army broke through to besieged Warsaw. On 17 September 1939, after signing a cease-fire with Japan, the Soviets invaded Eastern Poland[66] under a pretext that the Polish state had ostensibly ceased to exist.[67] On 27 September, the Warsaw garrison surrendered to the Germans, and the last large operational unit of the Polish Army surrendered on 6 October. Despite the military defeat, Poland never surrendered; instead it formed the Polish government-in-exile and a clandestine state apparatus remained in occupied Poland.[68] A significant part of Polish military personnel evacuated to Romania and the Baltic countries; many of them would fight against the Axis in other theatres of the war.[69]
+
+ Germany annexed the western and occupied the central part of Poland, and the Soviet Union annexed its eastern part; small shares of Polish territory were transferred to Lithuania and Slovakia. On 6 October, Hitler made a public peace overture to the United Kingdom and France but said that the future of Poland was to be determined exclusively by Germany and the Soviet Union. The proposal was rejected,[60] and Hitler ordered an immediate offensive against France,[70] which would be postponed until the spring of 1940 due to bad weather.[71][72][73]
+
+ The Soviet Union forced the Baltic countries—Estonia, Latvia and Lithuania, the states that were in the Soviet "sphere of influence" under the Molotov-Ribbentrop pact—to sign "mutual assistance pacts" that stipulated stationing Soviet troops in these countries. Soon after, significant Soviet military contingents were moved there.[74][75][76] Finland refused to sign a similar pact and rejected ceding part of its territory to the Soviet Union. The Soviet Union invaded Finland in November 1939,[77] and the Soviet Union was expelled from the League of Nations.[78] Despite overwhelming numerical superiority, Soviet military success was modest, and the Finno-Soviet war ended in March 1940 with minimal Finnish concessions.[79]
+
+ In June 1940, the Soviet Union forcibly annexed Estonia, Latvia and Lithuania,[75] and the disputed Romanian regions of Bessarabia, northern Bukovina and Hertza. Meanwhile, Nazi-Soviet political rapprochement and economic co-operation[80][81] gradually stalled,[82][83] and both states began preparations for war.[84]
+
+ In April 1940, Germany invaded Denmark and Norway to protect shipments of iron ore from Sweden, which the Allies were attempting to cut off.[85] Denmark capitulated after a few hours, and Norway was conquered within two months[86] despite Allied support. British discontent over the Norwegian campaign led to the appointment of Winston Churchill as Prime Minister on 10 May 1940.[87]
+
+ On the same day, Germany launched an offensive against France. To circumvent the strong Maginot Line fortifications on the Franco-German border, Germany directed its attack at the neutral nations of Belgium, the Netherlands, and Luxembourg.[88] The Germans carried out a flanking manoeuvre through the Ardennes region,[89] which was mistakenly perceived by Allies as an impenetrable natural barrier against armoured vehicles.[90][91] By successfully implementing new blitzkrieg tactics, the Wehrmacht rapidly advanced to the Channel and cut off the Allied forces in Belgium, trapping the bulk of the Allied armies in a cauldron on the Franco-Belgian border near Lille. The United Kingdom was able to evacuate a significant number of Allied troops from the continent by early June, although abandoning almost all their equipment.[92]
+
+ On 10 June, Italy invaded France, declaring war on both France and the United Kingdom.[93] The Germans turned south against the weakened French army, and Paris fell to them on 14 June. Eight days later France signed an armistice with Germany; it was divided into German and Italian occupation zones,[94] and an unoccupied rump state under the Vichy Regime, which, though officially neutral, was generally aligned with Germany. France kept its fleet, which the United Kingdom attacked on 3 July in an attempt to prevent its seizure by Germany.[95]
+
+ The Battle of Britain[96] began in early July with Luftwaffe attacks on shipping and harbours.[97] The United Kingdom rejected Hitler's ultimatum,[98] and the German air superiority campaign started in August but failed to defeat RAF Fighter Command, forcing the indefinite postponement of the proposed German invasion of Britain. The German strategic bombing offensive intensified with night attacks on London and other cities in the Blitz, but failed to significantly disrupt the British war effort[97] and largely ended in May 1941.[99]
+
+ Using newly captured French ports, the German Navy enjoyed success against an over-extended Royal Navy, using U-boats against British shipping in the Atlantic.[100] The British Home Fleet scored a significant victory on 27 May 1941 by sinking the German battleship Bismarck.[101]
+
+ In November 1939, the United States was taking measures to assist China and the Western Allies, and amended the Neutrality Act to allow "cash and carry" purchases by the Allies.[102] In 1940, following the German capture of Paris, the size of the United States Navy was significantly increased. In September the United States further agreed to a trade of American destroyers for British bases.[103] Still, a large majority of the American public continued to oppose any direct military intervention in the conflict well into 1941.[104] In December 1940 Roosevelt accused Hitler of planning world conquest and ruled out any negotiations as useless, calling for the United States to become an "arsenal of democracy" and promoting Lend-Lease programmes of aid to support the British war effort.[98] The United States started strategic planning to prepare for a full-scale offensive against Germany.[105]
+
+ At the end of September 1940, the Tripartite Pact formally united Japan, Italy, and Germany as the Axis Powers. The Tripartite Pact stipulated that any country, with the exception of the Soviet Union, that attacked any Axis Power would be forced to go to war against all three.[106] The Axis expanded in November 1940 when Hungary, Slovakia and Romania joined.[107] Romania and Hungary would make major contributions to the Axis war against the Soviet Union, in Romania's case partially to recapture territory ceded to the Soviet Union.[108]
+
+ In early June 1940 the Italian Regia Aeronautica attacked and besieged Malta, a British possession. In late summer through early autumn Italy conquered British Somaliland and made an incursion into British-held Egypt. In October Italy attacked Greece, but the attack was repulsed with heavy Italian casualties; the campaign ended within months with minor territorial changes.[109] Germany started preparation for an invasion of the Balkans to assist Italy, to prevent the British from gaining a foothold there, which would be a potential threat for Romanian oil fields, and to strike against the British dominance of the Mediterranean.[110]
+
+ In December 1940, British Empire forces began counter-offensives against Italian forces in Egypt and Italian East Africa.[111] The offensives were highly successful; by early February 1941 Italy had lost control of eastern Libya, and large numbers of Italian troops had been taken prisoner. The Italian Navy also suffered significant defeats, with the Royal Navy putting three Italian battleships out of commission by a carrier attack at Taranto and neutralising several more warships at the Battle of Cape Matapan.[112]
+
+ Italian defeats prompted Germany to deploy an expeditionary force to North Africa, and at the end of March 1941 Rommel's Afrika Korps launched an offensive which drove back the Commonwealth forces.[113] In under a month, Axis forces advanced to western Egypt and besieged the port of Tobruk.[114]
+
+ By late March 1941 Bulgaria and Yugoslavia signed the Tripartite Pact; however, the Yugoslav government was overthrown two days later by pro-British nationalists. Germany responded with simultaneous invasions of both Yugoslavia and Greece, commencing on 6 April 1941; both nations were forced to surrender within the month.[115] The airborne invasion of the Greek island of Crete at the end of May completed the German conquest of the Balkans.[116] Although the Axis victory was swift, bitter and large-scale partisan warfare subsequently broke out against the Axis occupation of Yugoslavia, which continued until the end of the war.[117]
+
+ In the Middle East, in May Commonwealth forces quashed an uprising in Iraq which had been supported by German aircraft from bases within Vichy-controlled Syria.[118] Between June and July they invaded and occupied the French possessions Syria and Lebanon, with the assistance of the Free French.[119]
+
+ With the situation in Europe and Asia relatively stable, Germany, Japan, and the Soviet Union made preparations. With the Soviets wary of mounting tensions with Germany and the Japanese planning to take advantage of the European War by seizing resource-rich European possessions in Southeast Asia, the two powers signed the Soviet–Japanese Neutrality Pact in April 1941.[120] By contrast, the Germans were steadily making preparations for an attack on the Soviet Union, massing forces on the Soviet border.[121]
+
+ Hitler believed that the United Kingdom's refusal to end the war was based on the hope that the United States and the Soviet Union would enter the war against Germany sooner or later.[122] He therefore decided to try to strengthen Germany's relations with the Soviets, or failing that, to attack and eliminate them as a factor. In November 1940, negotiations took place to determine if the Soviet Union would join the Tripartite Pact. The Soviets showed some interest but asked for concessions from Finland, Bulgaria, Turkey, and Japan that Germany considered unacceptable. On 18 December 1940, Hitler issued the directive to prepare for an invasion of the Soviet Union.[123]
+
+ On 22 June 1941, Germany, supported by Italy and Romania, invaded the Soviet Union in Operation Barbarossa, with Germany accusing the Soviets of plotting against them. They were joined shortly by Finland and Hungary.[124] The primary targets of this surprise offensive[125] were the Baltic region, Moscow and Ukraine, with the ultimate goal of ending the 1941 campaign near the Arkhangelsk-Astrakhan line, from the Caspian to the White Seas. Hitler's objectives were to eliminate the Soviet Union as a military power, exterminate Communism, generate Lebensraum ("living space")[126] by dispossessing the native population[127] and guarantee access to the strategic resources needed to defeat Germany's remaining rivals.[128]
+
+ Although the Red Army was preparing for strategic counter-offensives before the war,[129] Barbarossa forced the Soviet supreme command to adopt a strategic defence. During the summer, the Axis made significant gains into Soviet territory, inflicting immense losses in both personnel and materiel. By mid-August, however, the German Army High Command decided to suspend the offensive of a considerably depleted Army Group Centre, and to divert the 2nd Panzer Group to reinforce troops advancing towards central Ukraine and Leningrad.[130] The Kiev offensive was overwhelmingly successful, resulting in encirclement and elimination of four Soviet armies, and made possible further advance into Crimea and industrially developed Eastern Ukraine (the First Battle of Kharkov).[131]
+
+ The diversion of three quarters of the Axis troops and the majority of their air forces from France and the central Mediterranean to the Eastern Front[132] prompted the United Kingdom to reconsider its grand strategy.[133] In July, the UK and the Soviet Union formed a military alliance against Germany[134] and in August, the United Kingdom and the United States jointly issued the Atlantic Charter, which outlined British and American goals for the postwar world.[135] In late August the British and Soviets invaded neutral Iran to secure the Persian Corridor, Iran's oil fields, and preempt any Axis advances through Iran toward the Baku oil fields or British India.[136]
+
+ By October Axis operational objectives in Ukraine and the Baltic region were achieved, with only the sieges of Leningrad[137] and Sevastopol continuing.[138] A major offensive against Moscow was renewed; after two months of fierce battles in increasingly harsh weather, the German army almost reached the outer suburbs of Moscow, where the exhausted troops[139] were forced to suspend their offensive.[140] Large territorial gains were made by Axis forces, but their campaign had failed to achieve its main objectives: two key cities remained in Soviet hands, the Soviet capability to resist was not broken, and the Soviet Union retained a considerable part of its military potential. The blitzkrieg phase of the war in Europe had ended.[141]
+
+ By early December, freshly mobilised reserves[142] allowed the Soviets to achieve numerical parity with Axis troops.[143] This, as well as intelligence data which established that a minimal number of Soviet troops in the East would be sufficient to deter any attack by the Japanese Kwantung Army,[144] allowed the Soviets to begin a massive counter-offensive that started on 5 December all along the front and pushed German troops 100–250 kilometres (62–155 mi) west.[145]
+
+ Following the Japanese false flag Mukden Incident in 1931, the Japanese shelling of the American gunboat USS Panay in 1937, and the 1937–38 Nanjing Massacre, Japanese-American relations deteriorated. In 1939, the United States notified Japan that it would not be extending its trade treaty, and American public opinion opposing Japanese expansionism led to a series of economic sanctions, the Export Control Acts, which banned U.S. exports of chemicals, minerals and military parts to Japan and increased economic pressure on the Japanese regime.[98][146][147] During 1939 Japan launched its first attack against Changsha, a strategically important Chinese city, but was repulsed by late September.[148] Despite several offensives by both sides, the war between China and Japan was stalemated by 1940. To increase pressure on China by blocking supply routes, and to better position Japanese forces in the event of a war with the Western powers, Japan invaded and occupied northern Indochina in September 1940.[149]
+
+ Chinese nationalist forces launched a large-scale counter-offensive in early 1940. In August, Chinese communists launched an offensive in Central China; in retaliation, Japan instituted harsh measures in occupied areas to reduce human and material resources for the communists.[150] Continued antipathy between Chinese communist and nationalist forces culminated in armed clashes in January 1941, effectively ending their co-operation.[151] In March, the Japanese 11th Army attacked the headquarters of the Chinese 19th Army but was repulsed during the Battle of Shanggao.[152] In September, Japan attempted to take the city of Changsha again and clashed with Chinese nationalist forces.[153]
+
+ German successes in Europe encouraged Japan to increase pressure on European governments in Southeast Asia. The Dutch government agreed to provide Japan some oil supplies from the Dutch East Indies, but negotiations for additional access to their resources ended in failure in June 1941.[154] In July 1941 Japan sent troops to southern Indochina, thus threatening British and Dutch possessions in the Far East. The United States, United Kingdom, and other Western governments reacted to this move with a freeze on Japanese assets and a total oil embargo.[155][156] At the same time, Japan was planning an invasion of the Soviet Far East, intending to capitalise on the German invasion in the west, but abandoned the operation after the sanctions.[157]
+
+ Since early 1941 the United States and Japan had been engaged in negotiations in an attempt to improve their strained relations and end the war in China. During these negotiations, Japan advanced a number of proposals which were dismissed by the Americans as inadequate.[158] At the same time the United States, the United Kingdom, and the Netherlands engaged in secret discussions for the joint defence of their territories, in the event of a Japanese attack against any of them.[159] Roosevelt reinforced the Philippines (an American protectorate scheduled for independence in 1946) and warned Japan that the United States would react to Japanese attacks against any "neighboring countries".[159]
+
+ Frustrated at the lack of progress and feeling the pinch of the American–British–Dutch sanctions, Japan prepared for war. On 20 November, a new government under Hideki Tojo presented an interim proposal as its final offer. It called for the end of American aid to China and for lifting the embargo on the supply of oil and other resources to Japan. In exchange, Japan promised not to launch any attacks in Southeast Asia and to withdraw its forces from southern Indochina.[158] The American counter-proposal of 26 November required that Japan evacuate all of China without conditions and conclude non-aggression pacts with all Pacific powers.[160] That meant Japan was essentially forced to choose between abandoning its ambitions in China, or seizing the natural resources it needed in the Dutch East Indies by force;[161][162] the Japanese military did not consider the former an option, and many officers considered the oil embargo an unspoken declaration of war.[163]
+
+ Japan planned to rapidly seize European colonies in Asia to create a large defensive perimeter stretching into the Central Pacific. The Japanese would then be free to exploit the resources of Southeast Asia while exhausting the over-stretched Allies by fighting a defensive war.[164][165] To prevent American intervention while securing the perimeter, it was further planned to neutralise the United States Pacific Fleet and the American military presence in the Philippines from the outset.[166] On 7 December 1941 (8 December in Asian time zones), Japan attacked British and American holdings with near-simultaneous offensives against Southeast Asia and the Central Pacific.[167] These included an attack on the American fleets at Pearl Harbor and the Philippines, landings in Malaya and Thailand,[167] and the Battle of Hong Kong.[168]
+
+ The Japanese invasion of Thailand led to Thailand's decision to ally itself with Japan and the other Japanese attacks led the United States, United Kingdom, China, Australia, and several other states to formally declare war on Japan, whereas the Soviet Union, being heavily involved in large-scale hostilities with European Axis countries, maintained its neutrality agreement with Japan.[169] Germany, followed by the other Axis states, declared war on the United States[170] in solidarity with Japan, citing as justification the American attacks on German war vessels that had been ordered by Roosevelt.[124][171]
+
+ On 1 January 1942, the Allied Big Four[172]—the Soviet Union, China, the United Kingdom and the United States—and 22 smaller or exiled governments issued the Declaration by United Nations, thereby affirming the Atlantic Charter,[173] and agreeing not to sign a separate peace with the Axis powers.[174]
+
+ During 1942, Allied officials debated the appropriate grand strategy to pursue. All agreed that defeating Germany was the primary objective. The Americans favoured a straightforward, large-scale attack on Germany through France. The Soviets were also demanding a second front. The British, on the other hand, argued that military operations should target peripheral areas to wear out German strength, leading to increasing demoralisation, and bolster resistance forces. Germany itself would be subject to a heavy bombing campaign. An offensive against Germany would then be launched primarily by Allied armour without using large-scale armies.[175] Eventually, the British persuaded the Americans that a landing in France was infeasible in 1942 and they should instead focus on driving the Axis out of North Africa.[176]
+
+ At the Casablanca Conference in early 1943, the Allies reiterated the statements issued in the 1942 Declaration, and demanded the unconditional surrender of their enemies. The British and Americans agreed to continue to press the initiative in the Mediterranean by invading Sicily to fully secure the Mediterranean supply routes.[177] Although the British argued for further operations in the Balkans to bring Turkey into the war, in May 1943, the Americans extracted a British commitment to limit Allied operations in the Mediterranean to an invasion of the Italian mainland and to invade France in 1944.[178]
+
+ By the end of April 1942, Japan and its ally Thailand had almost fully conquered Burma, Malaya, the Dutch East Indies, Singapore, and Rabaul, inflicting severe losses on Allied troops and taking a large number of prisoners.[179] Despite stubborn resistance by Filipino and US forces, the Philippine Commonwealth was eventually captured in May 1942, forcing its government into exile.[180] On 16 April, in Burma, 7,000 British soldiers were encircled by the Japanese 33rd Division during the Battle of Yenangyaung and rescued by the Chinese 38th Division.[181] Japanese forces also achieved naval victories in the South China Sea, Java Sea and Indian Ocean,[182] and bombed the Allied naval base at Darwin, Australia. In January 1942, the only Allied success against Japan was a Chinese victory at Changsha.[183] These easy victories over unprepared US and European opponents left Japan overconfident, as well as overextended.[184]
+
+ In early May 1942, Japan initiated operations to capture Port Moresby by amphibious assault and thus sever communications and supply lines between the United States and Australia. The planned invasion was thwarted when an Allied task force, centred on two American fleet carriers, fought Japanese naval forces to a draw in the Battle of the Coral Sea.[185] Japan's next plan, motivated by the earlier Doolittle Raid, was to seize Midway Atoll and lure American carriers into battle to be eliminated; as a diversion, Japan would also send forces to occupy the Aleutian Islands in Alaska.[186] In mid-May, Japan started the Zhejiang-Jiangxi Campaign in China, with the goal of inflicting retribution on the Chinese who aided the surviving American airmen in the Doolittle Raid by destroying air bases and fighting against the Chinese 23rd and 32nd Army Groups.[187][188] In early June, Japan put its operations into action, but the Americans, having broken Japanese naval codes in late May, were fully aware of the plans and order of battle, and used this knowledge to achieve a decisive victory at Midway over the Imperial Japanese Navy.[189]
+
+ With its capacity for aggressive action greatly diminished as a result of the Midway battle, Japan chose to focus on a belated attempt to capture Port Moresby by an overland campaign in the Territory of Papua.[190] The Americans planned a counter-attack against Japanese positions in the southern Solomon Islands, primarily Guadalcanal, as a first step towards capturing Rabaul, the main Japanese base in Southeast Asia.[191]
140
+
141
+ Both plans started in July, but by mid-September, the Battle for Guadalcanal took priority for the Japanese, and troops in New Guinea were ordered to withdraw from the Port Moresby area to the northern part of the island, where they faced Australian and United States troops in the Battle of Buna-Gona.[192] Guadalcanal soon became a focal point for both sides, with heavy commitments of troops and ships. By the start of 1943, the Japanese were defeated on the island and withdrew their troops.[193] In Burma, Commonwealth forces mounted two operations. The first, an offensive into the Arakan region in late 1942, fared disastrously, forcing a retreat back to India by May 1943.[194] The second was the insertion of irregular forces behind Japanese front-lines in February, which, by the end of April, had achieved mixed results.[195]
142
+
143
+ Despite considerable losses, in early 1942 Germany and its allies stopped a major Soviet offensive in central and southern Russia, keeping most territorial gains they had achieved during the previous year.[196] In May the Germans defeated Soviet offensives in the Kerch Peninsula and at Kharkov,[197] and then launched their main summer offensive against southern Russia in June 1942, to seize the oil fields of the Caucasus and occupy the Kuban steppe, while maintaining positions on the northern and central areas of the front. The Germans split Army Group South into two groups: Army Group A advanced to the lower Don River and struck south-east to the Caucasus, while Army Group B headed towards the Volga River. The Soviets decided to make their stand at Stalingrad on the Volga.[198]
144
+
145
+ By mid-November, the Germans had nearly taken Stalingrad in bitter street fighting. The Soviets began their second winter counter-offensive, starting with an encirclement of German forces at Stalingrad,[199] and an assault on the Rzhev salient near Moscow, though the latter failed disastrously.[200] By early February 1943, the German Army had suffered tremendous losses; German troops at Stalingrad had been defeated,[201] and the front-line had been pushed back beyond its position before the summer offensive. In mid-February, after the Soviet push had tapered off, the Germans launched another attack on Kharkov, creating a salient in their front line around the Soviet city of Kursk.[202]
146
+
147
+ Exploiting poor American naval command decisions, the German navy ravaged Allied shipping off the American Atlantic coast.[203] By November 1941, Commonwealth forces had launched a counter-offensive, Operation Crusader, in North Africa, and reclaimed all the gains the Germans and Italians had made.[204] The Germans then launched an offensive in January 1942, pushing the British back to positions at the Gazala Line by early February,[205] followed by a temporary lull in combat which Germany used to prepare for its upcoming offensives.[206] Concerns that the Japanese might use bases in Vichy-held Madagascar caused the British to invade the island in early May 1942.[207] An Axis offensive in Libya forced an Allied retreat deep inside Egypt until Axis forces were stopped at El Alamein.[208] On the Continent, Allied commando raids on strategic targets, culminating in the disastrous Dieppe Raid,[209] demonstrated the Western Allies' inability to launch an invasion of continental Europe without much better preparation, equipment, and operational security.[210][page needed]
148
+
149
+ In August 1942, the Allies succeeded in repelling a second attack against El Alamein[211] and, at a high cost, managed to deliver desperately needed supplies to besieged Malta.[212] A few months later, the Allies commenced an offensive of their own in Egypt, dislodging the Axis forces and beginning a drive west across Libya.[213] This was followed shortly afterwards by Anglo-American landings in French North Africa, which resulted in the region joining the Allies.[214] Hitler responded to the French colony's defection by ordering the occupation of Vichy France;[214] although Vichy forces did not resist this violation of the armistice, they managed to scuttle their fleet to prevent its capture by German forces.[214][215] The Axis forces in Africa withdrew into Tunisia, which was conquered by the Allies in May 1943.[214][216]
150
+
151
+ In June 1943 the British and Americans began a strategic bombing campaign against Germany with the goals of disrupting the war economy, reducing morale, and "de-housing" the civilian population.[217] The firebombing of Hamburg was among the first attacks in this campaign, inflicting significant casualties and considerable losses on the infrastructure of this important industrial centre.[218]
152
+
153
+ After the Guadalcanal Campaign, the Allies initiated several operations against Japan in the Pacific. In May 1943, Canadian and US forces were sent to eliminate Japanese forces from the Aleutians.[219] Soon after, the United States, with support from Australia, New Zealand and Pacific Islander forces, began major ground, sea and air operations to isolate Rabaul by capturing surrounding islands, and breach the Japanese Central Pacific perimeter at the Gilbert and Marshall Islands.[220] By the end of March 1944, the Allies had completed both of these objectives and had also neutralised the major Japanese base at Truk in the Caroline Islands. In April, the Allies launched an operation to retake Western New Guinea.[221]
154
+
155
+ In the Soviet Union, both the Germans and the Soviets spent the spring and early summer of 1943 preparing for large offensives in central Russia. On 4 July 1943, Germany attacked Soviet forces around the Kursk Bulge. Within a week, German forces had exhausted themselves against the Soviets' deeply echeloned and well-constructed defences,[222] and for the first time in the war Hitler cancelled the operation before it had achieved tactical or operational success.[223] This decision was partially affected by the Western Allies' invasion of Sicily launched on 9 July, which, combined with previous Italian failures, resulted in the ousting and arrest of Mussolini later that month.[224]
156
+
157
+ On 12 July 1943, the Soviets launched their own counter-offensives, thereby dispelling any chance of German victory or even stalemate in the east. The Soviet victory at Kursk marked the end of German superiority,[225] giving the Soviet Union the initiative on the Eastern Front.[226][227] The Germans tried to stabilise their eastern front along the hastily fortified Panther–Wotan line, but the Soviets broke through it at Smolensk and by the Lower Dnieper Offensives.[228]
158
+
159
+ On 3 September 1943, the Western Allies invaded the Italian mainland, following Italy's armistice with the Allies.[229] Germany, with the help of Italian fascists, responded by disarming Italian forces, which in many places had been left without orders from above, seizing military control of Italian areas,[230] and creating a series of defensive lines.[231] German special forces then rescued Mussolini, who soon established a new client state in German-occupied Italy named the Italian Social Republic,[232] precipitating an Italian civil war. The Western Allies fought through several lines until reaching the main German defensive line in mid-November.[233]
160
+
161
+ German operations in the Atlantic also suffered. By May 1943, as Allied counter-measures became increasingly effective, the resulting sizeable German submarine losses forced a temporary halt of the German Atlantic naval campaign.[234] In November 1943, Franklin D. Roosevelt and Winston Churchill met with Chiang Kai-shek in Cairo and then with Joseph Stalin in Tehran.[235] The former conference determined the post-war return of Japanese territory[236] and the military planning for the Burma Campaign,[237] while the latter included agreement that the Western Allies would invade Europe in 1944 and that the Soviet Union would declare war on Japan within three months of Germany's defeat.[238]
162
+
163
+ From November 1943, during the seven-week Battle of Changde, the Chinese forced Japan to fight a costly war of attrition, while awaiting Allied relief.[239][240][241] In January 1944, the Allies launched a series of attacks in Italy against the line at Monte Cassino and tried to outflank it with landings at Anzio.[242]
164
+
165
+ On 27 January 1944, Soviet troops launched a major offensive that expelled German forces from the Leningrad region, thereby ending the most lethal siege in history.[243] The following Soviet offensive was halted on the pre-war Estonian border by the German Army Group North, aided by Estonians hoping to re-establish national independence. This delay slowed subsequent Soviet operations in the Baltic Sea region.[244] By late May 1944, the Soviets had liberated Crimea, largely expelled Axis forces from Ukraine, and made incursions into Romania, which were repulsed by Axis troops.[245] The Allied offensives in Italy had succeeded and, at the cost of allowing several German divisions to retreat, Rome was captured on 4 June.[246]
166
+
167
+ The Allies had mixed success in mainland Asia. In March 1944, the Japanese launched the first of two invasions, an operation against British positions in Assam, India,[247] and soon besieged Commonwealth positions at Imphal and Kohima.[248] In May 1944, British forces mounted a counter-offensive that drove Japanese troops back to Burma by July,[248] and Chinese forces that had invaded northern Burma in late 1943 besieged Japanese troops in Myitkyina.[249] The second Japanese invasion of China aimed to destroy China's main fighting forces, secure railways between Japanese-held territory and capture Allied airfields.[250] By June, the Japanese had conquered the province of Henan and begun a new attack on Changsha in Hunan province.[251]
168
+
169
+ On 6 June 1944 (known as D-Day), after three years of Soviet pressure,[252] the Western Allies invaded northern France. After reassigning several Allied divisions from Italy, they also attacked southern France.[253] These landings were successful and led to the defeat of the German Army units in France. Paris was liberated on 25 August by the local resistance assisted by the Free French Forces, both led by General Charles de Gaulle,[254] and the Western Allies continued to push back German forces in western Europe during the latter part of the year. An attempt to advance into northern Germany, spearheaded by a major airborne operation in the Netherlands, failed.[255] After that, the Western Allies slowly pushed into Germany, but failed to cross the Rur river in a large offensive. In Italy, the Allied advance also slowed, owing to the last major German defensive line.[256]
170
+
171
+ On 22 June, the Soviets launched a strategic offensive in Belarus ("Operation Bagration") that destroyed the German Army Group Centre almost completely.[257] Soon after that, another Soviet strategic offensive forced German troops from Western Ukraine and Eastern Poland. The Soviets formed the Polish Committee of National Liberation to control territory in Poland and combat the Polish Armia Krajowa; the Soviet Red Army remained in the Praga district on the other side of the Vistula and watched passively as the Germans quelled the Warsaw Uprising initiated by the Armia Krajowa.[258] The national uprising in Slovakia was also suppressed by the Germans.[259] The Soviet Red Army's strategic offensive in eastern Romania cut off and destroyed the considerable German troops there and triggered a successful coup d'état in Romania and in Bulgaria, followed by those countries' shift to the Allied side.[260]
172
+
173
+ In September 1944, Soviet troops advanced into Yugoslavia and forced the rapid withdrawal of German Army Groups E and F from Greece, Albania and Yugoslavia to save them from being cut off.[261] By this point, the Communist-led Partisans under Marshal Josip Broz Tito, who had led an increasingly successful guerrilla campaign against the occupation since 1941, controlled much of the territory of Yugoslavia and engaged in delaying efforts against German forces further south. In northern Serbia, the Soviet Red Army, with limited support from Bulgarian forces, assisted the Partisans in a joint liberation of the capital city of Belgrade on 20 October. A few days later, the Soviets launched a massive assault against German-occupied Hungary that lasted until the fall of Budapest in February 1945.[262] In contrast to the impressive Soviet victories in the Balkans, bitter Finnish resistance to the Soviet offensive on the Karelian Isthmus denied the Soviets the occupation of Finland and led to a Soviet-Finnish armistice on relatively mild terms,[263] although Finland was then forced to fight its former ally Germany.[264][broken footnote]
174
+
175
+ By the start of July 1944, Commonwealth forces in Southeast Asia had repelled the Japanese sieges in Assam, pushing the Japanese back to the Chindwin River[265] while the Chinese captured Myitkyina. In September 1944, Chinese forces captured Mount Song and reopened the Burma Road.[266] In China, the Japanese had more successes, having finally captured Changsha in mid-June and the city of Hengyang by early August.[267] Soon after, they invaded the province of Guangxi, winning major engagements against Chinese forces at Guilin and Liuzhou by the end of November[268] and successfully linking up their forces in China and Indochina by mid-December.[269]
176
+
177
+ In the Pacific, US forces continued to press back the Japanese perimeter. In mid-June 1944, they began their offensive against the Mariana and Palau islands, and decisively defeated Japanese forces in the Battle of the Philippine Sea. These defeats led to the resignation of the Japanese Prime Minister, Hideki Tojo, and provided the United States with air bases to launch intensive heavy bomber attacks on the Japanese home islands. In late October, American forces invaded the Filipino island of Leyte; soon after, Allied naval forces scored another large victory in the Battle of Leyte Gulf, one of the largest naval battles in history.[270]
178
+
179
+ On 16 December 1944, Germany made a last attempt on the Western Front, using most of its remaining reserves to launch a massive counter-offensive in the Ardennes and along the Franco-German border, aiming to split the Western Allies, encircle large portions of Western Allied troops, and capture their primary supply port at Antwerp to prompt a political settlement.[271] By January, the offensive had been repulsed with no strategic objectives fulfilled.[271] In Italy, the Western Allies remained stalemated at the German defensive line. In mid-January 1945, the Soviets and Poles attacked in Poland, pushing from the Vistula to the Oder river in Germany, and overran East Prussia.[272] On 4 February Soviet, British, and US leaders met for the Yalta Conference. They agreed on the occupation of post-war Germany, and on when the Soviet Union would join the war against Japan.[273]
180
+
181
+ In February, the Soviets entered Silesia and Pomerania, while the Western Allies entered western Germany and closed on the Rhine river. By March, the Western Allies had crossed the Rhine north and south of the Ruhr, encircling the German Army Group B.[274] In early March, in an attempt to protect its last oil reserves in Hungary and to retake Budapest, Germany launched its last major offensive against Soviet troops near Lake Balaton. Within two weeks the offensive had been repulsed, and the Soviets advanced to Vienna and captured the city. In early April, Soviet troops captured Königsberg, while the Western Allies finally pushed forward in Italy and swept across western Germany, capturing Hamburg and Nuremberg. American and Soviet forces met at the Elbe river on 25 April, leaving several unoccupied pockets in southern Germany and around Berlin.
182
+
183
+ Soviet and Polish forces stormed and captured Berlin in late April. In Italy, German forces surrendered on 29 April. On 30 April, the Reichstag was captured, signalling the military defeat of Nazi Germany;[275] the Berlin garrison surrendered on 2 May.
184
+
185
+ Several changes in leadership occurred during this period. On 12 April, President Roosevelt died and was succeeded by Harry S. Truman. Benito Mussolini was killed by Italian partisans on 28 April.[276] Two days later, Hitler committed suicide in besieged Berlin, and he was succeeded by Grand Admiral Karl Dönitz.[277]
186
+ The total and unconditional surrender in Europe was signed on 7 and 8 May, to be effective by the end of 8 May.[278] German Army Group Centre resisted in Prague until 11 May.[279]
187
+
188
+ In the Pacific theatre, American forces, accompanied by forces of the Philippine Commonwealth, advanced in the Philippines, clearing Leyte by the end of April 1945. They landed on Luzon in January 1945 and recaptured Manila in March. Fighting continued on Luzon, Mindanao, and other islands of the Philippines until the end of the war.[280] Meanwhile, the United States Army Air Forces launched a massive firebombing campaign against strategic cities in Japan in an effort to destroy Japanese war industry and civilian morale. A devastating bombing raid on Tokyo on 9–10 March was the deadliest conventional bombing raid in history.[281]
189
+
190
+ In May 1945, Australian troops landed in Borneo, overrunning the oilfields there. British, American, and Chinese forces defeated the Japanese in northern Burma in March, and the British pushed on to reach Rangoon by 3 May.[282] Chinese forces launched a counter-attack in the Battle of West Hunan, fought between 6 April and 7 June 1945. American naval and amphibious forces also moved towards Japan, taking Iwo Jima by March, and Okinawa by the end of June.[283] At the same time, American submarines cut off Japanese imports, drastically reducing Japan's ability to supply its overseas forces.[284]
191
+
192
+ On 11 July, Allied leaders met in Potsdam, Germany. They confirmed earlier agreements about Germany,[285] and the American, British and Chinese governments reiterated the demand for unconditional surrender of Japan, specifically stating that "the alternative for Japan is prompt and utter destruction".[286] During this conference, the United Kingdom held its general election, and Clement Attlee replaced Churchill as Prime Minister.[287]
193
+
194
+ The call for unconditional surrender was rejected by the Japanese government, which believed it would be capable of negotiating for more favourable surrender terms.[288] In early August, the United States dropped atomic bombs on the Japanese cities of Hiroshima and Nagasaki. Between the two bombings, the Soviets, pursuant to the Yalta agreement, invaded Japanese-held Manchuria and quickly defeated the Kwantung Army, which was the largest Japanese fighting force.[289] These two events persuaded previously adamant Imperial Army leaders to accept surrender terms.[290] The Red Army also captured the southern part of Sakhalin Island and the Kuril Islands. On 15 August 1945, Japan surrendered, with the surrender documents finally signed at Tokyo Bay on the deck of the American battleship USS Missouri on 2 September 1945, ending the war.[291]
195
+
196
+ The Allies established occupation administrations in Austria and Germany. The former became a neutral state, non-aligned with any political bloc. The latter was divided into western and eastern occupation zones controlled by the Western Allies and the Soviet Union. A denazification programme in Germany led to the prosecution of Nazi war criminals in the Nuremberg trials and the removal of ex-Nazis from power, although this policy later moved towards the amnesty and re-integration of ex-Nazis into West German society.[292]
197
+
198
+ Germany lost a quarter of its pre-war (1937) territory. Among the eastern territories, Silesia, Neumark and most of Pomerania were taken over by Poland,[293] and East Prussia was divided between Poland and the Soviet Union, followed by the expulsion to Germany of the nine million Germans from these provinces,[294][295] as well as three million Germans from the Sudetenland in Czechoslovakia. By the 1950s, one-fifth of West Germans were refugees from the east. The Soviet Union also took over the Polish provinces east of the Curzon line,[296] from which 2 million Poles were expelled;[295][297] north-east Romania,[298][299] parts of eastern Finland,[300] and the three Baltic states were incorporated into the Soviet Union.[301][302]
199
+
200
+ In an effort to maintain world peace,[303] the Allies formed the United Nations, which officially came into existence on 24 October 1945,[304] and adopted the Universal Declaration of Human Rights in 1948 as a common standard for all member nations.[305] The great powers that were the victors of the war—France, China, the United Kingdom, the Soviet Union and the United States—became the permanent members of the UN's Security Council.[306] The five permanent members remain so to the present, although there have been two seat changes, between the Republic of China and the People's Republic of China in 1971, and between the Soviet Union and its successor state, the Russian Federation, following the dissolution of the Soviet Union in 1991. The alliance between the Western Allies and the Soviet Union had begun to deteriorate even before the war was over.[307]
201
+
202
+ Germany had been de facto divided, and two independent states, the Federal Republic of Germany (West Germany) and the German Democratic Republic (East Germany),[308] were created within the borders of Allied and Soviet occupation zones. The rest of Europe was also divided into Western and Soviet spheres of influence.[309] Most eastern and central European countries fell into the Soviet sphere, which led to the establishment of Communist-led regimes, with full or partial support of the Soviet occupation authorities. As a result, East Germany,[310] Poland, Hungary, Romania, Czechoslovakia, and Albania[311] became Soviet satellite states. Communist Yugoslavia conducted a fully independent policy, causing tension with the Soviet Union.[312]
203
+
204
+ The post-war division of the world was formalised by two international military alliances, the United States-led NATO and the Soviet-led Warsaw Pact.[313] The long period of political tensions and military competition between them, the Cold War, would be accompanied by an unprecedented arms race and proxy wars.[314]
205
+
206
+ In Asia, the United States led the occupation of Japan and administered Japan's former islands in the Western Pacific, while the Soviets annexed South Sakhalin and the Kuril Islands.[315] Korea, formerly under Japanese rule, was divided and occupied by the Soviet Union in the North and the United States in the South between 1945 and 1948. Separate republics emerged on both sides of the 38th parallel in 1948, each claiming to be the legitimate government for all of Korea, which led ultimately to the Korean War.[316]
207
+
208
+ In China, nationalist and communist forces resumed the civil war in June 1946. Communist forces were victorious and established the People's Republic of China on the mainland, while nationalist forces retreated to Taiwan in 1949.[317] In the Middle East, the Arab rejection of the United Nations Partition Plan for Palestine and the creation of Israel marked the escalation of the Arab–Israeli conflict. While European powers attempted to retain some or all of their colonial empires, their losses of prestige and resources during the war rendered this unsuccessful, leading to decolonisation.[318][319]
209
+
210
+ The global economy suffered heavily from the war, although participating nations were affected differently. The United States emerged much richer than any other nation, leading to a baby boom, and by 1950 its gross domestic product per person was much higher than that of any of the other powers, and it dominated the world economy.[320] The UK and US pursued a policy of industrial disarmament in Western Germany in the years 1945–1948.[321] Because of international trade interdependencies, this led to European economic stagnation and delayed European recovery for several years.[322][323]
211
+
212
+ Recovery began with the mid-1948 currency reform in Western Germany, and was sped up by the liberalisation of European economic policy that the Marshall Plan (1948–1951) both directly and indirectly caused.[324][325] The post-1948 West German recovery has been called the German economic miracle.[326] Italy also experienced an economic boom[327] and the French economy rebounded.[328] By contrast, the United Kingdom was in a state of economic ruin,[329] and although it received a quarter of the total Marshall Plan assistance, more than any other European country,[330] it continued in relative economic decline for decades.[331]
213
+
214
+ The Soviet Union, despite enormous human and material losses, also experienced a rapid increase in production in the immediate post-war era.[332] Japan recovered much later.[333] China returned to its pre-war industrial production by 1952.[334]
215
+
216
+ Estimates for the total number of casualties in the war vary, because many deaths went unrecorded.[335] Most suggest that some 60 million people died in the war, including about 20 million military personnel and 40 million civilians.[336][337][338]
217
+ Many of the civilians died because of deliberate genocide, massacres, mass bombings, disease, and starvation.
218
+
219
+ The Soviet Union alone lost around 27 million people during the war,[339] including 8.7 million military and 19 million civilian deaths.[340] A quarter of the people in the Soviet Union were wounded or killed.[341] Germany sustained 5.3 million military losses, mostly on the Eastern Front and during the final battles in Germany.[342]
220
+
221
+ An estimated 11[343] to 17 million[344] civilians died as a direct or as an indirect result of Nazi racist policies, including the mass killing of around 6 million Jews, along with Roma, homosexuals, at least 1.9 million ethnic Poles[345][346] and millions of other Slavs (including Russians, Ukrainians and Belarusians), and other ethnic and minority groups.[347][344] Between 1941 and 1945, more than 200,000 ethnic Serbs, along with Roma and Jews, were persecuted and murdered by the Axis-aligned Croatian Ustaše in Yugoslavia.[348] Also, more than 100,000 Poles were massacred by the Ukrainian Insurgent Army in the Volhynia massacres between 1943 and 1945.[349] At the same time, about 10,000–15,000 Ukrainians were killed by the Polish Home Army and other Polish units in reprisal attacks.[350]
222
+
223
+ In Asia and the Pacific, between 3 million and more than 10 million civilians, mostly Chinese (estimated at 7.5 million[351]), were killed by the Japanese occupation forces.[352] The most infamous Japanese atrocity was the Nanking Massacre, in which fifty to three hundred thousand Chinese civilians were raped and murdered.[353] Mitsuyoshi Himeta reported that 2.7 million casualties occurred during the Sankō Sakusen. General Yasuji Okamura implemented the policy in Hebei and Shandong.[354]
224
+
225
+ Axis forces employed biological and chemical weapons. The Imperial Japanese Army used a variety of such weapons during its invasion and occupation of China (see Unit 731)[355][356] and in early conflicts against the Soviets.[357] Both the Germans and the Japanese tested such weapons against civilians,[358] and sometimes on prisoners of war.[359]
226
+
227
+ The Soviet Union was responsible for the Katyn massacre of 22,000 Polish officers,[360] and the imprisonment or execution of thousands of political prisoners by the NKVD, along with mass civilian deportations to Siberia, in the Baltic states and eastern Poland annexed by the Red Army.[361]
228
+
229
+ The mass bombing of cities in Europe and Asia has often been called a war crime, although no positive or specific customary international humanitarian law with respect to aerial warfare existed before or during World War II.[362] The USAAF firebombed a total of 67 Japanese cities, killing 393,000 civilians and destroying 65% of built-up areas.[363]
230
+
231
+ Nazi Germany was responsible for the Holocaust (which killed approximately 6 million Jews) as well as for killing 2.7 million ethnic Poles[364] and 4 million others who were deemed "unworthy of life" (including the disabled and mentally ill, Soviet prisoners of war, Romani, homosexuals, Freemasons, and Jehovah's Witnesses) as part of a programme of deliberate extermination, in effect becoming a "genocidal state".[365] Soviet POWs were kept in especially unbearable conditions, and 3.6 million of the 5.7 million Soviet POWs died in Nazi camps during the war.[366][367] In addition to concentration camps, death camps were created in Nazi Germany to exterminate people on an industrial scale. Nazi Germany extensively used forced labourers; about 12 million Europeans from German-occupied countries were abducted and used as a slave work force in German industry, agriculture and war economy.[368]
232
+
233
+ The Soviet Gulag became a de facto system of deadly camps during 1942–43, when wartime privation and hunger caused numerous deaths of inmates,[369] including foreign citizens of Poland and other countries occupied in 1939–40 by the Soviet Union, as well as Axis POWs.[370] By the end of the war, most Soviet POWs liberated from Nazi camps and many repatriated civilians were detained in special filtration camps where they were subjected to NKVD evaluation, and 226,127 were sent to the Gulag as real or perceived Nazi collaborators.[371]
234
+
235
+ Japanese prisoner-of-war camps, many of which were used as labour camps, also had high death rates. The International Military Tribunal for the Far East found the death rate of Western prisoners was 27 per cent (for American POWs, 37 per cent),[372] seven times that of POWs under the Germans and Italians.[373] While 37,583 prisoners from the UK, 28,500 from the Netherlands, and 14,473 from the United States were released after the surrender of Japan, the number of Chinese released was only 56.[374]
236
+
237
+ At least five million Chinese civilians from northern China and Manchukuo were enslaved between 1935 and 1941 by the East Asia Development Board, or Kōain, for work in mines and war industries. After 1942, the number reached 10 million.[375] In Java, between 4 and 10 million rōmusha (Japanese: "manual labourers") were forced to work by the Japanese military. About 270,000 of these Javanese labourers were sent to other Japanese-held areas in South East Asia, and only 52,000 were repatriated to Java.[376]
238
+
239
+ In Europe, occupation took two forms. In Western, Northern, and Central Europe (France, Norway, Denmark, the Low Countries, and the annexed portions of Czechoslovakia) Germany established economic policies through which it collected roughly 69.5 billion reichsmarks (27.8 billion US dollars) by the end of the war; this figure does not include the sizeable plunder of industrial products, military equipment, raw materials and other goods.[377] Thus, the income from occupied nations amounted to over 40 per cent of the income Germany collected from taxation, a share which rose to nearly 40 per cent of total German income as the war went on.[378]
240
+
241
+ In the East, the intended gains of Lebensraum were never attained as fluctuating front-lines and Soviet scorched earth policies denied resources to the German invaders.[379] Unlike in the West, the Nazi racial policy encouraged extreme brutality against what it considered to be the "inferior people" of Slavic descent; most German advances were thus followed by mass executions.[380] Although resistance groups formed in most occupied territories, they did not significantly hamper German operations in either the East[381] or the West[382] until late 1943.
242
+
243
+ In Asia, Japan termed nations under its occupation as being part of the Greater East Asia Co-Prosperity Sphere, essentially a Japanese hegemony which it claimed was for purposes of liberating colonised peoples.[383] Although Japanese forces were sometimes welcomed as liberators from European domination, Japanese war crimes frequently turned local public opinion against them.[384] During Japan's initial conquest it captured 4,000,000 barrels (640,000 m3) of oil (~5.5×10^5 tonnes) left behind by retreating Allied forces, and by 1943 was able to get production in the Dutch East Indies up to 50 million barrels (~6.8×10^6 t), 76 per cent of its 1940 output rate.[384]
244
+
245
+ In Europe, before the outbreak of the war, the Allies had significant advantages in both population and economics. In 1938, the Western Allies (United Kingdom, France, Poland and the British Dominions) had a 30 per cent larger population and a 30 per cent higher gross domestic product than the European Axis powers (Germany and Italy); if colonies are included, the Allies had more than a 5:1 advantage in population and a nearly 2:1 advantage in GDP.[385] In Asia at the same time, China had roughly six times the population of Japan but only an 89 per cent higher GDP; this is reduced to three times the population and only a 38 per cent higher GDP if Japanese colonies are included.[385]
246
+
247
+ The United States produced about two-thirds of all the munitions used by the Allies in WWII, including warships, transports, warplanes, artillery, tanks, trucks, and ammunition.[386]
248
+ Though the Allies' economic and population advantages were largely mitigated during the initial rapid blitzkrieg attacks by Germany and Japan, they became the decisive factor by 1942, after the United States and Soviet Union joined the Allies, as the war largely settled into one of attrition.[387] While the Allies' ability to out-produce the Axis is often attributed[by whom?] to the Allies having more access to natural resources, other factors, such as Germany's and Japan's reluctance to employ women in the labour force,[388] Allied strategic bombing,[389] and Germany's late shift to a war economy[390] contributed significantly. Additionally, neither Germany nor Japan planned to fight a protracted war, and neither had equipped itself to do so.[391] To improve their production, Germany and Japan used millions of slave labourers;[392] Germany used about 12 million people, mostly from Eastern Europe,[368] while Japan used more than 18 million people in Far East Asia.[375][376]
249
+
250
+ Aircraft were used for reconnaissance, as fighters, bombers, and ground-support, and each role was advanced considerably. Innovations included airlift (the capability to quickly move limited high-priority supplies, equipment, and personnel)[393] and strategic bombing (the bombing of enemy industrial and population centres to destroy the enemy's ability to wage war).[394] Anti-aircraft weaponry also advanced, including defences such as radar and surface-to-air artillery. The use of the jet aircraft was pioneered and, though its late introduction meant it had little impact, it led to jets becoming standard in air forces worldwide.[395] Although guided missiles were being developed, they were not advanced enough to reliably target aircraft until some years after the war.
251
+
252
+ Advances were made in nearly every aspect of naval warfare, most notably with aircraft carriers and submarines. Although aeronautical warfare had relatively little success at the start of the war, actions at Taranto, Pearl Harbor, and the Coral Sea established the carrier as the dominant capital ship in place of the battleship.[396][397][398] In the Atlantic, escort carriers proved to be a vital part of Allied convoys, increasing the effective protection radius and helping to close the Mid-Atlantic gap.[399] Carriers were also more economical than battleships because of the relatively low cost of aircraft[400] and because they did not need to be as heavily armoured.[401] Submarines, which had proved to be an effective weapon during the First World War,[402] were anticipated by all sides to be important in the second. The British focused development on anti-submarine weaponry and tactics, such as sonar and convoys, while Germany focused on improving its offensive capability, with designs such as the Type VII submarine and wolfpack tactics.[403][better source needed] Gradually, improving Allied technologies such as the Leigh light, hedgehog, squid, and homing torpedoes proved victorious over the German submarines.[citation needed]
253
+
254
+ Land warfare changed from the static front lines of trench warfare of World War I, which had relied on improved artillery that outmatched the speed of both infantry and cavalry, to increased mobility and combined arms. The tank, which had been used predominantly for infantry support in the First World War, had evolved into the primary weapon.[404] In the late 1930s, tank design was considerably more advanced than it had been during World War I,[405] and advances continued throughout the war with increases in speed, armour and firepower.[citation needed] At the start of the war, most commanders thought enemy tanks should be met by tanks with superior specifications.[406] This idea was challenged by the poor performance of the relatively light early tank guns against armour, and by German doctrine of avoiding tank-versus-tank combat. This, along with Germany's use of combined arms, was among the key elements of their highly successful blitzkrieg tactics across Poland and France.[404] Many means of destroying tanks, including indirect artillery, anti-tank guns (both towed and self-propelled), mines, short-ranged infantry antitank weapons, and other tanks were used.[406] Even with large-scale mechanisation, infantry remained the backbone of all forces,[407] and throughout the war, most infantry were equipped similarly to World War I.[408] Portable machine guns spread, a notable example being the German MG34, as did various submachine guns, which were suited to close combat in urban and jungle settings.[408] The assault rifle, a late-war development incorporating many features of the rifle and submachine gun, became the standard post-war infantry weapon for most armed forces.[409]
255
+
256
+ Most major belligerents attempted to solve the problems of complexity and security involved in using large codebooks for cryptography by designing ciphering machines, the best known being the German Enigma machine.[410] The development of SIGINT (signals intelligence) and cryptanalysis made it possible to counter these machines through decryption. Notable examples were the Allied decryption of Japanese naval codes[411] and British Ultra, a pioneering method for decoding Enigma that benefited from information given to the United Kingdom by the Polish Cipher Bureau, which had been decoding early versions of Enigma before the war.[412] Another aspect of military intelligence was the use of deception, which the Allies used to great effect, such as in operations Mincemeat and Bodyguard.[411][413]
257
+
258
+ Other technological and engineering feats achieved during, or as a result of, the war include the world's first programmable computers (Z3, Colossus, and ENIAC), guided missiles and modern rockets, the Manhattan Project's development of nuclear weapons, operations research and the development of artificial harbours and oil pipelines under the English Channel.[citation needed] Penicillin was first mass-produced and used during the war (see Stabilization and mass production of penicillin).[414]
259
+
en/5323.html.txt ADDED
@@ -0,0 +1,259 @@
1
+
2
+
3
+
4
+
13
+ World War II (WWII or WW2), also known as the Second World War, was a global war that lasted from 1939 to 1945. It involved the vast majority of the world's countries—including all the great powers—forming two opposing military alliances: the Allies and the Axis. In a state of total war, directly involving more than 100 million people from more than 30 countries, the major participants threw their entire economic, industrial, and scientific capabilities behind the war effort, blurring the distinction between civilian and military resources. World War II was the deadliest conflict in human history, marked by 70 to 85 million fatalities. Tens of millions of people died due to genocides (including the Holocaust), premeditated death from starvation, massacres, and disease. Aircraft played a major role in the conflict, including in the strategic bombing of population centres and in the only uses of nuclear weapons in war.
14
+
15
+ World War II is generally considered to have begun on 1 September 1939, with the invasion of Poland by Germany and subsequent declarations of war on Germany by France and the United Kingdom. From late 1939 to early 1941, in a series of campaigns and treaties, Germany conquered or controlled much of continental Europe, and formed the Axis alliance with Italy and Japan. Under the Molotov–Ribbentrop Pact of August 1939, Germany and the Soviet Union partitioned and annexed territories of their European neighbours: Poland, Finland, Romania and the Baltic states. Following the onset of campaigns in North Africa and East Africa, and the Fall of France in mid-1940, the war continued primarily between the European Axis powers and the British Empire, with war in the Balkans, the aerial Battle of Britain, the Blitz, and the Battle of the Atlantic. On 22 June 1941, Germany led the European Axis powers in an invasion of the Soviet Union, opening the largest land theatre of war in history and trapping the Axis, crucially the German Wehrmacht, in a war of attrition.
16
+
17
+ Japan, which aimed to dominate Asia and the Pacific, was at war with the Republic of China by 1937. In December 1941, Japan launched a surprise attack on the United States as well as European colonies in East Asia and the Pacific. Following an immediate US declaration of war against Japan, supported by one from the UK, the European Axis powers declared war on the United States in solidarity with their ally. Japan soon captured much of the Western Pacific, but its advances were halted in 1942 after Japan lost the critical Battle of Midway; later, Germany and Italy were defeated in North Africa and at Stalingrad in the Soviet Union. Key setbacks in 1943—which included a series of German defeats on the Eastern Front, the Allied invasions of Sicily and Italy, and Allied offensives in the Pacific—cost the Axis its initiative and forced it into strategic retreat on all fronts. In 1944, the Western Allies invaded German-occupied France, while the Soviet Union regained its territorial losses and turned towards Germany and its allies. During 1944 and 1945, the Japanese suffered reversals in mainland Asia, while the Allies crippled the Japanese Navy and captured key Western Pacific islands.
18
+
19
+ The war in Europe concluded with an invasion of Germany by the Western Allies and the Soviet Union, culminating in the capture of Berlin by Soviet troops, the suicide of Adolf Hitler and the German unconditional surrender on 8 May 1945. Following the Potsdam Declaration by the Allies on 26 July 1945 and the refusal of Japan to surrender on its terms, the United States dropped atomic bombs on the Japanese cities of Hiroshima and Nagasaki on 6 and 9 August, respectively. Faced with an imminent invasion of the Japanese archipelago, the possibility of additional atomic bombings, and the Soviet entry into the war against Japan and its invasion of Manchuria on 9 August, Japan announced its intention to surrender on 15 August 1945, cementing total victory in Asia for the Allies. In the wake of the war, Germany and Japan were occupied and war crimes tribunals were conducted against German and Japanese leaders.
20
+
21
+ World War II changed the political alignment and social structure of the globe. The United Nations (UN) was established to foster international co-operation and prevent future conflicts, and the victorious great powers—China, France, the Soviet Union, the United Kingdom, and the United States—became the permanent members of its Security Council. The Soviet Union and the United States emerged as rival superpowers, setting the stage for the nearly half-century-long Cold War. In the wake of European devastation, the influence of its great powers waned, triggering the decolonisation of Africa and Asia. Most countries whose industries had been damaged moved towards economic recovery and expansion. Political integration, especially in Europe, began as an effort to forestall future hostilities, end pre-war enmities and forge a sense of common identity.
22
+
23
+ The start of the war in Europe is generally held to be 1 September 1939,[1][2] beginning with the German invasion of Poland; the United Kingdom and France declared war on Germany two days later. The dates for the beginning of war in the Pacific include the start of the Second Sino-Japanese War on 7 July 1937,[3][4] or even the Japanese invasion of Manchuria on 19 September 1931.[5][6][7]
24
+
25
+ Others follow the British historian A. J. P. Taylor, who held that the Sino-Japanese War and war in Europe and its colonies occurred simultaneously, and the two wars merged in 1941. This article uses the conventional dating. Other starting dates sometimes used for World War II include the Italian invasion of Abyssinia on 3 October 1935.[8] The British historian Antony Beevor views the beginning of World War II as the Battles of Khalkhin Gol fought between Japan and the forces of Mongolia and the Soviet Union from May to September 1939.[9]
26
+
27
+ The exact date of the war's end is also not universally agreed upon. It was generally accepted at the time that the war ended with the armistice of 14 August 1945 (V-J Day), rather than with the formal surrender of Japan on 2 September 1945, which officially ended the war in Asia. A peace treaty with Japan was signed in 1951.[10] A treaty regarding Germany's future allowed the reunification of East and West Germany to take place in 1990 and resolved most post-World War II issues.[11] No formal peace treaty between Japan and the Soviet Union was ever signed.[12]
28
+
29
+ World War I had radically altered the political European map, with the defeat of the Central Powers—including Austria-Hungary, Germany, Bulgaria and the Ottoman Empire—and the 1917 Bolshevik seizure of power in Russia, which eventually led to the founding of the Soviet Union. Meanwhile, the victorious Allies of World War I, such as France, Belgium, Italy, Romania, and Greece, gained territory, and new nation-states were created out of the collapse of Austria-Hungary and the Ottoman and Russian Empires.
30
+
31
+ To prevent a future world war, the League of Nations was created during the 1919 Paris Peace Conference. The organisation's primary goals were to prevent armed conflict through collective security, military and naval disarmament, and settling international disputes through peaceful negotiations and arbitration.
32
+
33
+ Despite strong pacifist sentiment after World War I,[13] its aftermath still caused irredentist and revanchist nationalism in several European states. These sentiments were especially marked in Germany because of the significant territorial, colonial, and financial losses incurred under the Treaty of Versailles. Under the treaty, Germany lost around 13 per cent of its home territory and all its overseas possessions, while German annexation of other states was prohibited, reparations were imposed, and limits were placed on the size and capability of the country's armed forces.[14]
34
+
35
+ The German Empire was dissolved in the German Revolution of 1918–1919, and a democratic government, later known as the Weimar Republic, was created. The interwar period saw strife between supporters of the new republic and hardline opponents on both the right and left. Italy, as an Entente ally, had made some post-war territorial gains; however, Italian nationalists were angered that the promises made by the United Kingdom and France to secure Italian entrance into the war were not fulfilled in the peace settlement. From 1922 to 1925, the Fascist movement led by Benito Mussolini seized power in Italy with a nationalist, totalitarian, and class collaborationist agenda that abolished representative democracy, repressed socialist, left-wing and liberal forces, and pursued an aggressive expansionist foreign policy aimed at making Italy a world power, promising the creation of a "New Roman Empire".[15]
36
+
37
+ Adolf Hitler, after an unsuccessful attempt to overthrow the German government in 1923, eventually became the Chancellor of Germany in 1933. He abolished democracy, espousing a radical, racially motivated revision of the world order, and soon began a massive rearmament campaign.[16] Meanwhile, France, to secure its alliance, allowed Italy a free hand in Ethiopia, which Italy desired as a colonial possession. The situation was aggravated in early 1935 when the Territory of the Saar Basin was legally reunited with Germany and Hitler repudiated the Treaty of Versailles, accelerated his rearmament programme, and introduced conscription.[17]
38
+
39
+ The United Kingdom, France and Italy formed the Stresa Front in April 1935 in order to contain Germany, a key step towards military globalisation; however, that June, the United Kingdom made an independent naval agreement with Germany, easing prior restrictions. The Soviet Union, concerned by Germany's goals of capturing vast areas of Eastern Europe, drafted a treaty of mutual assistance with France. Before taking effect, though, the Franco-Soviet pact was required to go through the bureaucracy of the League of Nations, which rendered it essentially toothless.[18] The United States, concerned with events in Europe and Asia, passed the Neutrality Act in August of the same year.[19]
40
+
41
+ Hitler defied the Versailles and Locarno treaties by remilitarising the Rhineland in March 1936, encountering little opposition due to appeasement.[20] In October 1936, Germany and Italy formed the Rome–Berlin Axis. A month later, Germany and Japan signed the Anti-Comintern Pact, which Italy joined the following year.[21]
42
+
43
+ The Kuomintang (KMT) party in China launched a unification campaign against regional warlords and nominally unified China in the mid-1920s, but was soon embroiled in a civil war against its former Chinese Communist Party allies[22] and new regional warlords. In 1931, an increasingly militaristic Empire of Japan, which had long sought influence in China[23] as the first step of what its government saw as the country's right to rule Asia, staged the Mukden Incident as a pretext to invade Manchuria and establish the puppet state of Manchukuo.[24]
44
+
45
+ China appealed to the League of Nations to stop the Japanese invasion of Manchuria. Japan withdrew from the League of Nations after being condemned for its incursion into Manchuria. The two nations then fought several battles, in Shanghai, Rehe and Hebei, until the Tanggu Truce was signed in 1933. Thereafter, Chinese volunteer forces continued the resistance to Japanese aggression in Manchuria, and Chahar and Suiyuan.[25] After the 1936 Xi'an Incident, the Kuomintang and communist forces agreed on a ceasefire to present a united front to oppose Japan.[26]
46
+
47
+ The Second Italo–Ethiopian War was a brief colonial war that began in October 1935 and ended in May 1936. The war began with the invasion of the Ethiopian Empire (also known as Abyssinia) by the armed forces of the Kingdom of Italy (Regno d'Italia), which was launched from Italian Somaliland and Eritrea.[27] The war resulted in the military occupation of Ethiopia and its annexation into the newly created colony of Italian East Africa (Africa Orientale Italiana, or AOI); in addition, it exposed the weakness of the League of Nations as a force to preserve peace. Both Italy and Ethiopia were member nations, but the League did little when the former clearly violated Article X of the League's Covenant.[28] The United Kingdom and France supported imposing sanctions on Italy for the invasion, but the sanctions were not fully enforced and failed to end the Italian invasion.[29] Italy subsequently dropped its objections to Germany's goal of absorbing Austria.[30]
48
+
49
+ When civil war broke out in Spain, Hitler and Mussolini lent military support to the Nationalist rebels, led by General Francisco Franco. Italy supported the Nationalists to a greater extent than the Nazis did: altogether Mussolini sent to Spain more than 70,000 ground troops and 6,000 aviation personnel, as well as about 720 aircraft.[31] The Soviet Union supported the existing government, the Spanish Republic. More than 30,000 foreign volunteers, known as the International Brigades, also fought against the Nationalists. Both Germany and the Soviet Union used this proxy war as an opportunity to test in combat their most advanced weapons and tactics. The Nationalists won the civil war in April 1939; Franco, now dictator, remained officially neutral during World War II but generally favoured the Axis.[32] His greatest collaboration with Germany was the sending of volunteers to fight on the Eastern Front.[33]
50
+
51
+ In July 1937, Japan captured the former Chinese imperial capital of Peking after instigating the Marco Polo Bridge Incident, which culminated in the Japanese campaign to invade all of China.[34] The Soviets quickly signed a non-aggression pact with China to lend materiel support, effectively ending China's prior co-operation with Germany. From September to November, the Japanese attacked Taiyuan, engaged the Kuomintang Army around Xinkou,[35] and fought Communist forces in Pingxingguan.[36][37] Generalissimo Chiang Kai-shek deployed his best army to defend Shanghai, but, after three months of fighting, Shanghai fell. The Japanese continued to push the Chinese forces back, capturing the capital Nanking in December 1937. After the fall of Nanking, tens of thousands if not hundreds of thousands of Chinese civilians and disarmed combatants were murdered by the Japanese.[38][39]
52
+
53
+ In March 1938, Nationalist Chinese forces won their first major victory at Taierzhuang, but the city of Xuzhou was then taken by the Japanese in May.[40] In June 1938, Chinese forces stalled the Japanese advance by flooding the Yellow River; this manoeuvre bought time for the Chinese to prepare their defences at Wuhan, but the city was taken by October.[41] Japanese military victories did not bring about the collapse of Chinese resistance that Japan had hoped to achieve; instead, the Chinese government relocated inland to Chongqing and continued the war.[42][43]
54
+
55
+ In the mid-to-late 1930s, Japanese forces in Manchukuo had sporadic border clashes with the Soviet Union and Mongolia. The Japanese doctrine of Hokushin-ron, which emphasised Japan's expansion northward, was favoured by the Imperial Army during this time. With the Japanese defeat at Khalkhin Gol in 1939, the ongoing Second Sino-Japanese War[44] and its ally Nazi Germany pursuing neutrality with the Soviets, this policy would prove difficult to maintain. Japan and the Soviet Union eventually signed a Neutrality Pact in April 1941, and Japan adopted the doctrine of Nanshin-ron, promoted by the Navy, which took its focus southward, eventually leading to its war with the United States and the Western Allies.[45][46]
56
+
57
+ In Europe, Germany and Italy were becoming more aggressive. In March 1938, Germany annexed Austria, again provoking little response from other European powers.[47] Encouraged, Hitler began pressing German claims on the Sudetenland, an area of Czechoslovakia with a predominantly ethnic German population. Soon the United Kingdom and France followed the appeasement policy of British Prime Minister Neville Chamberlain and conceded this territory to Germany in the Munich Agreement, which was made against the wishes of the Czechoslovak government, in exchange for a promise of no further territorial demands.[48] Soon afterwards, Germany and Italy forced Czechoslovakia to cede additional territory to Hungary, and Poland annexed Czechoslovakia's Zaolzie region.[49]
58
+
59
+ Although all of Germany's stated demands had been satisfied by the agreement, privately Hitler was furious that British interference had prevented him from seizing all of Czechoslovakia in one operation. In subsequent speeches Hitler attacked British and Jewish "war-mongers" and in January 1939 secretly ordered a major build-up of the German navy to challenge British naval supremacy. In March 1939, Germany invaded the remainder of Czechoslovakia and subsequently split it into the German Protectorate of Bohemia and Moravia and a pro-German client state, the Slovak Republic.[50] Hitler also delivered an ultimatum to Lithuania on 20 March 1939, forcing the concession of the Klaipėda Region, formerly the German Memelland.[51]
+
+ Greatly alarmed and with Hitler making further demands on the Free City of Danzig, the United Kingdom and France guaranteed their support for Polish independence; when Italy conquered Albania in April 1939, the same guarantee was extended to Romania and Greece.[52] Shortly after the Franco-British pledge to Poland, Germany and Italy formalised their own alliance with the Pact of Steel.[53] Hitler accused the United Kingdom and Poland of trying to "encircle" Germany and renounced the Anglo-German Naval Agreement and the German–Polish Non-Aggression Pact.[54]
+
+ The situation reached a general crisis in late August as German troops continued to mobilise against the Polish border. On 23 August, when tripartite negotiations about a military alliance between France, the United Kingdom and the Soviet Union stalled,[55] the Soviet Union signed a non-aggression pact with Germany.[56] This pact had a secret protocol that defined German and Soviet "spheres of influence" (western Poland and Lithuania for Germany; eastern Poland, Finland, Estonia, Latvia and Bessarabia for the Soviet Union), and raised the question of continuing Polish independence.[57] The pact neutralised the possibility of Soviet opposition to a campaign against Poland and assured that Germany would not have to face the prospect of a two-front war, as it had in World War I. Immediately after that, Hitler ordered the attack to proceed on 26 August, but upon hearing that the United Kingdom had concluded a formal mutual assistance pact with Poland, and that Italy would maintain neutrality, he decided to delay it.[58]
+
+ In response to British requests for direct negotiations to avoid war, Germany made demands on Poland, which only served as a pretext to worsen relations.[59] On 29 August, Hitler demanded that a Polish plenipotentiary immediately travel to Berlin to negotiate the handover of Danzig, and to allow a plebiscite in the Polish Corridor in which the German minority would vote on secession.[59] The Poles refused to comply with the German demands, and on the night of 30–31 August in a stormy meeting with the British ambassador Neville Henderson, Ribbentrop declared that Germany considered its claims rejected.[60]
+
+ On 1 September 1939, Germany invaded Poland after having staged several false flag border incidents as a pretext to initiate the invasion.[61] The first German attack of the war came against the Polish defences at Westerplatte.[62] The United Kingdom responded with an ultimatum to Germany to cease military operations, and on 3 September, after the ultimatum was ignored, France and Britain declared war on Germany, followed by Australia, New Zealand, South Africa and Canada. The alliance provided no direct military support to Poland, outside of a cautious French probe into the Saarland.[63] The Western Allies also began a naval blockade of Germany, which aimed to damage the country's economy and its war effort.[64] Germany responded by ordering U-boat warfare against Allied merchant and warships, which would later escalate into the Battle of the Atlantic.[65]
+
+ On 8 September, German troops reached the suburbs of Warsaw. The Polish counter-offensive to the west halted the German advance for several days, but the attacking force was outflanked and encircled by the Wehrmacht. Remnants of the Polish army broke through to besieged Warsaw. On 17 September 1939, after signing a cease-fire with Japan, the Soviets invaded Eastern Poland[66] under the pretext that the Polish state had ostensibly ceased to exist.[67] On 27 September, the Warsaw garrison surrendered to the Germans, and the last large operational unit of the Polish Army surrendered on 6 October. Despite the military defeat, Poland never surrendered; instead it formed the Polish government-in-exile and a clandestine state apparatus remained in occupied Poland.[68] A significant part of Polish military personnel evacuated to Romania and the Baltic countries; many of them would fight against the Axis in other theatres of the war.[69]
+
+ Germany annexed western Poland and occupied the central part of the country, while the Soviet Union annexed the eastern part; small shares of Polish territory were transferred to Lithuania and Slovakia. On 6 October, Hitler made a public peace overture to the United Kingdom and France but said that the future of Poland was to be determined exclusively by Germany and the Soviet Union. The proposal was rejected,[60] and Hitler ordered an immediate offensive against France,[70] which would be postponed until the spring of 1940 due to bad weather.[71][72][73]
+
+ The Soviet Union forced the Baltic countries—Estonia, Latvia and Lithuania, the states that were in the Soviet "sphere of influence" under the Molotov-Ribbentrop pact—to sign "mutual assistance pacts" that stipulated stationing Soviet troops in these countries. Soon after, significant Soviet military contingents were moved there.[74][75][76] Finland refused to sign a similar pact and rejected ceding part of its territory to the Soviet Union. The Soviet Union invaded Finland in November 1939,[77] and was consequently expelled from the League of Nations.[78] Despite overwhelming numerical superiority, Soviet military success was modest, and the Finno-Soviet war ended in March 1940 with minimal Finnish concessions.[79]
+
+ In June 1940, the Soviet Union forcibly annexed Estonia, Latvia and Lithuania,[75] and the disputed Romanian regions of Bessarabia, northern Bukovina and Hertza. Meanwhile, Nazi-Soviet political rapprochement and economic co-operation[80][81] gradually stalled,[82][83] and both states began preparations for war.[84]
+
+ In April 1940, Germany invaded Denmark and Norway to protect shipments of iron ore from Sweden, which the Allies were attempting to cut off.[85] Denmark capitulated after a few hours, and Norway was conquered within two months[86] despite Allied support. British discontent over the Norwegian campaign led to the appointment of Winston Churchill as Prime Minister on 10 May 1940.[87]
+
+ On the same day, Germany launched an offensive against France. To circumvent the strong Maginot Line fortifications on the Franco-German border, Germany directed its attack at the neutral nations of Belgium, the Netherlands, and Luxembourg.[88] The Germans carried out a flanking manoeuvre through the Ardennes region,[89] which was mistakenly perceived by the Allies as an impenetrable natural barrier against armoured vehicles.[90][91] By successfully implementing new blitzkrieg tactics, the Wehrmacht rapidly advanced to the Channel and cut off the Allied forces in Belgium, trapping the bulk of the Allied armies in a cauldron on the Franco-Belgian border near Lille. The United Kingdom was able to evacuate a significant number of Allied troops from the continent by early June, although the troops abandoned almost all of their equipment.[92]
+
+ On 10 June, Italy invaded France, declaring war on both France and the United Kingdom.[93] The Germans turned south against the weakened French army, and Paris fell to them on 14 June. Eight days later France signed an armistice with Germany; it was divided into German and Italian occupation zones,[94] and an unoccupied rump state under the Vichy Regime, which, though officially neutral, was generally aligned with Germany. France kept its fleet, which the United Kingdom attacked on 3 July in an attempt to prevent its seizure by Germany.[95]
+
+ The Battle of Britain[96] began in early July with Luftwaffe attacks on shipping and harbours.[97] The United Kingdom rejected Hitler's ultimatum,[98] and the German air superiority campaign started in August but failed to defeat RAF Fighter Command, forcing the indefinite postponement of the proposed German invasion of Britain. The German strategic bombing offensive intensified with night attacks on London and other cities in the Blitz, but failed to significantly disrupt the British war effort[97] and largely ended in May 1941.[99]
+
+ Using newly captured French ports, the German Navy enjoyed success against an over-extended Royal Navy, using U-boats against British shipping in the Atlantic.[100] The British Home Fleet scored a significant victory on 27 May 1941 by sinking the German battleship Bismarck.[101]
+
+ In November 1939, the United States was taking measures to assist China and the Western Allies, and amended the Neutrality Act to allow "cash and carry" purchases by the Allies.[102] In 1940, following the German capture of Paris, the size of the United States Navy was significantly increased. In September the United States further agreed to a trade of American destroyers for British bases.[103] Still, a large majority of the American public continued to oppose any direct military intervention in the conflict well into 1941.[104] In December 1940 Roosevelt accused Hitler of planning world conquest and ruled out any negotiations as useless, calling for the United States to become an "arsenal of democracy" and promoting Lend-Lease programmes of aid to support the British war effort.[98] The United States started strategic planning to prepare for a full-scale offensive against Germany.[105]
+
+ At the end of September 1940, the Tripartite Pact formally united Japan, Italy, and Germany as the Axis Powers. The Tripartite Pact stipulated that any country, with the exception of the Soviet Union, that attacked any Axis power would be forced to go to war against all three.[106] The Axis expanded in November 1940 when Hungary, Slovakia and Romania joined.[107] Romania and Hungary would make major contributions to the Axis war against the Soviet Union, in Romania's case partially to recapture territory ceded to the Soviet Union.[108]
+
+ In early June 1940 the Italian Regia Aeronautica attacked and besieged Malta, a British possession. In late summer through early autumn Italy conquered British Somaliland and made an incursion into British-held Egypt. In October Italy attacked Greece, but the attack was repulsed with heavy Italian casualties; the campaign ended within months with minor territorial changes.[109] Germany started preparation for an invasion of the Balkans to assist Italy, to prevent the British from gaining a foothold there, which would be a potential threat for Romanian oil fields, and to strike against the British dominance of the Mediterranean.[110]
+
+ In December 1940, British Empire forces began counter-offensives against Italian forces in Egypt and Italian East Africa.[111] The offensives were highly successful; by early February 1941 Italy had lost control of eastern Libya, and large numbers of Italian troops had been taken prisoner. The Italian Navy also suffered significant defeats, with the Royal Navy putting three Italian battleships out of commission by a carrier attack at Taranto and neutralising several more warships at the Battle of Cape Matapan.[112]
+
+ Italian defeats prompted Germany to deploy an expeditionary force to North Africa, and at the end of March 1941 Rommel's Afrika Korps launched an offensive which drove back the Commonwealth forces.[113] In under a month, Axis forces advanced to western Egypt and besieged the port of Tobruk.[114]
+
+ By late March 1941 Bulgaria and Yugoslavia had signed the Tripartite Pact; however, the Yugoslav government was overthrown two days later by pro-British nationalists. Germany responded with simultaneous invasions of both Yugoslavia and Greece, commencing on 6 April 1941; both nations were forced to surrender within the month.[115] The airborne invasion of the Greek island of Crete at the end of May completed the German conquest of the Balkans.[116] Although the Axis victory was swift, bitter and large-scale partisan warfare subsequently broke out against the Axis occupation of Yugoslavia, which continued until the end of the war.[117]
+
+ In the Middle East, in May Commonwealth forces quashed an uprising in Iraq which had been supported by German aircraft from bases within Vichy-controlled Syria.[118] Between June and July they invaded and occupied the French possessions Syria and Lebanon, with the assistance of the Free French.[119]
+
+ With the situation in Europe and Asia relatively stable, Germany, Japan, and the Soviet Union made preparations. With the Soviets wary of mounting tensions with Germany and the Japanese planning to take advantage of the European War by seizing resource-rich European possessions in Southeast Asia, the two powers signed the Soviet–Japanese Neutrality Pact in April 1941.[120] By contrast, the Germans were steadily making preparations for an attack on the Soviet Union, massing forces on the Soviet border.[121]
+
+ Hitler believed that the United Kingdom's refusal to end the war was based on the hope that the United States and the Soviet Union would enter the war against Germany sooner or later.[122] He therefore decided to try to strengthen Germany's relations with the Soviets or, failing that, to attack and eliminate them as a factor. In November 1940, negotiations took place to determine if the Soviet Union would join the Tripartite Pact. The Soviets showed some interest but asked for concessions from Finland, Bulgaria, Turkey, and Japan that Germany considered unacceptable. On 18 December 1940, Hitler issued the directive to prepare for an invasion of the Soviet Union.[123]
+
+ On 22 June 1941, Germany, supported by Italy and Romania, invaded the Soviet Union in Operation Barbarossa, with Germany accusing the Soviets of plotting against them. They were joined shortly afterwards by Finland and Hungary.[124] The primary targets of this surprise offensive[125] were the Baltic region, Moscow and Ukraine, with the ultimate goal of ending the 1941 campaign near the Arkhangelsk-Astrakhan line, from the Caspian Sea to the White Sea. Hitler's objectives were to eliminate the Soviet Union as a military power, exterminate Communism, generate Lebensraum ("living space")[126] by dispossessing the native population[127] and guarantee access to the strategic resources needed to defeat Germany's remaining rivals.[128]
+
+ Although the Red Army was preparing for strategic counter-offensives before the war,[129] Barbarossa forced the Soviet supreme command to adopt a strategic defence. During the summer, the Axis made significant gains into Soviet territory, inflicting immense losses in both personnel and materiel. By mid-August, however, the German Army High Command decided to suspend the offensive of a considerably depleted Army Group Centre, and to divert the 2nd Panzer Group to reinforce troops advancing towards central Ukraine and Leningrad.[130] The Kiev offensive was overwhelmingly successful, resulting in the encirclement and elimination of four Soviet armies, and made possible a further advance into Crimea and industrially developed Eastern Ukraine (the First Battle of Kharkov).[131]
+
+ The diversion of three quarters of the Axis troops and the majority of their air forces from France and the central Mediterranean to the Eastern Front[132] prompted the United Kingdom to reconsider its grand strategy.[133] In July, the UK and the Soviet Union formed a military alliance against Germany[134] and in August, the United Kingdom and the United States jointly issued the Atlantic Charter, which outlined British and American goals for the postwar world.[135] In late August the British and Soviets invaded neutral Iran to secure the Persian Corridor, Iran's oil fields, and pre-empt any Axis advances through Iran toward the Baku oil fields or British India.[136]
+
+ By October Axis operational objectives in Ukraine and the Baltic region were achieved, with only the sieges of Leningrad[137] and Sevastopol continuing.[138] A major offensive against Moscow was renewed; after two months of fierce battles in increasingly harsh weather, the German army almost reached the outer suburbs of Moscow, where the exhausted troops[139] were forced to suspend their offensive.[140] Large territorial gains were made by Axis forces, but their campaign had failed to achieve its main objectives: two key cities remained in Soviet hands, the Soviet capability to resist was not broken, and the Soviet Union retained a considerable part of its military potential. The blitzkrieg phase of the war in Europe had ended.[141]
+
+ By early December, freshly mobilised reserves[142] allowed the Soviets to achieve numerical parity with Axis troops.[143] This, as well as intelligence data which established that a minimal number of Soviet troops in the East would be sufficient to deter any attack by the Japanese Kwantung Army,[144] allowed the Soviets to begin a massive counter-offensive that started on 5 December all along the front and pushed German troops 100–250 kilometres (62–155 mi) west.[145]
+
+ Following the Japanese false flag Mukden Incident in 1931, the Japanese shelling of the American gunboat USS Panay in 1937, and the 1937-38 Nanjing Massacre, Japanese-American relations deteriorated. In 1939, the United States notified Japan that it would not be extending its trade treaty, and American public opinion opposing Japanese expansionism led to a series of economic sanctions, the Export Control Acts, which banned U.S. exports of chemicals, minerals and military parts to Japan and increased economic pressure on the Japanese regime.[98][146][147] During 1939 Japan launched its first attack against Changsha, a strategically important Chinese city, but was repulsed by late September.[148] Despite several offensives by both sides, the war between China and Japan was stalemated by 1940. To increase pressure on China by blocking supply routes, and to better position Japanese forces in the event of a war with the Western powers, Japan invaded and occupied northern Indochina in September 1940.[149]
+
+ Chinese nationalist forces launched a large-scale counter-offensive in early 1940. In August, Chinese communists launched an offensive in Central China; in retaliation, Japan instituted harsh measures in occupied areas to reduce human and material resources for the communists.[150] Continued antipathy between Chinese communist and nationalist forces culminated in armed clashes in January 1941, effectively ending their co-operation.[151] In March, the Japanese 11th Army attacked the headquarters of the Chinese 19th Army but was repulsed during the Battle of Shanggao.[152] In September, Japan attempted to take the city of Changsha again and clashed with Chinese nationalist forces.[153]
+
+ German successes in Europe encouraged Japan to increase pressure on European governments in Southeast Asia. The Dutch government agreed to provide Japan with some oil supplies from the Dutch East Indies, but negotiations for additional access to their resources ended in failure in June 1941.[154] In July 1941 Japan sent troops to southern Indochina, thus threatening British and Dutch possessions in the Far East. The United States, United Kingdom, and other Western governments reacted to this move with a freeze on Japanese assets and a total oil embargo.[155][156] At the same time, Japan was planning an invasion of the Soviet Far East, intending to capitalise on the German invasion in the west, but abandoned the operation after the sanctions.[157]
+
+ Since early 1941 the United States and Japan had been engaged in negotiations in an attempt to improve their strained relations and end the war in China. During these negotiations, Japan advanced a number of proposals which were dismissed by the Americans as inadequate.[158] At the same time the United States, the United Kingdom, and the Netherlands engaged in secret discussions for the joint defence of their territories, in the event of a Japanese attack against any of them.[159] Roosevelt reinforced the Philippines (an American protectorate scheduled for independence in 1946) and warned Japan that the United States would react to Japanese attacks against any "neighboring countries".[159]
+
+ Frustrated at the lack of progress and feeling the pinch of the American–British–Dutch sanctions, Japan prepared for war. On 20 November, a new government under Hideki Tojo presented an interim proposal as its final offer. It called for the end of American aid to China and for lifting the embargo on the supply of oil and other resources to Japan. In exchange, Japan promised not to launch any attacks in Southeast Asia and to withdraw its forces from southern Indochina.[158] The American counter-proposal of 26 November required that Japan evacuate all of China without conditions and conclude non-aggression pacts with all Pacific powers.[160] That meant Japan was essentially forced to choose between abandoning its ambitions in China, or seizing the natural resources it needed in the Dutch East Indies by force;[161][162] the Japanese military did not consider the former an option, and many officers considered the oil embargo an unspoken declaration of war.[163]
+
+ Japan planned to rapidly seize European colonies in Asia to create a large defensive perimeter stretching into the Central Pacific. The Japanese would then be free to exploit the resources of Southeast Asia while exhausting the over-stretched Allies by fighting a defensive war.[164][165] To prevent American intervention while securing the perimeter, it was further planned to neutralise the United States Pacific Fleet and the American military presence in the Philippines from the outset.[166] On 7 December 1941 (8 December in Asian time zones), Japan attacked British and American holdings with near-simultaneous offensives against Southeast Asia and the Central Pacific.[167] These included an attack on the American fleets at Pearl Harbor and the Philippines, landings in Malaya[167] and Thailand, and the Battle of Hong Kong.[168]
+
+ The Japanese invasion of Thailand led to Thailand's decision to ally itself with Japan, and the other Japanese attacks led the United States, United Kingdom, China, Australia, and several other states to formally declare war on Japan, whereas the Soviet Union, being heavily involved in large-scale hostilities with European Axis countries, maintained its neutrality agreement with Japan.[169] Germany, followed by the other Axis states, declared war on the United States[170] in solidarity with Japan, citing as justification the American attacks on German war vessels that had been ordered by Roosevelt.[124][171]
+
+ On 1 January 1942, the Allied Big Four[172]—the Soviet Union, China, the United Kingdom and the United States—and 22 smaller or exiled governments issued the Declaration by United Nations, thereby affirming the Atlantic Charter,[173] and agreeing not to sign a separate peace with the Axis powers.[174]
+
+ During 1942, Allied officials debated the appropriate grand strategy to pursue. All agreed that defeating Germany was the primary objective. The Americans favoured a straightforward, large-scale attack on Germany through France. The Soviets were also demanding a second front. The British, on the other hand, argued that military operations should target peripheral areas to wear out German strength, leading to increasing demoralisation, and bolster resistance forces. Germany itself would be subject to a heavy bombing campaign. An offensive against Germany would then be launched primarily by Allied armour without using large-scale armies.[175] Eventually, the British persuaded the Americans that a landing in France was infeasible in 1942 and they should instead focus on driving the Axis out of North Africa.[176]
+
+ At the Casablanca Conference in early 1943, the Allies reiterated the statements issued in the 1942 Declaration, and demanded the unconditional surrender of their enemies. The British and Americans agreed to continue to press the initiative in the Mediterranean by invading Sicily to fully secure the Mediterranean supply routes.[177] Although the British argued for further operations in the Balkans to bring Turkey into the war, in May 1943, the Americans extracted a British commitment to limit Allied operations in the Mediterranean to an invasion of the Italian mainland and to invade France in 1944.[178]
+
+ By the end of April 1942, Japan and its ally Thailand had almost fully conquered Burma, Malaya, the Dutch East Indies, Singapore, and Rabaul, inflicting severe losses on Allied troops and taking a large number of prisoners.[179] Despite stubborn resistance by Filipino and US forces, the Philippine Commonwealth was eventually captured in May 1942, forcing its government into exile.[180] On 16 April, in Burma, 7,000 British soldiers were encircled by the Japanese 33rd Division during the Battle of Yenangyaung and rescued by the Chinese 38th Division.[181] Japanese forces also achieved naval victories in the South China Sea, Java Sea and Indian Ocean,[182] and bombed the Allied naval base at Darwin, Australia. In January 1942, the only Allied success against Japan was a Chinese victory at Changsha.[183] These easy victories over unprepared US and European opponents left Japan overconfident, as well as overextended.[184]
+
+ In early May 1942, Japan initiated operations to capture Port Moresby by amphibious assault and thus sever communications and supply lines between the United States and Australia. The planned invasion was thwarted when an Allied task force, centred on two American fleet carriers, fought Japanese naval forces to a draw in the Battle of the Coral Sea.[185] Japan's next plan, motivated by the earlier Doolittle Raid, was to seize Midway Atoll and lure American carriers into battle to be eliminated; as a diversion, Japan would also send forces to occupy the Aleutian Islands in Alaska.[186] In mid-May, Japan started the Zhejiang-Jiangxi Campaign in China, with the goal of inflicting retribution on the Chinese who aided the surviving American airmen in the Doolittle Raid by destroying air bases and fighting against the Chinese 23rd and 32nd Army Groups.[187][188] In early June, Japan put its operations into action, but the Americans, having broken Japanese naval codes in late May, were fully aware of the plans and order of battle, and used this knowledge to achieve a decisive victory at Midway over the Imperial Japanese Navy.[189]
+
+ With its capacity for aggressive action greatly diminished as a result of the Midway battle, Japan chose to focus on a belated attempt to capture Port Moresby by an overland campaign in the Territory of Papua.[190] The Americans planned a counter-attack against Japanese positions in the southern Solomon Islands, primarily Guadalcanal, as a first step towards capturing Rabaul, the main Japanese base in Southeast Asia.[191]
+
+ Both plans started in July, but by mid-September, the Battle for Guadalcanal took priority for the Japanese, and troops in New Guinea were ordered to withdraw from the Port Moresby area to the northern part of the island, where they faced Australian and United States troops in the Battle of Buna-Gona.[192] Guadalcanal soon became a focal point for both sides, with heavy commitments of troops and ships. By the start of 1943, the Japanese were defeated on the island and withdrew their troops.[193] In Burma, Commonwealth forces mounted two operations. The first, an offensive into the Arakan region in late 1942, went disastrously, forcing a retreat back to India by May 1943.[194] The second was the insertion of irregular forces behind Japanese front-lines in February which, by the end of April, had achieved mixed results.[195]
+
+ Despite considerable losses, in early 1942 Germany and its allies stopped a major Soviet offensive in central and southern Russia, keeping most territorial gains they had achieved during the previous year.[196] In May the Germans defeated Soviet offensives in the Kerch Peninsula and at Kharkov,[197] and then launched their main summer offensive against southern Russia in June 1942, to seize the oil fields of the Caucasus and occupy Kuban steppe, while maintaining positions on the northern and central areas of the front. The Germans split Army Group South into two groups: Army Group A advanced to the lower Don River and struck south-east to the Caucasus, while Army Group B headed towards the Volga River. The Soviets decided to make their stand at Stalingrad on the Volga.[198]
+
+ By mid-November, the Germans had nearly taken Stalingrad in bitter street fighting. The Soviets began their second winter counter-offensive, starting with an encirclement of German forces at Stalingrad,[199] and an assault on the Rzhev salient near Moscow, though the latter failed disastrously.[200] By early February 1943, the German Army had taken tremendous losses; German troops at Stalingrad had been defeated,[201] and the front-line had been pushed back beyond its position before the summer offensive. In mid-February, after the Soviet push had tapered off, the Germans launched another attack on Kharkov, creating a salient in their front line around the Soviet city of Kursk.[202]
+
+ Exploiting poor American naval command decisions, the German navy ravaged Allied shipping off the American Atlantic coast.[203] By November 1941, Commonwealth forces had launched a counter-offensive, Operation Crusader, in North Africa, and reclaimed all the gains the Germans and Italians had made.[204] In North Africa, the Germans launched an offensive in January, pushing the British back to positions at the Gazala Line by early February,[205] followed by a temporary lull in combat which Germany used to prepare for its upcoming offensives.[206] Concerns the Japanese might use bases in Vichy-held Madagascar caused the British to invade the island in early May 1942.[207] An Axis offensive in Libya forced an Allied retreat deep inside Egypt until Axis forces were stopped at El Alamein.[208] On the Continent, raids by Allied commandos on strategic targets, culminating in the disastrous Dieppe Raid,[209] demonstrated the Western Allies' inability to launch an invasion of continental Europe without much better preparation, equipment, and operational security.[210]
+
+ In August 1942, the Allies succeeded in repelling a second attack against El Alamein[211] and, at a high cost, managed to deliver desperately needed supplies to the besieged Malta.[212] A few months later, the Allies commenced an attack of their own in Egypt, dislodging the Axis forces and beginning a drive west across Libya.[213] This attack was followed up shortly after by Anglo-American landings in French North Africa, which resulted in the region joining the Allies.[214] Hitler responded to the French colony's defection by ordering the occupation of Vichy France;[214] although Vichy forces did not resist this violation of the armistice, they managed to scuttle their fleet to prevent its capture by German forces.[214][215] The Axis forces in Africa withdrew into Tunisia, which was conquered by the Allies in May 1943.[214][216]
+
+ In June 1943 the British and Americans began a strategic bombing campaign against Germany with the goals of disrupting the war economy, reducing morale, and "de-housing" the civilian population.[217] The firebombing of Hamburg was among the first attacks in this campaign, inflicting significant casualties and considerable losses on the infrastructure of this important industrial centre.[218]
+
+ After the Guadalcanal Campaign, the Allies initiated several operations against Japan in the Pacific. In May 1943, Canadian and US forces were sent to eliminate Japanese forces from the Aleutians.[219] Soon after, the United States, with support from Australia, New Zealand and Pacific Islander forces, began major ground, sea and air operations to isolate Rabaul by capturing surrounding islands, and breach the Japanese Central Pacific perimeter at the Gilbert and Marshall Islands.[220] By the end of March 1944, the Allies had completed both of these objectives and had also neutralised the major Japanese base at Truk in the Caroline Islands. In April, the Allies launched an operation to retake Western New Guinea.[221]
+
+ In the Soviet Union, both the Germans and the Soviets spent the spring and early summer of 1943 preparing for large offensives in central Russia. On 4 July 1943, Germany attacked Soviet forces around the Kursk Bulge. Within a week, German forces had exhausted themselves against the Soviets' deeply echeloned and well-constructed defences,[222] and for the first time in the war Hitler cancelled the operation before it had achieved tactical or operational success.[223] This decision was partially affected by the Western Allies' invasion of Sicily launched on 9 July, which, combined with previous Italian failures, resulted in the ousting and arrest of Mussolini later that month.[224]
+
+ On 12 July 1943, the Soviets launched their own counter-offensives, thereby dispelling any chance of German victory or even stalemate in the east. The Soviet victory at Kursk marked the end of German superiority,[225] giving the Soviet Union the initiative on the Eastern Front.[226][227] The Germans tried to stabilise their eastern front along the hastily fortified Panther–Wotan line, but the Soviets broke through it at Smolensk and by the Lower Dnieper Offensives.[228]
+
+ On 3 September 1943, the Western Allies invaded the Italian mainland, following Italy's armistice with the Allies.[229] Germany, with the help of Italian fascists, responded by disarming Italian forces, which in many places had been left without orders from their superiors, seizing military control of Italian areas,[230] and creating a series of defensive lines.[231] German special forces then rescued Mussolini, who soon established a new client state in German-occupied Italy named the Italian Social Republic,[232] causing an Italian civil war. The Western Allies fought through several lines until reaching the main German defensive line in mid-November.[233]
+
+ German operations in the Atlantic also suffered. By May 1943, as Allied counter-measures became increasingly effective, the resulting sizeable German submarine losses forced a temporary halt of the German Atlantic naval campaign.[234] In November 1943, Franklin D. Roosevelt and Winston Churchill met with Chiang Kai-shek in Cairo and then with Joseph Stalin in Tehran.[235] The former conference determined the post-war return of Japanese territory[236] and the military planning for the Burma Campaign,[237] while the latter included agreement that the Western Allies would invade Europe in 1944 and that the Soviet Union would declare war on Japan within three months of Germany's defeat.[238]
+
+ From November 1943, during the seven-week Battle of Changde, the Chinese forced Japan to fight a costly war of attrition, while awaiting Allied relief.[239][240][241] In January 1944, the Allies launched a series of attacks in Italy against the line at Monte Cassino and tried to outflank it with landings at Anzio.[242]
+
+ On 27 January 1944, Soviet troops launched a major offensive that expelled German forces from the Leningrad region, thereby ending the most lethal siege in history.[243] The following Soviet offensive was halted on the pre-war Estonian border by the German Army Group North aided by Estonians hoping to re-establish national independence. This delay slowed subsequent Soviet operations in the Baltic Sea region.[244] By late May 1944, the Soviets had liberated Crimea, largely expelled Axis forces from Ukraine, and made incursions into Romania, which were repulsed by the Axis troops.[245] The Allied offensives in Italy had succeeded and, at the expense of allowing several German divisions to retreat, on 4 June Rome was captured.[246]
+
+ The Allies had mixed success in mainland Asia. In March 1944, the Japanese launched the first of two invasions, an operation against British positions in Assam, India,[247] and soon besieged Commonwealth positions at Imphal and Kohima.[248] In May 1944, British forces mounted a counter-offensive that drove Japanese troops back to Burma by July,[248] and Chinese forces that had invaded northern Burma in late 1943 besieged Japanese troops in Myitkyina.[249] The second Japanese invasion of China aimed to destroy China's main fighting forces, secure railways between Japanese-held territory and capture Allied airfields.[250] By June, the Japanese had conquered the province of Henan and begun a new attack on Changsha in Hunan province.[251]
+
+ On 6 June 1944 (known as D-Day), after three years of Soviet pressure,[252] the Western Allies invaded northern France. After reassigning several Allied divisions from Italy, they also attacked southern France.[253] These landings were successful, and led to the defeat of the German Army units in France. Paris was liberated on 25 August by the local resistance assisted by the Free French Forces, both led by General Charles de Gaulle,[254] and the Western Allies continued to push back German forces in western Europe during the latter part of the year. An attempt to advance into northern Germany, spearheaded by a major airborne operation in the Netherlands, failed.[255] After that, the Western Allies slowly pushed into Germany, but failed to cross the Rur river in a large offensive. In Italy, the Allied advance also slowed due to the last major German defensive line.[256]
+
+ On 22 June, the Soviets launched a strategic offensive in Belarus ("Operation Bagration") that destroyed the German Army Group Centre almost completely.[257] Soon after that, another Soviet strategic offensive forced German troops from Western Ukraine and Eastern Poland. The Soviets formed the Polish Committee of National Liberation to control territory in Poland and combat the Polish Armia Krajowa; the Soviet Red Army remained in the Praga district on the other side of the Vistula and watched passively as the Germans quelled the Warsaw Uprising initiated by the Armia Krajowa.[258] The national uprising in Slovakia was also quelled by the Germans.[259] The Soviet Red Army's strategic offensive in eastern Romania cut off and destroyed the considerable German forces there and triggered a successful coup d'état in Romania and in Bulgaria, followed by those countries' shift to the Allied side.[260]
+
+ In September 1944, Soviet troops advanced into Yugoslavia and forced the rapid withdrawal of German Army Groups E and F in Greece, Albania and Yugoslavia to rescue them from being cut off.[261] By this point, the Communist-led Partisans under Marshal Josip Broz Tito, who had led an increasingly successful guerrilla campaign against the occupation since 1941, controlled much of the territory of Yugoslavia and engaged in delaying efforts against German forces further south. In northern Serbia, the Soviet Red Army, with limited support from Bulgarian forces, assisted the Partisans in a joint liberation of the capital city of Belgrade on 20 October. A few days later, the Soviets launched a massive assault against German-occupied Hungary that lasted until the fall of Budapest in February 1945.[262] Unlike the impressive Soviet victories in the Balkans, bitter Finnish resistance to the Soviet offensive in the Karelian Isthmus denied the Soviets occupation of Finland and led to a Soviet-Finnish armistice on relatively mild conditions,[263] although Finland was forced to fight its former ally Germany.[264]
+
+ By the start of July 1944, Commonwealth forces in Southeast Asia had repelled the Japanese sieges in Assam, pushing the Japanese back to the Chindwin River[265] while the Chinese captured Myitkyina. In September 1944, Chinese forces captured Mount Song and reopened the Burma Road.[266] In China, the Japanese had more successes, having finally captured Changsha in mid-June and the city of Hengyang by early August.[267] Soon after, they invaded the province of Guangxi, winning major engagements against Chinese forces at Guilin and Liuzhou by the end of November[268] and successfully linking up their forces in China and Indochina by mid-December.[269]
+
+ In the Pacific, US forces continued to press back the Japanese perimeter. In mid-June 1944, they began their offensive against the Mariana and Palau islands, and decisively defeated Japanese forces in the Battle of the Philippine Sea. These defeats led to the resignation of the Japanese Prime Minister, Hideki Tojo, and provided the United States with air bases to launch intensive heavy bomber attacks on the Japanese home islands. In late October, American forces invaded the Filipino island of Leyte; soon after, Allied naval forces scored another large victory in the Battle of Leyte Gulf, one of the largest naval battles in history.[270]
+
+ On 16 December 1944, Germany made a last attempt on the Western Front by using most of its remaining reserves to launch a massive counter-offensive in the Ardennes, along the Franco-German border, to split the Western Allies, encircle large portions of Western Allied troops and capture their primary supply port at Antwerp to prompt a political settlement.[271] By January, the offensive had been repulsed with no strategic objectives fulfilled.[271] In Italy, the Western Allies remained stalemated at the German defensive line. In mid-January 1945, the Soviets and Poles attacked in Poland, pushing from the Vistula to the Oder river in Germany, and overran East Prussia.[272] On 4 February Soviet, British, and US leaders met for the Yalta Conference. They agreed on the occupation of post-war Germany, and on when the Soviet Union would join the war against Japan.[273]
+
+ In February, the Soviets entered Silesia and Pomerania, while the Western Allies entered western Germany and closed to the Rhine river. By March, the Western Allies crossed the Rhine north and south of the Ruhr, encircling the German Army Group B.[274] In early March, in an attempt to protect its last oil reserves in Hungary and to retake Budapest, Germany launched its last major offensive against Soviet troops near Lake Balaton. Within two weeks the offensive had been repulsed, and the Soviets advanced to Vienna and captured the city. In early April, Soviet troops captured Königsberg, while the Western Allies finally pushed forward in Italy and swept across western Germany, capturing Hamburg and Nuremberg. American and Soviet forces met at the Elbe river on 25 April, leaving several unoccupied pockets in southern Germany and around Berlin.
+
+ Soviet and Polish forces stormed and captured Berlin in late April. In Italy, German forces surrendered on 29 April. On 30 April, the Reichstag was captured, signalling the military defeat of Nazi Germany;[275] the Berlin garrison surrendered on 2 May.
+
+ Several changes in leadership occurred during this period. On 12 April, President Roosevelt died and was succeeded by Harry S. Truman. Benito Mussolini was killed by Italian partisans on 28 April.[276] Two days later, Hitler committed suicide in besieged Berlin, and he was succeeded by Grand Admiral Karl Dönitz.[277]
+ Germany's total and unconditional surrender in Europe was signed on 7 and 8 May, to take effect by the end of 8 May.[278] German Army Group Centre resisted in Prague until 11 May.[279]
+
+ In the Pacific theatre, American forces accompanied by the forces of the Philippine Commonwealth advanced in the Philippines, clearing Leyte by the end of April 1945. They landed on Luzon in January 1945 and recaptured Manila in March. Fighting continued on Luzon, Mindanao, and other islands of the Philippines until the end of the war.[280] Meanwhile, the United States Army Air Forces launched a massive firebombing campaign of strategic cities in Japan in an effort to destroy Japanese war industry and civilian morale. A devastating bombing raid on Tokyo of 9–10 March was the deadliest conventional bombing raid in history.[281]
+
+ In May 1945, Australian troops landed in Borneo, over-running the oilfields there. British, American, and Chinese forces defeated the Japanese in northern Burma in March, and the British pushed on to reach Rangoon by 3 May.[282] Chinese forces launched a counter-attack in the Battle of West Hunan, fought between 6 April and 7 June 1945. American naval and amphibious forces also moved towards Japan, taking Iwo Jima by March, and Okinawa by the end of June.[283] At the same time, American submarines cut off Japanese imports, drastically reducing Japan's ability to supply its overseas forces.[284]
+
+ On 11 July, Allied leaders met in Potsdam, Germany. They confirmed earlier agreements about Germany,[285] and the American, British and Chinese governments reiterated the demand for unconditional surrender of Japan, specifically stating that "the alternative for Japan is prompt and utter destruction".[286] During this conference, the United Kingdom held its general election, and Clement Attlee replaced Churchill as Prime Minister.[287]
+
+ The call for unconditional surrender was rejected by the Japanese government, which believed it would be capable of negotiating for more favourable surrender terms.[288] In early August, the United States dropped atomic bombs on the Japanese cities of Hiroshima and Nagasaki. Between the two bombings, the Soviets, pursuant to the Yalta agreement, invaded Japanese-held Manchuria and quickly defeated the Kwantung Army, which was the largest Japanese fighting force.[289] These two events persuaded previously adamant Imperial Army leaders to accept surrender terms.[290] The Red Army also captured the southern part of Sakhalin Island and the Kuril Islands. On 15 August 1945, Japan surrendered, with the surrender documents finally signed at Tokyo Bay on the deck of the American battleship USS Missouri on 2 September 1945, ending the war.[291]
+
+ The Allies established occupation administrations in Austria and Germany. The former became a neutral state, non-aligned with any political bloc. The latter was divided into western and eastern occupation zones controlled by the Western Allies and the Soviet Union. A denazification programme in Germany led to the prosecution of Nazi war criminals in the Nuremberg trials and the removal of ex-Nazis from power, although this policy moved towards amnesty and re-integration of ex-Nazis into West German society.[292]
+
+ Germany lost a quarter of its pre-war (1937) territory. Among the eastern territories, Silesia, Neumark and most of Pomerania were taken over by Poland,[293] and East Prussia was divided between Poland and the Soviet Union, followed by the expulsion to Germany of the nine million Germans from these provinces,[294][295] as well as three million Germans from the Sudetenland in Czechoslovakia. By the 1950s, one-fifth of West Germans were refugees from the east. The Soviet Union also took over the Polish provinces east of the Curzon line,[296] from which 2 million Poles were expelled;[295][297] north-east Romania,[298][299] parts of eastern Finland,[300] and the three Baltic states were incorporated into the Soviet Union.[301][302]
+
+ In an effort to maintain world peace,[303] the Allies formed the United Nations, which officially came into existence on 24 October 1945,[304] and adopted the Universal Declaration of Human Rights in 1948 as a common standard for all member nations.[305] The great powers that were the victors of the war—France, China, the United Kingdom, the Soviet Union and the United States—became the permanent members of the UN's Security Council.[306] The five permanent members remain so to the present, although there have been two seat changes, between the Republic of China and the People's Republic of China in 1971, and between the Soviet Union and its successor state, the Russian Federation, following the dissolution of the Soviet Union in 1991. The alliance between the Western Allies and the Soviet Union had begun to deteriorate even before the war was over.[307]
+
+ Germany had been de facto divided, and two independent states, the Federal Republic of Germany (West Germany) and the German Democratic Republic (East Germany),[308] were created within the borders of Allied and Soviet occupation zones. The rest of Europe was also divided into Western and Soviet spheres of influence.[309] Most eastern and central European countries fell into the Soviet sphere, which led to the establishment of Communist-led regimes, with full or partial support of the Soviet occupation authorities. As a result, East Germany,[310] Poland, Hungary, Romania, Czechoslovakia, and Albania[311] became Soviet satellite states. Communist Yugoslavia conducted a fully independent policy, causing tension with the Soviet Union.[312]
+
+ Post-war division of the world was formalised by two international military alliances, the United States-led NATO and the Soviet-led Warsaw Pact.[313] The long period of political tensions and military competition between them, the Cold War, would be accompanied by an unprecedented arms race and proxy wars.[314]
+
+ In Asia, the United States led the occupation of Japan and administered Japan's former islands in the Western Pacific, while the Soviets annexed South Sakhalin and the Kuril Islands.[315] Korea, formerly under Japanese rule, was divided and occupied by the Soviet Union in the North and the United States in the South between 1945 and 1948. Separate republics emerged on both sides of the 38th parallel in 1948, each claiming to be the legitimate government for all of Korea, which led ultimately to the Korean War.[316]
+
+ In China, nationalist and communist forces resumed the civil war in June 1946. Communist forces were victorious and established the People's Republic of China on the mainland, while nationalist forces retreated to Taiwan in 1949.[317] In the Middle East, the Arab rejection of the United Nations Partition Plan for Palestine and the creation of Israel marked the escalation of the Arab–Israeli conflict. While European powers attempted to retain some or all of their colonial empires, their losses of prestige and resources during the war rendered this unsuccessful, leading to decolonisation.[318][319]
+
+ The global economy suffered heavily from the war, although participating nations were affected differently. The United States emerged much richer than any other nation, leading to a baby boom, and by 1950 its gross domestic product per person was much higher than that of any of the other powers, and it dominated the world economy.[320] The UK and US pursued a policy of industrial disarmament in Western Germany in the years 1945–1948.[321] Because of international trade interdependencies this led to European economic stagnation and delayed European recovery for several years.[322][323]
+
+ Recovery began with the mid-1948 currency reform in Western Germany, and was sped up by the liberalisation of European economic policy that the Marshall Plan (1948–1951) both directly and indirectly caused.[324][325] The post-1948 West German recovery has been called the German economic miracle.[326] Italy also experienced an economic boom[327] and the French economy rebounded.[328] By contrast, the United Kingdom was in a state of economic ruin,[329] and although receiving a quarter of the total Marshall Plan assistance, more than any other European country,[330] it continued in relative economic decline for decades.[331]
+
+ The Soviet Union, despite enormous human and material losses, also experienced rapid increase in production in the immediate post-war era.[332] Japan recovered much later.[333] China returned to its pre-war industrial production by 1952.[334]
+
+ Estimates for the total number of casualties in the war vary, because many deaths went unrecorded.[335] Most suggest that some 60 million people died in the war, including about 20 million military personnel and 40 million civilians.[336][337][338]
+ Many of the civilians died because of deliberate genocide, massacres, mass bombings, disease, and starvation.
+
+ The Soviet Union alone lost around 27 million people during the war,[339] including 8.7 million military and 19 million civilian deaths.[340] A quarter of the people in the Soviet Union were wounded or killed.[341] Germany sustained 5.3 million military losses, mostly on the Eastern Front and during the final battles in Germany.[342]
+
+ An estimated 11[343] to 17 million[344] civilians died as a direct or indirect result of Nazi racist policies, including the mass killing of around 6 million Jews, along with Roma, homosexuals, at least 1.9 million ethnic Poles[345][346] and millions of other Slavs (including Russians, Ukrainians and Belarusians), and other ethnic and minority groups.[347][344] Between 1941 and 1945, more than 200,000 ethnic Serbs, along with Roma and Jews, were persecuted and murdered by the Axis-aligned Croatian Ustaše in Yugoslavia.[348] Also, more than 100,000 Poles were massacred by the Ukrainian Insurgent Army in the Volhynia massacres between 1943 and 1945.[349] At the same time, about 10,000–15,000 Ukrainians were killed by the Polish Home Army and other Polish units in reprisal attacks.[350]
+
+ In Asia and the Pacific, between 3 million and more than 10 million civilians, mostly Chinese (estimated at 7.5 million[351]), were killed by the Japanese occupation forces.[352] The most infamous Japanese atrocity was the Nanking Massacre, in which an estimated 50,000 to 300,000 Chinese civilians were raped and murdered.[353] Mitsuyoshi Himeta reported that 2.7 million casualties occurred during the Sankō Sakusen. General Yasuji Okamura implemented the policy in Heipei and Shantung.[354]
+
+ Axis forces employed biological and chemical weapons. The Imperial Japanese Army used a variety of such weapons during its invasion and occupation of China (see Unit 731)[355][356] and in early conflicts against the Soviets.[357] Both the Germans and the Japanese tested such weapons against civilians,[358] and sometimes on prisoners of war.[359]
+
+ The Soviet Union was responsible for the Katyn massacre of 22,000 Polish officers,[360] and the imprisonment or execution of thousands of political prisoners by the NKVD, along with mass civilian deportations to Siberia, in the Baltic states and eastern Poland annexed by the Red Army.[361]
+
+ The mass bombing of cities in Europe and Asia has often been called a war crime, although no positive or specific customary international humanitarian law with respect to aerial warfare existed before or during World War II.[362] The USAAF firebombed a total of 67 Japanese cities, killing 393,000 civilians and destroying 65% of built-up areas.[363]
+
+ Nazi Germany was responsible for the Holocaust (which killed approximately 6 million Jews) as well as for killing 2.7 million ethnic Poles[364] and 4 million others who were deemed "unworthy of life" (including the disabled and mentally ill, Soviet prisoners of war, Romani, homosexuals, Freemasons, and Jehovah's Witnesses) as part of a programme of deliberate extermination, in effect becoming a "genocidal state".[365] Soviet POWs were kept in especially unbearable conditions, and 3.6 million of the 5.7 million Soviet POWs died in Nazi camps during the war.[366][367] In addition to concentration camps, death camps were created in Nazi Germany to exterminate people on an industrial scale. Nazi Germany extensively used forced labourers; about 12 million Europeans from German-occupied countries were abducted and used as a slave work force in German industry, agriculture and the war economy.[368]
+
+ The Soviet Gulag became a de facto system of deadly camps during 1942–43, when wartime privation and hunger caused numerous deaths of inmates,[369] including foreign citizens of Poland and other countries occupied in 1939–40 by the Soviet Union, as well as Axis POWs.[370] By the end of the war, most Soviet POWs liberated from Nazi camps and many repatriated civilians were detained in special filtration camps where they were subjected to NKVD evaluation, and 226,127 were sent to the Gulag as real or perceived Nazi collaborators.[371]
+
+ Japanese prisoner-of-war camps, many of which were used as labour camps, also had high death rates. The International Military Tribunal for the Far East found the death rate of Western prisoners was 27 per cent (for American POWs, 37 per cent),[372] seven times that of POWs under the Germans and Italians.[373] While 37,583 prisoners from the UK, 28,500 from the Netherlands, and 14,473 from the United States were released after the surrender of Japan, the number of Chinese released was only 56.[374]
236
+
237
+ At least five million Chinese civilians from northern China and Manchukuo were enslaved between 1935 and 1941 by the East Asia Development Board, or Kōain, for work in mines and war industries. After 1942, the number reached 10 million.[375] In Java, between 4 and 10 million rōmusha (Japanese: "manual labourers") were forced to work by the Japanese military. About 270,000 of these Javanese labourers were sent to other Japanese-held areas in South East Asia, and only 52,000 were repatriated to Java.[376]
238
+
239
+ In Europe, occupation took two forms. In Western, Northern, and Central Europe (France, Norway, Denmark, the Low Countries, and the annexed portions of Czechoslovakia) Germany established economic policies through which it collected roughly 69.5 billion Reichsmarks (27.8 billion US dollars) by the end of the war; this figure does not include the sizeable plunder of industrial products, military equipment, raw materials and other goods.[377] The income from occupied nations amounted to over 40 per cent of the income Germany collected from taxation, and rose to nearly 40 per cent of total German income as the war went on.[378]
240
+
241
+ In the East, the intended gains of Lebensraum were never attained as fluctuating front-lines and Soviet scorched earth policies denied resources to the German invaders.[379] Unlike in the West, the Nazi racial policy encouraged extreme brutality against what it considered to be the "inferior people" of Slavic descent; most German advances were thus followed by mass executions.[380] Although resistance groups formed in most occupied territories, they did not significantly hamper German operations in either the East[381] or the West[382] until late 1943.
242
+
243
+ In Asia, Japan termed nations under its occupation as being part of the Greater East Asia Co-Prosperity Sphere, essentially a Japanese hegemony which it claimed was for purposes of liberating colonised peoples.[383] Although Japanese forces were sometimes welcomed as liberators from European domination, Japanese war crimes frequently turned local public opinion against them.[384] During Japan's initial conquest it captured 4,000,000 barrels (640,000 m3) of oil (~5.5×10^5 tonnes) left behind by retreating Allied forces, and by 1943 was able to get production in the Dutch East Indies up to 50 million barrels (~6.8×10^6 tonnes), 76 per cent of its 1940 output rate.[384]
244
+
245
+ In Europe, before the outbreak of the war, the Allies had significant advantages in both population and economics. In 1938, the Western Allies (United Kingdom, France, Poland and the British Dominions) had a 30 per cent larger population and a 30 per cent higher gross domestic product than the European Axis powers (Germany and Italy); if colonies are included, the Allies had more than a 5:1 advantage in population and a nearly 2:1 advantage in GDP.[385] In Asia at the same time, China had roughly six times the population of Japan but only an 89 per cent higher GDP; this is reduced to three times the population and only a 38 per cent higher GDP if Japanese colonies are included.[385]
246
+
247
+ The United States produced about two-thirds of all the munitions used by the Allies in WWII, including warships, transports, warplanes, artillery, tanks, trucks, and ammunition.[386]
248
+ Though the Allies' economic and population advantages were largely mitigated during the initial rapid blitzkrieg attacks of Germany and Japan, they became the decisive factor by 1942, after the United States and Soviet Union joined the Allies, as the war largely settled into one of attrition.[387] While the Allies' ability to out-produce the Axis is often attributed to the Allies having more access to natural resources, other factors, such as Germany and Japan's reluctance to employ women in the labour force,[388] Allied strategic bombing,[389] and Germany's late shift to a war economy[390] contributed significantly. Additionally, neither Germany nor Japan planned to fight a protracted war, and had not equipped themselves to do so.[391] To improve their production, Germany and Japan used millions of slave labourers;[392] Germany used about 12 million people, mostly from Eastern Europe,[368] while Japan used more than 18 million people in Far East Asia.[375][376]
249
+
250
+ Aircraft were used for reconnaissance, as fighters, bombers, and ground-support, and each role was advanced considerably. Innovations included airlift (the capability to quickly move limited high-priority supplies, equipment, and personnel)[393] and strategic bombing (the bombing of enemy industrial and population centres to destroy the enemy's ability to wage war).[394] Anti-aircraft weaponry also advanced, including defences such as radar and surface-to-air artillery. The use of the jet aircraft was pioneered and, though its late introduction meant it had little impact, it led to jets becoming standard in air forces worldwide.[395] Although guided missiles were being developed, they were not advanced enough to reliably target aircraft until some years after the war.
251
+
252
+ Advances were made in nearly every aspect of naval warfare, most notably with aircraft carriers and submarines. Although aeronautical warfare had relatively little success at the start of the war, actions at Taranto, Pearl Harbor, and the Coral Sea established the carrier as the dominant capital ship in place of the battleship.[396][397][398] In the Atlantic, escort carriers proved to be a vital part of Allied convoys, increasing the effective protection radius and helping to close the Mid-Atlantic gap.[399] Carriers were also more economical than battleships because of the relatively low cost of aircraft[400] and because they did not need to be as heavily armoured.[401] Submarines, which had proved to be an effective weapon during the First World War,[402] were anticipated by all sides to be important in the second. The British focused development on anti-submarine weaponry and tactics, such as sonar and convoys, while Germany focused on improving its offensive capability, with designs such as the Type VII submarine and wolfpack tactics.[403] Gradually, improving Allied technologies such as the Leigh light, hedgehog, squid, and homing torpedoes proved victorious over the German submarines.
253
+
254
+ Land warfare changed from the static front lines of trench warfare of World War I, which had relied on improved artillery that outmatched the speed of both infantry and cavalry, to increased mobility and combined arms. The tank, which had been used predominantly for infantry support in the First World War, had evolved into the primary weapon.[404] In the late 1930s, tank design was considerably more advanced than it had been during World War I,[405] and advances continued throughout the war with increases in speed, armour and firepower. At the start of the war, most commanders thought enemy tanks should be met by tanks with superior specifications.[406] This idea was challenged by the poor performance of the relatively light early tank guns against armour, and by German doctrine of avoiding tank-versus-tank combat. This doctrine, along with Germany's use of combined arms, was among the key elements of the highly successful blitzkrieg tactics across Poland and France.[404] Many means of destroying tanks, including indirect artillery, anti-tank guns (both towed and self-propelled), mines, short-ranged infantry antitank weapons, and other tanks, were used.[406] Even with large-scale mechanisation, infantry remained the backbone of all forces,[407] and throughout the war, most infantry were equipped similarly to World War I.[408] The portable machine gun spread, a notable example being the German MG34, as did various submachine guns, which were suited to close combat in urban and jungle settings.[408] The assault rifle, a late-war development incorporating many features of the rifle and submachine gun, became the standard postwar infantry weapon for most armed forces.[409]
255
+
256
+ Most major belligerents attempted to solve the problems of complexity and security involved in using large codebooks for cryptography by designing ciphering machines, the best known being the German Enigma machine.[410] Development of SIGINT (signals intelligence) and cryptanalysis enabled the countering process of decryption. Notable examples were the Allied decryption of Japanese naval codes[411] and British Ultra, a pioneering method for decoding Enigma that benefited from information given to the United Kingdom by the Polish Cipher Bureau, which had been decoding early versions of Enigma before the war.[412] Another aspect of military intelligence was the use of deception, which the Allies used to great effect, such as in operations Mincemeat and Bodyguard.[411][413]
257
+
258
+ Other technological and engineering feats achieved during, or as a result of, the war include the world's first programmable computers (Z3, Colossus, and ENIAC), guided missiles and modern rockets, the Manhattan Project's development of nuclear weapons, operations research, the development of artificial harbours, and oil pipelines under the English Channel. Penicillin was first mass-produced and used during the war (see Stabilization and mass production of penicillin).[414]
259
+
en/5324.html.txt ADDED
@@ -0,0 +1,259 @@
 
13
+ World War II (WWII or WW2), also known as the Second World War, was a global war that lasted from 1939 to 1945. It involved the vast majority of the world's countries—including all the great powers—forming two opposing military alliances: the Allies and the Axis. In a state of total war, directly involving more than 100 million people from more than 30 countries, the major participants threw their entire economic, industrial, and scientific capabilities behind the war effort, blurring the distinction between civilian and military resources. World War II was the deadliest conflict in human history, marked by 70 to 85 million fatalities. Tens of millions of people died due to genocides (including the Holocaust), premeditated death from starvation, massacres, and disease. Aircraft played a major role in the conflict, enabling the strategic bombing of population centres and the only two uses of nuclear weapons in war.
14
+
15
+ World War II is generally considered to have begun on 1 September 1939, with the invasion of Poland by Germany and subsequent declarations of war on Germany by France and the United Kingdom. From late 1939 to early 1941, in a series of campaigns and treaties, Germany conquered or controlled much of continental Europe, and formed the Axis alliance with Italy and Japan. Under the Molotov–Ribbentrop Pact of August 1939, Germany and the Soviet Union partitioned and annexed territories of their European neighbours: Poland, Finland, Romania and the Baltic states. Following the onset of campaigns in North Africa and East Africa, and the Fall of France in mid-1940, the war continued primarily between the European Axis powers and the British Empire, with war in the Balkans, the aerial Battle of Britain, the Blitz, and the Battle of the Atlantic. On 22 June 1941, Germany led the European Axis powers in an invasion of the Soviet Union, opening the largest land theatre of war in history and trapping the Axis, crucially the German Wehrmacht, in a war of attrition.
16
+
17
+ Japan, which aimed to dominate Asia and the Pacific, was at war with the Republic of China by 1937. In December 1941, Japan launched a surprise attack on the United States as well as European colonies in East Asia and the Pacific. Following an immediate US declaration of war against Japan, supported by one from the UK, the European Axis powers declared war on the United States in solidarity with their ally. Japan soon captured much of the Western Pacific, but its advances were halted in 1942 after Japan lost the critical Battle of Midway; later, Germany and Italy were defeated in North Africa and at Stalingrad in the Soviet Union. Key setbacks in 1943—which included a series of German defeats on the Eastern Front, the Allied invasions of Sicily and Italy, and Allied offensives in the Pacific—cost the Axis its initiative and forced it into strategic retreat on all fronts. In 1944, the Western Allies invaded German-occupied France, while the Soviet Union regained its territorial losses and turned towards Germany and its allies. During 1944 and 1945, the Japanese suffered reversals in mainland Asia, while the Allies crippled the Japanese Navy and captured key Western Pacific islands.
18
+
19
+ The war in Europe concluded with an invasion of Germany by the Western Allies and the Soviet Union, culminating in the capture of Berlin by Soviet troops, the suicide of Adolf Hitler and the German unconditional surrender on 8 May 1945. Following the Potsdam Declaration by the Allies on 26 July 1945 and the refusal of Japan to surrender on its terms, the United States dropped atomic bombs on the Japanese cities of Hiroshima and Nagasaki on 6 and 9 August, respectively. Faced with an imminent invasion of the Japanese archipelago, the possibility of additional atomic bombings, and the Soviet entry into the war against Japan and its invasion of Manchuria on 9 August, Japan announced its intention to surrender on 15 August 1945, cementing total victory in Asia for the Allies. In the wake of the war, Germany and Japan were occupied and war crimes tribunals were conducted against German and Japanese leaders.
20
+
21
+ World War II changed the political alignment and social structure of the globe. The United Nations (UN) was established to foster international co-operation and prevent future conflicts, and the victorious great powers—China, France, the Soviet Union, the United Kingdom, and the United States—became the permanent members of its Security Council. The Soviet Union and the United States emerged as rival superpowers, setting the stage for the nearly half-century-long Cold War. In the wake of European devastation, the influence of its great powers waned, triggering the decolonisation of Africa and Asia. Most countries whose industries had been damaged moved towards economic recovery and expansion. Political integration, especially in Europe, began as an effort to forestall future hostilities, end pre-war enmities and forge a sense of common identity.
22
+
23
+ The start of the war in Europe is generally held to be 1 September 1939,[1][2] beginning with the German invasion of Poland; the United Kingdom and France declared war on Germany two days later. The dates for the beginning of war in the Pacific include the start of the Second Sino-Japanese War on 7 July 1937,[3][4] or even the Japanese invasion of Manchuria on 19 September 1931.[5][6][7]
24
+
25
+ Others follow the British historian A. J. P. Taylor, who held that the Sino-Japanese War and war in Europe and its colonies occurred simultaneously, and the two wars merged in 1941. This article uses the conventional dating. Other starting dates sometimes used for World War II include the Italian invasion of Abyssinia on 3 October 1935.[8] The British historian Antony Beevor views the beginning of World War II as the Battles of Khalkhin Gol fought between Japan and the forces of Mongolia and the Soviet Union from May to September 1939.[9]
26
+
27
+ The exact date of the war's end is also not universally agreed upon. It was generally accepted at the time that the war ended with the armistice of 14 August 1945 (V-J Day), rather than with the formal surrender of Japan on 2 September 1945, which officially ended the war in Asia. A peace treaty with Japan was signed in 1951.[10] A treaty regarding Germany's future allowed the reunification of East and West Germany to take place in 1990 and resolved most post-World War II issues.[11] No formal peace treaty between Japan and the Soviet Union was ever signed.[12]
28
+
29
+ World War I had radically altered the political European map, with the defeat of the Central Powers—including Austria-Hungary, Germany, Bulgaria and the Ottoman Empire—and the 1917 Bolshevik seizure of power in Russia, which eventually led to the founding of the Soviet Union. Meanwhile, the victorious Allies of World War I, such as France, Belgium, Italy, Romania, and Greece, gained territory, and new nation-states were created out of the collapse of Austria-Hungary and the Ottoman and Russian Empires.
30
+
31
+ To prevent a future world war, the League of Nations was created during the 1919 Paris Peace Conference. The organisation's primary goals were to prevent armed conflict through collective security, military and naval disarmament, and settling international disputes through peaceful negotiations and arbitration.
32
+
33
+ Despite strong pacifist sentiment after World War I,[13] its aftermath still caused irredentist and revanchist nationalism in several European states. These sentiments were especially marked in Germany because of the significant territorial, colonial, and financial losses incurred by the Treaty of Versailles. Under the treaty, Germany lost around 13 percent of its home territory and all its overseas possessions, while German annexation of other states was prohibited, reparations were imposed, and limits were placed on the size and capability of the country's armed forces.[14]
34
+
35
+ The German Empire was dissolved in the German Revolution of 1918–1919, and a democratic government, later known as the Weimar Republic, was created. The interwar period saw strife between supporters of the new republic and hardline opponents on both the right and left. Italy, as an Entente ally, had made some post-war territorial gains; however, Italian nationalists were angered that the promises made by the United Kingdom and France to secure Italian entrance into the war were not fulfilled in the peace settlement. From 1922 to 1925, the Fascist movement led by Benito Mussolini seized power in Italy with a nationalist, totalitarian, and class collaborationist agenda that abolished representative democracy, repressed socialist, left-wing and liberal forces, and pursued an aggressive expansionist foreign policy aimed at making Italy a world power, promising the creation of a "New Roman Empire".[15]
36
+
37
+ Adolf Hitler, after an unsuccessful attempt to overthrow the German government in 1923, eventually became the Chancellor of Germany in 1933. He abolished democracy, espousing a radical, racially motivated revision of the world order, and soon began a massive rearmament campaign.[16] Meanwhile, France, to secure its alliance, allowed Italy a free hand in Ethiopia, which Italy desired as a colonial possession. The situation was aggravated in early 1935 when the Territory of the Saar Basin was legally reunited with Germany and Hitler repudiated the Treaty of Versailles, accelerated his rearmament programme, and introduced conscription.[17]
38
+
39
+ The United Kingdom, France and Italy formed the Stresa Front in April 1935 in order to contain Germany; however, that June, the United Kingdom made an independent naval agreement with Germany, easing prior restrictions. The Soviet Union, concerned by Germany's goals of capturing vast areas of Eastern Europe, drafted a treaty of mutual assistance with France. Before taking effect, though, the Franco-Soviet pact was required to go through the bureaucracy of the League of Nations, which rendered it essentially toothless.[18] The United States, concerned with events in Europe and Asia, passed the Neutrality Act in August of the same year.[19]
40
+
41
+ Hitler defied the Versailles and Locarno treaties by remilitarising the Rhineland in March 1936, encountering little opposition due to appeasement.[20] In October 1936, Germany and Italy formed the Rome–Berlin Axis. A month later, Germany and Japan signed the Anti-Comintern Pact, which Italy joined the following year.[21]
42
+
43
+ The Kuomintang (KMT) party in China launched a unification campaign against regional warlords and nominally unified China in the mid-1920s, but was soon embroiled in a civil war against its former Chinese Communist Party allies[22] and new regional warlords. In 1931, an increasingly militaristic Empire of Japan, which had long sought influence in China[23] as the first step of what its government saw as the country's right to rule Asia, staged the Mukden Incident as a pretext to invade Manchuria and establish the puppet state of Manchukuo.[24]
44
+
45
+ China appealed to the League of Nations to stop the Japanese invasion of Manchuria. Japan withdrew from the League of Nations after being condemned for its incursion into Manchuria. The two nations then fought several battles, in Shanghai, Rehe and Hebei, until the Tanggu Truce was signed in 1933. Thereafter, Chinese volunteer forces continued the resistance to Japanese aggression in Manchuria, Chahar and Suiyuan.[25] After the 1936 Xi'an Incident, the Kuomintang and communist forces agreed on a ceasefire to present a united front to oppose Japan.[26]
46
+
47
+ The Second Italo–Ethiopian War was a brief colonial war that began in October 1935 and ended in May 1936. The war began with the invasion of the Ethiopian Empire (also known as Abyssinia) by the armed forces of the Kingdom of Italy (Regno d'Italia), launched from Italian Somaliland and Eritrea.[27] The war resulted in the military occupation of Ethiopia and its annexation into the newly created colony of Italian East Africa (Africa Orientale Italiana, or AOI); in addition, it exposed the weakness of the League of Nations as a force to preserve peace. Both Italy and Ethiopia were member nations, but the League did little when the former clearly violated Article X of the League's Covenant.[28] The United Kingdom and France supported imposing sanctions on Italy for the invasion, but the sanctions were not fully enforced and failed to end the Italian invasion.[29] Italy subsequently dropped its objections to Germany's goal of absorbing Austria.[30]
48
+
49
+ When civil war broke out in Spain, Hitler and Mussolini lent military support to the Nationalist rebels, led by General Francisco Franco. Italy supported the Nationalists to a greater extent than the Nazis did: altogether Mussolini sent more than 70,000 ground troops, 6,000 aviation personnel, and about 720 aircraft to Spain.[31] The Soviet Union supported the existing government, the Spanish Republic. More than 30,000 foreign volunteers, known as the International Brigades, also fought against the Nationalists. Both Germany and the Soviet Union used this proxy war as an opportunity to test in combat their most advanced weapons and tactics. The Nationalists won the civil war in April 1939; Franco, now dictator, remained officially neutral during World War II but generally favoured the Axis.[32] His greatest collaboration with Germany was the sending of volunteers to fight on the Eastern Front.[33]
50
+
51
+ In July 1937, Japan captured the former Chinese imperial capital of Peking after instigating the Marco Polo Bridge Incident, which culminated in the Japanese campaign to invade all of China.[34] The Soviets quickly signed a non-aggression pact with China to lend materiel support, effectively ending China's prior co-operation with Germany. From September to November, the Japanese attacked Taiyuan, engaged the Kuomintang Army around Xinkou,[35] and fought Communist forces in Pingxingguan.[36][37] Generalissimo Chiang Kai-shek deployed his best army to defend Shanghai, but, after three months of fighting, Shanghai fell. The Japanese continued to push the Chinese forces back, capturing the capital Nanking in December 1937. After the fall of Nanking, tens of thousands if not hundreds of thousands of Chinese civilians and disarmed combatants were murdered by the Japanese.[38][39]
52
+
53
+ In March 1938, Nationalist Chinese forces won their first major victory at Taierzhuang, but then the city of Xuzhou was taken by the Japanese in May.[40] In June 1938, Chinese forces stalled the Japanese advance by flooding the Yellow River; this manoeuvre bought time for the Chinese to prepare their defences at Wuhan, but the city was taken by October.[41] Japanese military victories did not bring about the collapse of Chinese resistance that Japan had hoped to achieve; instead, the Chinese government relocated inland to Chongqing and continued the war.[42][43]
54
+
55
+ In the mid-to-late 1930s, Japanese forces in Manchukuo had sporadic border clashes with the Soviet Union and Mongolia. The Japanese doctrine of Hokushin-ron, which emphasised Japan's expansion northward, was favoured by the Imperial Army during this time. With the Japanese defeat at Khalkhin Gol in 1939, the ongoing Second Sino-Japanese War[44] and ally Nazi Germany pursuing neutrality with the Soviets, this policy would prove difficult to maintain. Japan and the Soviet Union eventually signed a Neutrality Pact in April 1941, and Japan adopted the doctrine of Nanshin-ron, promoted by the Navy, which took its focus southward, eventually leading to its war with the United States and the Western Allies.[45][46]
56
+
57
+ In Europe, Germany and Italy were becoming more aggressive. In March 1938, Germany annexed Austria, again provoking little response from other European powers.[47] Encouraged, Hitler began pressing German claims on the Sudetenland, an area of Czechoslovakia with a predominantly ethnic German population. Soon the United Kingdom and France followed the appeasement policy of British Prime Minister Neville Chamberlain and conceded this territory to Germany in the Munich Agreement, which was made against the wishes of the Czechoslovak government, in exchange for a promise of no further territorial demands.[48] Soon afterwards, Germany and Italy forced Czechoslovakia to cede additional territory to Hungary, and Poland annexed Czechoslovakia's Zaolzie region.[49]
58
+
59
+ Although all of Germany's stated demands had been satisfied by the agreement, privately Hitler was furious that British interference had prevented him from seizing all of Czechoslovakia in one operation. In subsequent speeches Hitler attacked British and Jewish "war-mongers" and in January 1939 secretly ordered a major build-up of the German navy to challenge British naval supremacy. In March 1939, Germany invaded the remainder of Czechoslovakia and subsequently split it into the German Protectorate of Bohemia and Moravia and a pro-German client state, the Slovak Republic.[50] Hitler also delivered an ultimatum to Lithuania on 20 March 1939, forcing the concession of the Klaipėda Region, formerly the German Memelland.[51]
60
+
61
+ Greatly alarmed and with Hitler making further demands on the Free City of Danzig, the United Kingdom and France guaranteed their support for Polish independence; when Italy conquered Albania in April 1939, the same guarantee was extended to Romania and Greece.[52] Shortly after the Franco-British pledge to Poland, Germany and Italy formalised their own alliance with the Pact of Steel.[53] Hitler accused the United Kingdom and Poland of trying to "encircle" Germany and renounced the Anglo-German Naval Agreement and the German–Polish Non-Aggression Pact.[54]
62
+
63
+ The situation reached a general crisis in late August as German troops continued to mobilise against the Polish border. On 23 August, when tripartite negotiations about a military alliance between France, the United Kingdom and Soviet Union stalled,[55] the Soviet Union signed a non-aggression pact with Germany.[56] This pact had a secret protocol that defined German and Soviet "spheres of influence" (western Poland and Lithuania for Germany; eastern Poland, Finland, Estonia, Latvia and Bessarabia for the Soviet Union), and raised the question of continuing Polish independence.[57] The pact neutralised the possibility of Soviet opposition to a campaign against Poland and assured that Germany would not have to face the prospect of a two-front war, as it had in World War I. Immediately after that, Hitler ordered the attack to proceed on 26 August, but upon hearing that the United Kingdom had concluded a formal mutual assistance pact with Poland, and that Italy would maintain neutrality, he decided to delay it.[58]
64
+
65
+ In response to British requests for direct negotiations to avoid war, Germany made demands on Poland, which only served as a pretext to worsen relations.[59] On 29 August, Hitler demanded that a Polish plenipotentiary immediately travel to Berlin to negotiate the handover of Danzig, and to allow a plebiscite in the Polish Corridor in which the German minority would vote on secession.[59] The Poles refused to comply with the German demands, and on the night of 30–31 August, in a stormy meeting with the British ambassador Neville Henderson, the German Foreign Minister Joachim von Ribbentrop declared that Germany considered its claims rejected.[60]
66
+
67
+ On 1 September 1939, Germany invaded Poland after having staged several false flag border incidents as a pretext to initiate the invasion.[61] The first German attack of the war came against the Polish defences at Westerplatte.[62] The United Kingdom responded with an ultimatum to Germany to cease military operations, and on 3 September, after the ultimatum was ignored, France and Britain declared war on Germany, followed by Australia, New Zealand, South Africa and Canada. The alliance provided no direct military support to Poland, outside of a cautious French probe into the Saarland.[63] The Western Allies also began a naval blockade of Germany, which aimed to damage the country's economy and its war effort.[64] Germany responded by ordering U-boat warfare against Allied merchant and warships, which would later escalate into the Battle of the Atlantic.[65]
68
+
69
+ On 8 September, German troops reached the suburbs of Warsaw. The Polish counter-offensive to the west halted the German advance for several days, but it was outflanked and encircled by the Wehrmacht. Remnants of the Polish army broke through to besieged Warsaw. On 17 September 1939, after signing a cease-fire with Japan, the Soviets invaded Eastern Poland[66] under the pretext that the Polish state had ostensibly ceased to exist.[67] On 27 September, the Warsaw garrison surrendered to the Germans, and the last large operational unit of the Polish Army surrendered on 6 October. Despite the military defeat, Poland never surrendered; instead it formed the Polish government-in-exile and a clandestine state apparatus remained in occupied Poland.[68] A significant part of Polish military personnel evacuated to Romania and the Baltic countries; many of them would fight against the Axis in other theatres of the war.[69]
70
+
71
+ Germany annexed the western and occupied the central part of Poland, and the Soviet Union annexed its eastern part; small shares of Polish territory were transferred to Lithuania and Slovakia. On 6 October, Hitler made a public peace overture to the United Kingdom and France but said that the future of Poland was to be determined exclusively by Germany and the Soviet Union. The proposal was rejected,[60] and Hitler ordered an immediate offensive against France,[70] which would be postponed until the spring of 1940 due to bad weather.[71][72][73]
72
+
73
+ The Soviet Union forced the Baltic countries—Estonia, Latvia and Lithuania, the states that were in the Soviet "sphere of influence" under the Molotov-Ribbentrop pact—to sign "mutual assistance pacts" that stipulated stationing Soviet troops in these countries. Soon after, significant Soviet military contingents were moved there.[74][75][76] Finland refused to sign a similar pact and rejected ceding part of its territory to the Soviet Union. The Soviet Union invaded Finland in November 1939,[77] and was subsequently expelled from the League of Nations.[78] Despite overwhelming numerical superiority, Soviet military success was modest, and the Finno-Soviet war ended in March 1940 with minimal Finnish concessions.[79]
74
+
75
+ In June 1940, the Soviet Union forcibly annexed Estonia, Latvia and Lithuania,[75] and the disputed Romanian regions of Bessarabia, northern Bukovina and Hertza. Meanwhile, Nazi-Soviet political rapprochement and economic co-operation[80][81] gradually stalled,[82][83] and both states began preparations for war.[84]
76
+
77
+ In April 1940, Germany invaded Denmark and Norway to protect shipments of iron ore from Sweden, which the Allies were attempting to cut off.[85] Denmark capitulated after a few hours, and Norway was conquered within two months[86] despite Allied support. British discontent over the Norwegian campaign led to the appointment of Winston Churchill as Prime Minister on 10 May 1940.[87]
78
+
79
+ On the same day, Germany launched an offensive against France. To circumvent the strong Maginot Line fortifications on the Franco-German border, Germany directed its attack at the neutral nations of Belgium, the Netherlands, and Luxembourg.[88] The Germans carried out a flanking manoeuvre through the Ardennes region,[89] which was mistakenly perceived by the Allies as an impenetrable natural barrier against armoured vehicles.[90][91] By successfully implementing new blitzkrieg tactics, the Wehrmacht rapidly advanced to the Channel and cut off the Allied forces in Belgium, trapping the bulk of the Allied armies in a cauldron on the Franco-Belgian border near Lille. The United Kingdom was able to evacuate a significant number of Allied troops from the continent by early June, although they abandoned almost all their equipment.[92]
80
+
81
+ On 10 June, Italy invaded France, declaring war on both France and the United Kingdom.[93] The Germans turned south against the weakened French army, and Paris fell to them on 14 June. Eight days later France signed an armistice with Germany; it was divided into German and Italian occupation zones,[94] and an unoccupied rump state under the Vichy Regime, which, though officially neutral, was generally aligned with Germany. France kept its fleet, which the United Kingdom attacked on 3 July in an attempt to prevent its seizure by Germany.[95]
82
+
83
+ The Battle of Britain[96] began in early July with Luftwaffe attacks on shipping and harbours.[97] The United Kingdom rejected Hitler's ultimatum,[98] and the German air superiority campaign started in August but failed to defeat RAF Fighter Command, forcing the indefinite postponement of the proposed German invasion of Britain. The German strategic bombing offensive intensified with night attacks on London and other cities in the Blitz, but failed to significantly disrupt the British war effort[97] and largely ended in May 1941.[99]
84
+
85
+ Using newly captured French ports, the German Navy enjoyed success against an over-extended Royal Navy, using U-boats against British shipping in the Atlantic.[100] The British Home Fleet scored a significant victory on 27 May 1941 by sinking the German battleship Bismarck.[101]
86
+
87
+ In November 1939, the United States was taking measures to assist China and the Western Allies, and amended the Neutrality Act to allow "cash and carry" purchases by the Allies.[102] In 1940, following the German capture of Paris, the size of the United States Navy was significantly increased. In September the United States further agreed to a trade of American destroyers for British bases.[103] Still, a large majority of the American public continued to oppose any direct military intervention in the conflict well into 1941.[104] In December 1940 Roosevelt accused Hitler of planning world conquest and ruled out any negotiations as useless, calling for the United States to become an "arsenal of democracy" and promoting Lend-Lease programmes of aid to support the British war effort.[98] The United States started strategic planning to prepare for a full-scale offensive against Germany.[105]
88
+
89
+ At the end of September 1940, the Tripartite Pact formally united Japan, Italy, and Germany as the Axis Powers. The Tripartite Pact stipulated that any country, with the exception of the Soviet Union, which attacked any Axis Power would be forced to go to war against all three.[106] The Axis expanded in November 1940 when Hungary, Slovakia and Romania joined.[107] Romania and Hungary would make major contributions to the Axis war against the Soviet Union, in Romania's case partially to recapture territory ceded to the Soviet Union.[108]
90
+
91
+ In early June 1940 the Italian Regia Aeronautica attacked and besieged Malta, a British possession. From late summer through early autumn, Italy conquered British Somaliland and made an incursion into British-held Egypt. In October Italy attacked Greece, but the attack was repulsed with heavy Italian casualties; the campaign ended within months with minor territorial changes.[109] Germany started preparation for an invasion of the Balkans to assist Italy, to prevent the British from gaining a foothold there, which would be a potential threat to the Romanian oil fields, and to strike against the British dominance of the Mediterranean.[110]
92
+
93
+ In December 1940, British Empire forces began counter-offensives against Italian forces in Egypt and Italian East Africa.[111] The offensives were highly successful; by early February 1941 Italy had lost control of eastern Libya, and large numbers of Italian troops had been taken prisoner. The Italian Navy also suffered significant defeats, with the Royal Navy putting three Italian battleships out of commission by a carrier attack at Taranto and neutralising several more warships at the Battle of Cape Matapan.[112]
94
+
95
+ Italian defeats prompted Germany to deploy an expeditionary force to North Africa, and at the end of March 1941 Rommel's Afrika Korps launched an offensive which drove back the Commonwealth forces.[113] In under a month, Axis forces advanced to western Egypt and besieged the port of Tobruk.[114]
96
+
97
+ By late March 1941 Bulgaria and Yugoslavia signed the Tripartite Pact; however, the Yugoslav government was overthrown two days later by pro-British nationalists. Germany responded with simultaneous invasions of both Yugoslavia and Greece, commencing on 6 April 1941; both nations were forced to surrender within the month.[115] The airborne invasion of the Greek island of Crete at the end of May completed the German conquest of the Balkans.[116] Although the Axis victory was swift, bitter and large-scale partisan warfare subsequently broke out against the Axis occupation of Yugoslavia, which continued until the end of the war.[117]
98
+
99
+ In the Middle East, in May Commonwealth forces quashed an uprising in Iraq which had been supported by German aircraft from bases within Vichy-controlled Syria.[118] Between June and July they invaded and occupied the French possessions Syria and Lebanon, with the assistance of the Free French.[119]
100
+
101
+ With the situation in Europe and Asia relatively stable, Germany, Japan, and the Soviet Union made preparations. With the Soviets wary of mounting tensions with Germany and the Japanese planning to take advantage of the European War by seizing resource-rich European possessions in Southeast Asia, the two powers signed the Soviet–Japanese Neutrality Pact in April 1941.[120] By contrast, the Germans were steadily making preparations for an attack on the Soviet Union, massing forces on the Soviet border.[121]
102
+
103
+ Hitler believed that the United Kingdom's refusal to end the war was based on the hope that the United States and the Soviet Union would enter the war against Germany sooner or later.[122] He, therefore, decided to try to strengthen Germany's relations with the Soviets, or failing that to attack and eliminate them as a factor. In November 1940, negotiations took place to determine if the Soviet Union would join the Tripartite Pact. The Soviets showed some interest but asked for concessions from Finland, Bulgaria, Turkey, and Japan that Germany considered unacceptable. On 18 December 1940, Hitler issued the directive to prepare for an invasion of the Soviet Union.[123]
104
+
105
+ On 22 June 1941, Germany, supported by Italy and Romania, invaded the Soviet Union in Operation Barbarossa, with Germany accusing the Soviets of plotting against them. They were joined shortly by Finland and Hungary.[124] The primary targets of this surprise offensive[125] were the Baltic region, Moscow and Ukraine, with the ultimate goal of ending the 1941 campaign near the Arkhangelsk-Astrakhan line, from the Caspian Sea to the White Sea. Hitler's objectives were to eliminate the Soviet Union as a military power, exterminate Communism, generate Lebensraum ("living space")[126] by dispossessing the native population[127] and guarantee access to the strategic resources needed to defeat Germany's remaining rivals.[128]
106
+
107
+ Although the Red Army was preparing for strategic counter-offensives before the war,[129] Barbarossa forced the Soviet supreme command to adopt a strategic defence. During the summer, the Axis made significant gains into Soviet territory, inflicting immense losses in both personnel and materiel. By mid-August, however, the German Army High Command decided to suspend the offensive of a considerably depleted Army Group Centre, and to divert the 2nd Panzer Group to reinforce troops advancing towards central Ukraine and Leningrad.[130] The Kiev offensive was overwhelmingly successful, resulting in encirclement and elimination of four Soviet armies, and made possible further advance into Crimea and industrially developed Eastern Ukraine (the First Battle of Kharkov).[131]
108
+
109
+ The diversion of three quarters of the Axis troops and the majority of their air forces from France and the central Mediterranean to the Eastern Front[132] prompted the United Kingdom to reconsider its grand strategy.[133] In July, the UK and the Soviet Union formed a military alliance against Germany[134] and in August, the United Kingdom and the United States jointly issued the Atlantic Charter, which outlined British and American goals for the postwar world.[135] In late August the British and Soviets invaded neutral Iran to secure the Persian Corridor and Iran's oil fields, and to preempt any Axis advances through Iran toward the Baku oil fields or British India.[136]
110
+
111
+ By October Axis operational objectives in Ukraine and the Baltic region were achieved, with only the sieges of Leningrad[137] and Sevastopol continuing.[138] A major offensive against Moscow was renewed; after two months of fierce battles in increasingly harsh weather, the German army almost reached the outer suburbs of Moscow, where the exhausted troops[139] were forced to suspend their offensive.[140] Large territorial gains were made by Axis forces, but their campaign had failed to achieve its main objectives: two key cities remained in Soviet hands, the Soviet capability to resist was not broken, and the Soviet Union retained a considerable part of its military potential. The blitzkrieg phase of the war in Europe had ended.[141]
112
+
113
+ By early December, freshly mobilised reserves[142] allowed the Soviets to achieve numerical parity with Axis troops.[143] This, as well as intelligence data which established that a minimal number of Soviet troops in the East would be sufficient to deter any attack by the Japanese Kwantung Army,[144] allowed the Soviets to begin a massive counter-offensive that started on 5 December all along the front and pushed German troops 100–250 kilometres (62–155 mi) west.[145]
114
+
115
+ Following the Japanese false flag Mukden Incident in 1931, the Japanese shelling of the American gunboat USS Panay in 1937, and the 1937–38 Nanjing Massacre, Japanese-American relations deteriorated. In 1939, the United States notified Japan that it would not be extending its trade treaty, and American public opinion opposing Japanese expansionism led to a series of economic sanctions, the Export Control Acts, which banned U.S. exports of chemicals, minerals and military parts to Japan and increased economic pressure on the Japanese regime.[98][146][147] During 1939 Japan launched its first attack against Changsha, a strategically important Chinese city, but was repulsed by late September.[148] Despite several offensives by both sides, the war between China and Japan was stalemated by 1940. To increase pressure on China by blocking supply routes, and to better position Japanese forces in the event of a war with the Western powers, Japan invaded and occupied northern Indochina in September 1940.[149]
116
+
117
+ Chinese nationalist forces launched a large-scale counter-offensive in early 1940. In August, Chinese communists launched an offensive in Central China; in retaliation, Japan instituted harsh measures in occupied areas to reduce human and material resources for the communists.[150] Continued antipathy between Chinese communist and nationalist forces culminated in armed clashes in January 1941, effectively ending their co-operation.[151] In March, the Japanese 11th Army attacked the headquarters of the Chinese 19th Army but was repulsed during the Battle of Shanggao.[152] In September, Japan attempted to take the city of Changsha again and clashed with Chinese nationalist forces.[153]
118
+
119
+ German successes in Europe encouraged Japan to increase pressure on European governments in Southeast Asia. The Dutch government agreed to provide Japan some oil supplies from the Dutch East Indies, but negotiations for additional access to their resources ended in failure in June 1941.[154] In July 1941 Japan sent troops to southern Indochina, thus threatening British and Dutch possessions in the Far East. The United States, United Kingdom, and other Western governments reacted to this move with a freeze on Japanese assets and a total oil embargo.[155][156] At the same time, Japan was planning an invasion of the Soviet Far East, intending to capitalise on the German invasion in the west, but abandoned the operation after the sanctions.[157]
120
+
121
+ Since early 1941 the United States and Japan had been engaged in negotiations in an attempt to improve their strained relations and end the war in China. During these negotiations, Japan advanced a number of proposals which were dismissed by the Americans as inadequate.[158] At the same time the United States, the United Kingdom, and the Netherlands engaged in secret discussions for the joint defence of their territories, in the event of a Japanese attack against any of them.[159] Roosevelt reinforced the Philippines (an American protectorate scheduled for independence in 1946) and warned Japan that the United States would react to Japanese attacks against any "neighboring countries".[159]
122
+
123
+ Frustrated at the lack of progress and feeling the pinch of the American–British–Dutch sanctions, Japan prepared for war. On 20 November, a new government under Hideki Tojo presented an interim proposal as its final offer. It called for the end of American aid to China and for lifting the embargo on the supply of oil and other resources to Japan. In exchange, Japan promised not to launch any attacks in Southeast Asia and to withdraw its forces from southern Indochina.[158] The American counter-proposal of 26 November required that Japan evacuate all of China without conditions and conclude non-aggression pacts with all Pacific powers.[160] That meant Japan was essentially forced to choose between abandoning its ambitions in China, or seizing the natural resources it needed in the Dutch East Indies by force;[161][162] the Japanese military did not consider the former an option, and many officers considered the oil embargo an unspoken declaration of war.[163]
124
+
125
+ Japan planned to rapidly seize European colonies in Asia to create a large defensive perimeter stretching into the Central Pacific. The Japanese would then be free to exploit the resources of Southeast Asia while exhausting the over-stretched Allies by fighting a defensive war.[164][165] To prevent American intervention while securing the perimeter, it was further planned to neutralise the United States Pacific Fleet and the American military presence in the Philippines from the outset.[166] On 7 December 1941 (8 December in Asian time zones), Japan attacked British and American holdings with near-simultaneous offensives against Southeast Asia and the Central Pacific.[167] These included an attack on the American fleets at Pearl Harbor and the Philippines, landings in Malaya[167] and Thailand, and the Battle of Hong Kong.[168]
126
+
127
+ The Japanese invasion of Thailand led to Thailand's decision to ally itself with Japan and the other Japanese attacks led the United States, United Kingdom, China, Australia, and several other states to formally declare war on Japan, whereas the Soviet Union, being heavily involved in large-scale hostilities with European Axis countries, maintained its neutrality agreement with Japan.[169] Germany, followed by the other Axis states, declared war on the United States[170] in solidarity with Japan, citing as justification the American attacks on German war vessels that had been ordered by Roosevelt.[124][171]
128
+
129
+ On 1 January 1942, the Allied Big Four[172]—the Soviet Union, China, the United Kingdom and the United States—and 22 smaller or exiled governments issued the Declaration by United Nations, thereby affirming the Atlantic Charter,[173] and agreeing not to sign a separate peace with the Axis powers.[174]
130
+
131
+ During 1942, Allied officials debated the appropriate grand strategy to pursue. All agreed that defeating Germany was the primary objective. The Americans favoured a straightforward, large-scale attack on Germany through France. The Soviets were also demanding a second front. The British, on the other hand, argued that military operations should target peripheral areas to wear out German strength, leading to increasing demoralisation, and to bolster resistance forces. Germany itself would be subject to a heavy bombing campaign. An offensive against Germany would then be launched primarily by Allied armour without using large-scale armies.[175] Eventually, the British persuaded the Americans that a landing in France was infeasible in 1942 and they should instead focus on driving the Axis out of North Africa.[176]
132
+
133
+ At the Casablanca Conference in early 1943, the Allies reiterated the statements issued in the 1942 Declaration, and demanded the unconditional surrender of their enemies. The British and Americans agreed to continue to press the initiative in the Mediterranean by invading Sicily to fully secure the Mediterranean supply routes.[177] Although the British argued for further operations in the Balkans to bring Turkey into the war, in May 1943, the Americans extracted a British commitment to limit Allied operations in the Mediterranean to an invasion of the Italian mainland and to invade France in 1944.[178]
134
+
135
+ By the end of April 1942, Japan and its ally Thailand had almost fully conquered Burma, Malaya, the Dutch East Indies, Singapore, and Rabaul, inflicting severe losses on Allied troops and taking a large number of prisoners.[179] Despite stubborn resistance by Filipino and US forces, the Philippine Commonwealth was eventually captured in May 1942, forcing its government into exile.[180] On 16 April, in Burma, 7,000 British soldiers were encircled by the Japanese 33rd Division during the Battle of Yenangyaung and rescued by the Chinese 38th Division.[181] Japanese forces also achieved naval victories in the South China Sea, Java Sea and Indian Ocean,[182] and bombed the Allied naval base at Darwin, Australia. In January 1942, the only Allied success against Japan was a Chinese victory at Changsha.[183] These easy victories over unprepared US and European opponents left Japan overconfident, as well as overextended.[184]
136
+
137
+ In early May 1942, Japan initiated operations to capture Port Moresby by amphibious assault and thus sever communications and supply lines between the United States and Australia. The planned invasion was thwarted when an Allied task force, centred on two American fleet carriers, fought Japanese naval forces to a draw in the Battle of the Coral Sea.[185] Japan's next plan, motivated by the earlier Doolittle Raid, was to seize Midway Atoll and lure American carriers into battle to be eliminated; as a diversion, Japan would also send forces to occupy the Aleutian Islands in Alaska.[186] In mid-May, Japan started the Zhejiang-Jiangxi Campaign in China, with the goal of inflicting retribution on the Chinese who aided the surviving American airmen in the Doolittle Raid by destroying air bases and fighting against the Chinese 23rd and 32nd Army Groups.[187][188] In early June, Japan put its operations into action, but the Americans, having broken Japanese naval codes in late May, were fully aware of the plans and order of battle, and used this knowledge to achieve a decisive victory at Midway over the Imperial Japanese Navy.[189]
138
+
139
+ With its capacity for aggressive action greatly diminished as a result of the Midway battle, Japan chose to focus on a belated attempt to capture Port Moresby by an overland campaign in the Territory of Papua.[190] The Americans planned a counter-attack against Japanese positions in the southern Solomon Islands, primarily Guadalcanal, as a first step towards capturing Rabaul, the main Japanese base in Southeast Asia.[191]
140
+
141
+ Both plans started in July, but by mid-September, the Battle for Guadalcanal took priority for the Japanese, and troops in New Guinea were ordered to withdraw from the Port Moresby area to the northern part of the island, where they faced Australian and United States troops in the Battle of Buna-Gona.[192] Guadalcanal soon became a focal point for both sides, with heavy commitments of troops and ships. By the start of 1943, the Japanese were defeated on the island and withdrew their troops.[193] In Burma, Commonwealth forces mounted two operations. The first, an offensive into the Arakan region in late 1942, went disastrously, forcing a retreat back to India by May 1943.[194] The second was the insertion of irregular forces behind Japanese front-lines in February which, by the end of April, had achieved mixed results.[195]
142
+
143
+ Despite considerable losses, in early 1942 Germany and its allies stopped a major Soviet offensive in central and southern Russia, keeping most territorial gains they had achieved during the previous year.[196] In May the Germans defeated Soviet offensives in the Kerch Peninsula and at Kharkov,[197] and then launched their main summer offensive against southern Russia in June 1942, to seize the oil fields of the Caucasus and occupy Kuban steppe, while maintaining positions on the northern and central areas of the front. The Germans split Army Group South into two groups: Army Group A advanced to the lower Don River and struck south-east to the Caucasus, while Army Group B headed towards the Volga River. The Soviets decided to make their stand at Stalingrad on the Volga.[198]
144
+
145
+ By mid-November, the Germans had nearly taken Stalingrad in bitter street fighting. The Soviets began their second winter counter-offensive, starting with an encirclement of German forces at Stalingrad,[199] and an assault on the Rzhev salient near Moscow, though the latter failed disastrously.[200] By early February 1943, the German Army had taken tremendous losses; German troops at Stalingrad had been defeated,[201] and the front-line had been pushed back beyond its position before the summer offensive. In mid-February, after the Soviet push had tapered off, the Germans launched another attack on Kharkov, creating a salient in their front line around the Soviet city of Kursk.[202]
146
+
147
+ Exploiting poor American naval command decisions, the German navy ravaged Allied shipping off the American Atlantic coast.[203] By November 1941, Commonwealth forces had launched a counter-offensive, Operation Crusader, in North Africa, and reclaimed all the gains the Germans and Italians had made.[204] In North Africa, the Germans launched an offensive in January, pushing the British back to positions at the Gazala Line by early February,[205] followed by a temporary lull in combat which Germany used to prepare for their upcoming offensives.[206] Concerns that the Japanese might use bases in Vichy-held Madagascar caused the British to invade the island in early May 1942.[207] An Axis offensive in Libya forced an Allied retreat deep inside Egypt until Axis forces were stopped at El Alamein.[208] On the Continent, raids of Allied commandos on strategic targets, culminating in the disastrous Dieppe Raid,[209] demonstrated the Western Allies' inability to launch an invasion of continental Europe without much better preparation, equipment, and operational security.[210]
148
+
149
+ In August 1942, the Allies succeeded in repelling a second attack against El Alamein[211] and, at a high cost, managed to deliver desperately needed supplies to besieged Malta.[212] A few months later, the Allies commenced an attack of their own in Egypt, dislodging the Axis forces and beginning a drive west across Libya.[213] This attack was followed up shortly after by Anglo-American landings in French North Africa, which resulted in the region joining the Allies.[214] Hitler responded to the French colony's defection by ordering the occupation of Vichy France;[214] although Vichy forces did not resist this violation of the armistice, they managed to scuttle their fleet to prevent its capture by German forces.[214][215] The Axis forces in Africa withdrew into Tunisia, which was conquered by the Allies in May 1943.[214][216]
150
+
151
+ In June 1943 the British and Americans began a strategic bombing campaign against Germany with the goals of disrupting the war economy, reducing morale, and "de-housing" the civilian population.[217] The firebombing of Hamburg was among the first attacks in this campaign, inflicting significant casualties and considerable losses on infrastructure of this important industrial centre.[218]
152
+
153
+ After the Guadalcanal Campaign, the Allies initiated several operations against Japan in the Pacific. In May 1943, Canadian and US forces were sent to eliminate Japanese forces from the Aleutians.[219] Soon after, the United States, with support from Australia, New Zealand and Pacific Islander forces, began major ground, sea and air operations to isolate Rabaul by capturing surrounding islands, and breach the Japanese Central Pacific perimeter at the Gilbert and Marshall Islands.[220] By the end of March 1944, the Allies had completed both of these objectives and had also neutralised the major Japanese base at Truk in the Caroline Islands. In April, the Allies launched an operation to retake Western New Guinea.[221]
154
+
155
+ In the Soviet Union, both the Germans and the Soviets spent the spring and early summer of 1943 preparing for large offensives in central Russia. On 4 July 1943, Germany attacked Soviet forces around the Kursk Bulge. Within a week, German forces had exhausted themselves against the Soviets' deeply echeloned and well-constructed defences,[222] and for the first time in the war Hitler cancelled the operation before it had achieved tactical or operational success.[223] This decision was partially affected by the Western Allies' invasion of Sicily launched on 9 July, which, combined with previous Italian failures, resulted in the ousting and arrest of Mussolini later that month.[224]
156
+
157
+ On 12 July 1943, the Soviets launched their own counter-offensives, thereby dispelling any chance of German victory or even stalemate in the east. The Soviet victory at Kursk marked the end of German superiority,[225] giving the Soviet Union the initiative on the Eastern Front.[226][227] The Germans tried to stabilise their eastern front along the hastily fortified Panther–Wotan line, but the Soviets broke through it at Smolensk and in the Lower Dnieper Offensives.[228]
158
+
159
+ On 3 September 1943, the Western Allies invaded the Italian mainland, following Italy's armistice with the Allies.[229] Germany, with the help of Italian fascists, responded by disarming Italian forces, which in many places had been left without orders from above, seizing military control of Italian areas,[230] and creating a series of defensive lines.[231] German special forces then rescued Mussolini, who soon established a new client state in German-occupied Italy, the Italian Social Republic,[232] precipitating an Italian civil war. The Western Allies fought through several lines until reaching the main German defensive line in mid-November.[233]
160
+
161
+ German operations in the Atlantic also suffered. By May 1943, as Allied counter-measures became increasingly effective, sizeable German submarine losses forced a temporary halt of the German Atlantic naval campaign.[234] In November 1943, Franklin D. Roosevelt and Winston Churchill met with Chiang Kai-shek in Cairo and then with Joseph Stalin in Tehran.[235] The former conference determined the post-war return of Japanese territory[236] and the military planning for the Burma Campaign,[237] while the latter included agreement that the Western Allies would invade Europe in 1944 and that the Soviet Union would declare war on Japan within three months of Germany's defeat.[238]
162
+
163
+ From November 1943, during the seven-week Battle of Changde, the Chinese forced Japan to fight a costly war of attrition, while awaiting Allied relief.[239][240][241] In January 1944, the Allies launched a series of attacks in Italy against the line at Monte Cassino and tried to outflank it with landings at Anzio.[242]
164
+
165
+ On 27 January 1944, Soviet troops launched a major offensive that expelled German forces from the Leningrad region, thereby ending the most lethal siege in history.[243] The following Soviet offensive was halted at the pre-war Estonian border by the German Army Group North, aided by Estonians hoping to re-establish national independence. This delay slowed subsequent Soviet operations in the Baltic Sea region.[244] By late May 1944, the Soviets had liberated Crimea, largely expelled Axis forces from Ukraine, and made incursions into Romania, which were repulsed by Axis troops.[245] The Allied offensives in Italy had succeeded and, at the cost of allowing several German divisions to escape, Rome was captured on 4 June.[246]
166
+
167
+ The Allies had mixed success in mainland Asia. In March 1944, the Japanese launched the first of two invasions, an operation against British positions in Assam, India,[247] and soon besieged Commonwealth positions at Imphal and Kohima.[248] In May 1944, British forces mounted a counter-offensive that drove Japanese troops back to Burma by July,[248] and Chinese forces that had invaded northern Burma in late 1943 besieged Japanese troops in Myitkyina.[249] The second Japanese invasion of China aimed to destroy China's main fighting forces, secure railways between Japanese-held territory and capture Allied airfields.[250] By June, the Japanese had conquered the province of Henan and begun a new attack on Changsha in Hunan province.[251]
168
+
169
+ On 6 June 1944 (known as D-Day), after three years of Soviet pressure,[252] the Western Allies invaded northern France. After reassigning several Allied divisions from Italy, they also attacked southern France.[253] These landings were successful and led to the defeat of the German Army units in France. Paris was liberated on 25 August by the local resistance assisted by the Free French Forces, both led by General Charles de Gaulle,[254] and the Western Allies continued to push back German forces in western Europe during the latter part of the year. An attempt to advance into northern Germany, spearheaded by a major airborne operation in the Netherlands, failed.[255] After that, the Western Allies slowly pushed into Germany, but failed to cross the Rur river in a large offensive. In Italy, the Allied advance also slowed as it ran up against the last major German defensive line.[256]
170
+
171
+ On 22 June, the Soviets launched a strategic offensive in Belarus ("Operation Bagration") that destroyed the German Army Group Centre almost completely.[257] Soon after that, another Soviet strategic offensive forced German troops from Western Ukraine and Eastern Poland. The Soviets formed the Polish Committee of National Liberation to control territory in Poland and combat the Polish Armia Krajowa. The Soviet Red Army remained in the Praga district on the other side of the Vistula and watched passively as the Germans quelled the Warsaw Uprising initiated by the Armia Krajowa.[258] The national uprising in Slovakia was also quelled by the Germans.[259] The Soviet Red Army's strategic offensive in eastern Romania cut off and destroyed the considerable German troops there and triggered successful coups d'état in Romania and Bulgaria, followed by those countries' shift to the Allied side.[260]
172
+
173
+ In September 1944, Soviet troops advanced into Yugoslavia and forced the rapid withdrawal of German Army Groups E and F from Greece, Albania and Yugoslavia to save them from being cut off.[261] By this point, the Communist-led Partisans under Marshal Josip Broz Tito, who had led an increasingly successful guerrilla campaign against the occupation since 1941, controlled much of the territory of Yugoslavia and engaged in delaying efforts against German forces further south. In northern Serbia, the Soviet Red Army, with limited support from Bulgarian forces, assisted the Partisans in a joint liberation of the capital city of Belgrade on 20 October. A few days later, the Soviets launched a massive assault against German-occupied Hungary that lasted until the fall of Budapest in February 1945.[262] In contrast to the impressive Soviet victories in the Balkans, bitter Finnish resistance to the Soviet offensive in the Karelian Isthmus denied the Soviets occupation of Finland and led to a Soviet-Finnish armistice on relatively mild terms,[263] although Finland was subsequently forced to fight its former ally Germany.[264][broken footnote]
174
+
175
+ By the start of July 1944, Commonwealth forces in Southeast Asia had repelled the Japanese sieges in Assam, pushing the Japanese back to the Chindwin River[265] while the Chinese captured Myitkyina. In September 1944, Chinese forces captured Mount Song and reopened the Burma Road.[266] In China, the Japanese had more successes, having finally captured Changsha in mid-June and the city of Hengyang by early August.[267] Soon after, they invaded the province of Guangxi, winning major engagements against Chinese forces at Guilin and Liuzhou by the end of November[268] and successfully linking up their forces in China and Indochina by mid-December.[269]
176
+
177
+ In the Pacific, US forces continued to press back the Japanese perimeter. In mid-June 1944, they began their offensive against the Mariana and Palau islands, and decisively defeated Japanese forces in the Battle of the Philippine Sea. These defeats led to the resignation of the Japanese Prime Minister, Hideki Tojo, and provided the United States with air bases from which to launch intensive heavy bomber attacks on the Japanese home islands. In late October, American forces invaded the Philippine island of Leyte; soon after, Allied naval forces scored another large victory in the Battle of Leyte Gulf, one of the largest naval battles in history.[270]
178
+
179
+ On 16 December 1944, Germany made a last attempt on the Western Front by using most of its remaining reserves to launch a massive counter-offensive in the Ardennes and along the French-German border, seeking to split the Western Allies, encircle large portions of Western Allied troops, and capture their primary supply port at Antwerp to prompt a political settlement.[271] By January, the offensive had been repulsed with no strategic objectives fulfilled.[271] In Italy, the Western Allies remained stalemated at the German defensive line. In mid-January 1945, the Soviets and Poles attacked in Poland, pushing from the Vistula to the Oder river in Germany, and overran East Prussia.[272] On 4 February Soviet, British, and US leaders met for the Yalta Conference. They agreed on the occupation of post-war Germany, and on when the Soviet Union would join the war against Japan.[273]
180
+
181
+ In February, the Soviets entered Silesia and Pomerania, while the Western Allies entered western Germany and closed to the Rhine river. By March, the Western Allies had crossed the Rhine north and south of the Ruhr, encircling the German Army Group B.[274] In early March, in an attempt to protect its last oil reserves in Hungary and to retake Budapest, Germany launched its last major offensive against Soviet troops near Lake Balaton. Within two weeks the offensive had been repulsed, and the Soviets advanced to Vienna and captured the city. In early April, Soviet troops captured Königsberg, while the Western Allies finally pushed forward in Italy and swept across western Germany, capturing Hamburg and Nuremberg. American and Soviet forces met at the Elbe river on 25 April, leaving several unoccupied pockets in southern Germany and around Berlin.
182
+
183
+ Soviet and Polish forces stormed and captured Berlin in late April. In Italy, German forces surrendered on 29 April. On 30 April, the Reichstag was captured, signalling the military defeat of Nazi Germany,[275] and the Berlin garrison surrendered on 2 May.
184
+
185
+ Several changes in leadership occurred during this period. On 12 April, President Roosevelt died and was succeeded by Harry S. Truman. Benito Mussolini was killed by Italian partisans on 28 April.[276] Two days later, Hitler committed suicide in besieged Berlin, and he was succeeded by Grand Admiral Karl Dönitz.[277]
186
+ Germany's total and unconditional surrender in Europe was signed on 7 and 8 May, to take effect by the end of 8 May.[278] German Army Group Centre resisted in Prague until 11 May.[279]
187
+
188
+ In the Pacific theatre, American forces accompanied by the forces of the Philippine Commonwealth advanced in the Philippines, clearing Leyte by the end of April 1945. They landed on Luzon in January 1945 and recaptured Manila in March. Fighting continued on Luzon, Mindanao, and other islands of the Philippines until the end of the war.[280] Meanwhile, the United States Army Air Forces launched a massive firebombing campaign of strategic cities in Japan in an effort to destroy Japanese war industry and civilian morale. A devastating bombing raid on Tokyo of 9–10 March was the deadliest conventional bombing raid in history.[281]
189
+
190
+ In May 1945, Australian troops landed in Borneo, overrunning the oilfields there. British, American, and Chinese forces defeated the Japanese in northern Burma in March, and the British pushed on to reach Rangoon by 3 May.[282] Chinese forces launched a counterattack in the Battle of West Hunan, fought between 6 April and 7 June 1945. American naval and amphibious forces also moved towards Japan, taking Iwo Jima by March, and Okinawa by the end of June.[283] At the same time, American submarines cut off Japanese imports, drastically reducing Japan's ability to supply its overseas forces.[284]
191
+
192
+ On 11 July, Allied leaders met in Potsdam, Germany. They confirmed earlier agreements about Germany,[285] and the American, British and Chinese governments reiterated the demand for unconditional surrender of Japan, specifically stating that "the alternative for Japan is prompt and utter destruction".[286] During this conference, the United Kingdom held its general election, and Clement Attlee replaced Churchill as Prime Minister.[287]
193
+
194
+ The call for unconditional surrender was rejected by the Japanese government, which believed it would be capable of negotiating for more favourable surrender terms.[288] In early August, the United States dropped atomic bombs on the Japanese cities of Hiroshima and Nagasaki. Between the two bombings, the Soviets, pursuant to the Yalta agreement, invaded Japanese-held Manchuria and quickly defeated the Kwantung Army, which was the largest Japanese fighting force.[289] These two events persuaded previously adamant Imperial Army leaders to accept surrender terms.[290] The Red Army also captured the southern part of Sakhalin Island and the Kuril Islands. On 15 August 1945, Japan surrendered, with the surrender documents finally signed on 2 September 1945 aboard the American battleship USS Missouri in Tokyo Bay, ending the war.[291]
195
+
196
+ The Allies established occupation administrations in Austria and Germany. The former became a neutral state, non-aligned with any political bloc. The latter was divided into western and eastern occupation zones controlled by the Western Allies and the Soviet Union. A denazification programme in Germany led to the prosecution of Nazi war criminals in the Nuremberg trials and the removal of ex-Nazis from power, although this policy moved towards amnesty and re-integration of ex-Nazis into West German society.[292]
197
+
198
+ Germany lost a quarter of its pre-war (1937) territory. Among the eastern territories, Silesia, Neumark and most of Pomerania were taken over by Poland,[293] and East Prussia was divided between Poland and the Soviet Union, followed by the expulsion to Germany of the nine million Germans from these provinces,[294][295] as well as three million Germans from the Sudetenland in Czechoslovakia. By the 1950s, one-fifth of West Germans were refugees from the east. The Soviet Union also took over the Polish provinces east of the Curzon line,[296] from which 2 million Poles were expelled.[295][297] North-east Romania,[298][299] parts of eastern Finland,[300] and the three Baltic states were likewise incorporated into the Soviet Union.[301][302]
199
+
200
+ In an effort to maintain world peace,[303] the Allies formed the United Nations, which officially came into existence on 24 October 1945,[304] and adopted the Universal Declaration of Human Rights in 1948 as a common standard for all member nations.[305] The great powers that were the victors of the war—France, China, the United Kingdom, the Soviet Union and the United States—became the permanent members of the UN's Security Council.[306] The five permanent members remain so to the present, although there have been two seat changes, between the Republic of China and the People's Republic of China in 1971, and between the Soviet Union and its successor state, the Russian Federation, following the dissolution of the Soviet Union in 1991. The alliance between the Western Allies and the Soviet Union had begun to deteriorate even before the war was over.[307]
201
+
202
+ Germany had been de facto divided, and two independent states, the Federal Republic of Germany (West Germany) and the German Democratic Republic (East Germany),[308] were created within the borders of Allied and Soviet occupation zones. The rest of Europe was also divided into Western and Soviet spheres of influence.[309] Most eastern and central European countries fell into the Soviet sphere, which led to the establishment of Communist-led regimes, with full or partial support of the Soviet occupation authorities. As a result, East Germany,[310] Poland, Hungary, Romania, Czechoslovakia, and Albania[311] became Soviet satellite states. Communist Yugoslavia conducted a fully independent policy, causing tension with the Soviet Union.[312]
203
+
204
+ Post-war division of the world was formalised by two international military alliances, the United States-led NATO and the Soviet-led Warsaw Pact.[313] The long period of political tensions and military competition between them, the Cold War, would be accompanied by an unprecedented arms race and proxy wars.[314]
205
+
206
+ In Asia, the United States led the occupation of Japan and administered Japan's former islands in the Western Pacific, while the Soviets annexed South Sakhalin and the Kuril Islands.[315] Korea, formerly under Japanese rule, was divided and occupied by the Soviet Union in the North and the United States in the South between 1945 and 1948. Separate republics emerged on both sides of the 38th parallel in 1948, each claiming to be the legitimate government for all of Korea, which led ultimately to the Korean War.[316]
207
+
208
+ In China, nationalist and communist forces resumed the civil war in June 1946. Communist forces were victorious and established the People's Republic of China on the mainland, while nationalist forces retreated to Taiwan in 1949.[317] In the Middle East, the Arab rejection of the United Nations Partition Plan for Palestine and the creation of Israel marked the escalation of the Arab–Israeli conflict. While European powers attempted to retain some or all of their colonial empires, their losses of prestige and resources during the war rendered this unsuccessful, leading to decolonisation.[318][319]
209
+
210
+ The global economy suffered heavily from the war, although participating nations were affected differently. The United States emerged much richer than any other nation and experienced a baby boom; by 1950 its gross domestic product per person was much higher than that of any of the other powers, and it dominated the world economy.[320] The UK and US pursued a policy of industrial disarmament in Western Germany in the years 1945–1948.[321] Because of international trade interdependencies, this led to European economic stagnation and delayed European recovery for several years.[322][323]
211
+
212
+ Recovery began with the mid-1948 currency reform in Western Germany, and was sped up by the liberalisation of European economic policy that the Marshall Plan (1948–1951) both directly and indirectly caused.[324][325] The post-1948 West German recovery has been called the German economic miracle.[326] Italy also experienced an economic boom[327] and the French economy rebounded.[328] By contrast, the United Kingdom was in a state of economic ruin,[329] and although it received a quarter of the total Marshall Plan assistance, more than any other European country,[330] it continued in relative economic decline for decades.[331]
213
+
214
+ The Soviet Union, despite enormous human and material losses, also experienced a rapid increase in production in the immediate post-war era.[332] Japan recovered much later.[333] China returned to its pre-war industrial production by 1952.[334]
215
+
216
+ Estimates for the total number of casualties in the war vary, because many deaths went unrecorded.[335] Most suggest that some 60 million people died in the war, including about 20 million military personnel and 40 million civilians.[336][337][338]
217
+ Many of the civilians died because of deliberate genocide, massacres, mass bombings, disease, and starvation.
218
+
219
+ The Soviet Union alone lost around 27 million people during the war,[339] including 8.7 million military and 19 million civilian deaths.[340] A quarter of the people in the Soviet Union were wounded or killed.[341] Germany sustained 5.3 million military losses, mostly on the Eastern Front and during the final battles in Germany.[342]
220
+
221
+ An estimated 11[343] to 17 million[344] civilians died as a direct or indirect result of Nazi racist policies, including the mass killing of around 6 million Jews, along with Roma, homosexuals, at least 1.9 million ethnic Poles[345][346] and millions of other Slavs (including Russians, Ukrainians and Belarusians), and other ethnic and minority groups.[347][344] Between 1941 and 1945, more than 200,000 ethnic Serbs, along with Roma and Jews, were persecuted and murdered by the Axis-aligned Croatian Ustaše in Yugoslavia.[348] Also, more than 100,000 Poles were massacred by the Ukrainian Insurgent Army in the Volhynia massacres between 1943 and 1945.[349] At the same time, about 10,000–15,000 Ukrainians were killed by the Polish Home Army and other Polish units in reprisal attacks.[350]
222
+
223
+ In Asia and the Pacific, between 3 million and more than 10 million civilians, mostly Chinese (estimated at 7.5 million[351]), were killed by the Japanese occupation forces.[352] The most infamous Japanese atrocity was the Nanking Massacre, in which fifty to three hundred thousand Chinese civilians were raped and murdered.[353] Mitsuyoshi Himeta reported that 2.7 million casualties occurred during the Sankō Sakusen. General Yasuji Okamura implemented the policy in Hopei and Shantung.[354]
224
+
225
+ Axis forces employed biological and chemical weapons. The Imperial Japanese Army used a variety of such weapons during its invasion and occupation of China (see Unit 731)[355][356] and in early conflicts against the Soviets.[357] Both the Germans and the Japanese tested such weapons against civilians,[358] and sometimes on prisoners of war.[359]
226
+
227
+ The Soviet Union was responsible for the Katyn massacre of 22,000 Polish officers,[360] and the imprisonment or execution of thousands of political prisoners by the NKVD, along with mass civilian deportations to Siberia, in the Baltic states and eastern Poland annexed by the Red Army.[361]
228
+
229
+ The mass bombing of cities in Europe and Asia has often been called a war crime, although no positive or specific customary international humanitarian law with respect to aerial warfare existed before or during World War II.[362] The USAAF firebombed a total of 67 Japanese cities, killing 393,000 civilians and destroying 65% of built-up areas.[363]
230
+
231
+ Nazi Germany was responsible for the Holocaust (which killed approximately 6 million Jews) as well as for killing 2.7 million ethnic Poles[364] and 4 million others who were deemed "unworthy of life" (including the disabled and mentally ill, Soviet prisoners of war, Romani, homosexuals, Freemasons, and Jehovah's Witnesses) as part of a programme of deliberate extermination, in effect becoming a "genocidal state".[365] Soviet POWs were kept in especially unbearable conditions, and 3.6 million out of 5.7 million Soviet POWs died in Nazi camps during the war.[366][367] In addition to concentration camps, death camps were created in Nazi Germany to exterminate people on an industrial scale. Nazi Germany extensively used forced labourers; about 12 million Europeans from German-occupied countries were abducted and used as a slave work force in German industry, agriculture and the war economy.[368]
232
+
233
+ The Soviet Gulag became a de facto system of deadly camps during 1942–43, when wartime privation and hunger caused numerous deaths of inmates,[369] including foreign citizens of Poland and other countries occupied in 1939–40 by the Soviet Union, as well as Axis POWs.[370] By the end of the war, most Soviet POWs liberated from Nazi camps and many repatriated civilians were detained in special filtration camps where they were subjected to NKVD evaluation, and 226,127 were sent to the Gulag as real or perceived Nazi collaborators.[371]
234
+
235
+ Japanese prisoner-of-war camps, many of which were used as labour camps, also had high death rates. The International Military Tribunal for the Far East found the death rate of Western prisoners was 27 per cent (for American POWs, 37 per cent),[372] seven times that of POWs under the Germans and Italians.[373] While 37,583 prisoners from the UK, 28,500 from the Netherlands, and 14,473 from the United States were released after the surrender of Japan, the number of Chinese released was only 56.[374]
236
+
237
+ At least five million Chinese civilians from northern China and Manchukuo were enslaved between 1935 and 1941 by the East Asia Development Board, or Kōain, for work in mines and war industries. After 1942, the number reached 10 million.[375] In Java, between 4 and 10 million rōmusha (Japanese: "manual labourers") were forced to work by the Japanese military. About 270,000 of these Javanese labourers were sent to other Japanese-held areas in South East Asia, and only 52,000 were repatriated to Java.[376]
238
+
239
+ In Europe, occupation took two forms. In Western, Northern, and Central Europe (France, Norway, Denmark, the Low Countries, and the annexed portions of Czechoslovakia) Germany established economic policies through which it collected roughly 69.5 billion Reichsmarks (27.8 billion US dollars) by the end of the war; this figure does not include the sizeable plunder of industrial products, military equipment, raw materials and other goods.[377] Thus, the income from occupied nations was over 40 per cent of the income Germany collected from taxation, a figure which increased to nearly 40 per cent of total German income as the war went on.[378]
240
+
241
+ In the East, the intended gains of Lebensraum were never attained as fluctuating front-lines and Soviet scorched earth policies denied resources to the German invaders.[379] Unlike in the West, the Nazi racial policy encouraged extreme brutality against what it considered to be the "inferior people" of Slavic descent; most German advances were thus followed by mass executions.[380] Although resistance groups formed in most occupied territories, they did not significantly hamper German operations in either the East[381] or the West[382] until late 1943.
242
+
243
+ In Asia, Japan termed the nations under its occupation part of the Greater East Asia Co-Prosperity Sphere, essentially a Japanese hegemony which it claimed was for the purpose of liberating colonised peoples.[383] Although Japanese forces were sometimes welcomed as liberators from European domination, Japanese war crimes frequently turned local public opinion against them.[384] During Japan's initial conquest, it captured 4,000,000 barrels (640,000 m³) of oil (~5.5×10^5 t) left behind by retreating Allied forces, and by 1943 was able to get production in the Dutch East Indies up to 50 million barrels (~6.8×10^6 t), 76 per cent of its 1940 output rate.[384]
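+
+ As a rough check of the unit conversions quoted above, the following Python sketch reproduces the figures; the two conversion factors (about 0.159 m³ per barrel and a crude-oil density of roughly 0.87 t/m³) are common approximations assumed here, not values given in the text:
+
+ # Rough check of the oil figures quoted in the paragraph above.
+ # Both conversion factors below are assumptions, not sourced values.
+ BARREL_M3 = 0.159          # approximate volume of one oil barrel, in cubic metres
+ DENSITY_T_PER_M3 = 0.87    # approximate density of crude oil, in tonnes per cubic metre
+
+ def barrels_to_tonnes(barrels: float) -> float:
+     """Convert barrels of crude oil to metric tonnes under the assumptions above."""
+     return barrels * BARREL_M3 * DENSITY_T_PER_M3
+
+ print(f"{4_000_000 * BARREL_M3:,.0f} m^3")       # ~636,000 m^3 (text: 640,000 m^3)
+ print(f"{barrels_to_tonnes(4_000_000):.2e} t")   # ~5.5e+05 t  (text: ~5.5x10^5 t)
+ print(f"{barrels_to_tonnes(50_000_000):.2e} t")  # ~6.9e+06 t  (text: ~6.8x10^6 t)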
244
+
245
+ In Europe, before the outbreak of the war, the Allies had significant advantages in both population and economics. In 1938, the Western Allies (United Kingdom, France, Poland and the British Dominions) had a 30 per cent larger population and a 30 per cent higher gross domestic product than the European Axis powers (Germany and Italy); if colonies are included, the Allies had more than a 5:1 advantage in population and a nearly 2:1 advantage in GDP.[385] In Asia at the same time, China had roughly six times the population of Japan but only an 89 per cent higher GDP; this is reduced to three times the population and only a 38 per cent higher GDP if Japanese colonies are included.[385]
246
+
247
+ The United States produced about two-thirds of all the munitions used by the Allies in WWII, including warships, transports, warplanes, artillery, tanks, trucks, and ammunition.[386]
248
+ Though the Allies' economic and population advantages were largely mitigated during the initial rapid blitzkrieg attacks by Germany and Japan, they became the decisive factor by 1942, after the United States and Soviet Union joined the Allies, as the war largely settled into one of attrition.[387] While the Allies' ability to out-produce the Axis is often attributed[by whom?] to the Allies having more access to natural resources, other factors, such as Germany and Japan's reluctance to employ women in the labour force,[388] Allied strategic bombing,[389] and Germany's late shift to a war economy[390] contributed significantly. Additionally, neither Germany nor Japan planned to fight a protracted war, and had not equipped themselves to do so.[391] To improve their production, Germany and Japan used millions of slave labourers;[392] Germany used about 12 million people, mostly from Eastern Europe,[368] while Japan used more than 18 million people in Far East Asia.[375][376]
249
+
250
+ Aircraft were used for reconnaissance, as fighters and bombers, and in ground support, and each role advanced considerably. Innovations included airlift (the capability to quickly move limited high-priority supplies, equipment, and personnel)[393] and strategic bombing (the bombing of enemy industrial and population centres to destroy the enemy's ability to wage war).[394] Anti-aircraft weaponry also advanced, including defences such as radar and surface-to-air artillery. The use of the jet aircraft was pioneered and, though its late introduction meant it had little impact, it led to jets becoming standard in air forces worldwide.[395] Although guided missiles were being developed, they were not advanced enough to reliably target aircraft until some years after the war.
251
+
252
+ Advances were made in nearly every aspect of naval warfare, most notably with aircraft carriers and submarines. Although aeronautical warfare had relatively little success at the start of the war, actions at Taranto, Pearl Harbor, and the Coral Sea established the carrier as the dominant capital ship in place of the battleship.[396][397][398] In the Atlantic, escort carriers proved to be a vital part of Allied convoys, increasing the effective protection radius and helping to close the Mid-Atlantic gap.[399] Carriers were also more economical than battleships because of the relatively low cost of aircraft[400] and because they did not need to be as heavily armoured.[401] Submarines, which had proved to be an effective weapon during the First World War,[402] were anticipated by all sides to be important in the second. The British focused development on anti-submarine weaponry and tactics, such as sonar and convoys, while Germany focused on improving its offensive capability, with designs such as the Type VII submarine and wolfpack tactics.[403][better source needed] Gradually, improving Allied technologies such as the Leigh light, hedgehog, squid, and homing torpedoes proved victorious over the German submarines.[citation needed]
253
+
254
+ Land warfare changed from the static front lines of trench warfare of World War I, which had relied on improved artillery that outmatched the speed of both infantry and cavalry, to increased mobility and combined arms. The tank, which had been used predominantly for infantry support in the First World War, had evolved into the primary weapon.[404] In the late 1930s, tank design was considerably more advanced than it had been during World War I,[405] and advances continued throughout the war with increases in speed, armour and firepower.[citation needed] At the start of the war, most commanders thought enemy tanks should be met by tanks with superior specifications.[406] This idea was challenged by the poor performance of the relatively light early tank guns against armour, and by German doctrine of avoiding tank-versus-tank combat. This, along with Germany's use of combined arms, was among the key elements of their highly successful blitzkrieg tactics across Poland and France.[404] Many means of destroying tanks, including indirect artillery, anti-tank guns (both towed and self-propelled), mines, short-ranged infantry anti-tank weapons, and other tanks were used.[406] Even with large-scale mechanisation, infantry remained the backbone of all forces,[407] and throughout the war, most infantry were equipped similarly to World War I.[408] The portable machine gun spread, a notable example being the German MG34; various submachine guns, suited to close combat in urban and jungle settings, also proliferated.[408] The assault rifle, a late-war development incorporating many features of the rifle and submachine gun, became the standard post-war infantry weapon for most armed forces.[409]
255
+
256
+ Most major belligerents attempted to solve the problems of complexity and security involved in using large codebooks for cryptography by designing ciphering machines, the best known being the German Enigma machine.[410] The development of SIGINT (signals intelligence) and cryptanalysis enabled the countering process, decryption. Notable examples were the Allied decryption of Japanese naval codes[411] and British Ultra, a pioneering method for decoding Enigma that benefited from information given to the United Kingdom by the Polish Cipher Bureau, which had been decoding early versions of Enigma before the war.[412] Another aspect of military intelligence was the use of deception, which the Allies used to great effect, such as in operations Mincemeat and Bodyguard.[411][413]
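+
+ To make the rotor-machine idea concrete, here is a deliberately simplified single-rotor sketch in Python. It is a toy model only: a real Enigma combined several stepping rotors with a reflector and a plugboard, and the fixed wiring below (that of the historical Enigma rotor I) is used purely for illustration. Because the rotor steps on every keypress, identical plaintext letters encrypt differently, which is what defeated simple frequency analysis:
+
+ import string
+
+ ALPHABET = string.ascii_uppercase
+ WIRING = "EKMFLGDQVZNTOWYHXUSPAIBRCJ"  # wiring of the historical Enigma rotor I
+
+ def toy_rotor_encrypt(plaintext: str, start_position: int = 0) -> str:
+     """Encrypt with a single rotor that steps once per letter (toy model)."""
+     out = []
+     pos = start_position
+     for ch in plaintext.upper():
+         if ch not in ALPHABET:
+             continue                                 # skip spaces and punctuation
+         shifted = (ALPHABET.index(ch) + pos) % 26    # letter enters the rotated rotor
+         out.append(WIRING[shifted])                  # substitution through the wiring
+         pos = (pos + 1) % 26                         # the rotor steps after each keypress
+     return "".join(out)
+
+ print(toy_rotor_encrypt("AAAA"))  # 'EKMF': the same letter encrypts differently each time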
257
+
258
+ Other technological and engineering feats achieved during, or as a result of, the war include the world's first programmable computers (Z3, Colossus, and ENIAC), guided missiles and modern rockets, the Manhattan Project's development of nuclear weapons, operations research and the development of artificial harbours and oil pipelines under the English Channel.[citation needed] Penicillin was first mass-produced and used during the war (see Stabilization and mass production of penicillin).[414]
259
+
en/5325.html.txt ADDED
@@ -0,0 +1,259 @@
13
+ World War II (WWII or WW2), also known as the Second World War, was a global war that lasted from 1939 to 1945. It involved the vast majority of the world's countries—including all the great powers—forming two opposing military alliances: the Allies and the Axis. In a state of total war, directly involving more than 100 million people from more than 30 countries, the major participants threw their entire economic, industrial, and scientific capabilities behind the war effort, blurring the distinction between civilian and military resources. World War II was the deadliest conflict in human history, marked by 70 to 85 million fatalities. Tens of millions of people died due to genocides (including the Holocaust), premeditated death from starvation, massacres, and disease. Aircraft played a major role in the conflict, including in the use of strategic bombing of population centres, and the only uses of nuclear weapons in war.
14
+
15
+ World War II is generally considered to have begun on 1 September 1939, with the invasion of Poland by Germany and subsequent declarations of war on Germany by France and the United Kingdom. From late 1939 to early 1941, in a series of campaigns and treaties, Germany conquered or controlled much of continental Europe, and formed the Axis alliance with Italy and Japan. Under the Molotov–Ribbentrop Pact of August 1939, Germany and the Soviet Union partitioned and annexed territories of their European neighbours: Poland, Finland, Romania and the Baltic states. Following the onset of campaigns in North Africa and East Africa, and the Fall of France in mid-1940, the war continued primarily between the European Axis powers and the British Empire, with war in the Balkans, the aerial Battle of Britain, the Blitz, and the Battle of the Atlantic. On 22 June 1941, Germany led the European Axis powers in an invasion of the Soviet Union, opening the largest land theatre of war in history and trapping the Axis, crucially the German Wehrmacht, in a war of attrition.
16
+
17
+ Japan, which aimed to dominate Asia and the Pacific, was at war with the Republic of China by 1937. In December 1941, Japan launched a surprise attack on the United States as well as European colonies in East Asia and the Pacific. Following an immediate US declaration of war against Japan, supported by one from the UK, the European Axis powers declared war on the United States in solidarity with their ally. Japan soon captured much of the Western Pacific, but its advances were halted in 1942 after Japan lost the critical Battle of Midway; later, Germany and Italy were defeated in North Africa and at Stalingrad in the Soviet Union. Key setbacks in 1943—which included a series of German defeats on the Eastern Front, the Allied invasions of Sicily and Italy, and Allied offensives in the Pacific—cost the Axis its initiative and forced it into strategic retreat on all fronts. In 1944, the Western Allies invaded German-occupied France, while the Soviet Union regained its territorial losses and turned towards Germany and its allies. During 1944 and 1945, the Japanese suffered reversals in mainland Asia, while the Allies crippled the Japanese Navy and captured key Western Pacific islands.
18
+
19
+ The war in Europe concluded with an invasion of Germany by the Western Allies and the Soviet Union, culminating in the capture of Berlin by Soviet troops, the suicide of Adolf Hitler and the German unconditional surrender on 8 May 1945. Following the Potsdam Declaration by the Allies on 26 July 1945 and the refusal of Japan to surrender on its terms, the United States dropped atomic bombs on the Japanese cities of Hiroshima and Nagasaki on 6 and 9 August, respectively. Faced with an imminent invasion of the Japanese archipelago, the possibility of additional atomic bombings, and the Soviet entry into the war against Japan and its invasion of Manchuria on 9 August, Japan announced its intention to surrender on 15 August 1945, cementing total victory in Asia for the Allies. In the wake of the war, Germany and Japan were occupied and war crimes tribunals were conducted against German and Japanese leaders.
20
+
21
+ World War II changed the political alignment and social structure of the globe. The United Nations (UN) was established to foster international co-operation and prevent future conflicts, and the victorious great powers—China, France, the Soviet Union, the United Kingdom, and the United States—became the permanent members of its Security Council. The Soviet Union and the United States emerged as rival superpowers, setting the stage for the nearly half-century-long Cold War. In the wake of European devastation, the influence of its great powers waned, triggering the decolonisation of Africa and Asia. Most countries whose industries had been damaged moved towards economic recovery and expansion. Political integration, especially in Europe, began as an effort to forestall future hostilities, end pre-war enmities and forge a sense of common identity.
22
+
23
+ The start of the war in Europe is generally held to be 1 September 1939,[1][2] beginning with the German invasion of Poland; the United Kingdom and France declared war on Germany two days later. The dates for the beginning of war in the Pacific include the start of the Second Sino-Japanese War on 7 July 1937,[3][4] or even the Japanese invasion of Manchuria on 19 September 1931.[5][6][7]
24
+
25
+ Others follow the British historian A. J. P. Taylor, who held that the Sino-Japanese War and war in Europe and its colonies occurred simultaneously, and the two wars merged in 1941. This article uses the conventional dating. Other starting dates sometimes used for World War II include the Italian invasion of Abyssinia on 3 October 1935.[8] The British historian Antony Beevor views the beginning of World War II as the Battles of Khalkhin Gol fought between Japan and the forces of Mongolia and the Soviet Union from May to September 1939.[9]
26
+
27
+ The exact date of the war's end is also not universally agreed upon. It was generally accepted at the time that the war ended with the armistice of 14 August 1945 (V-J Day), rather than with the formal surrender of Japan on 2 September 1945, which officially ended the war in Asia. A peace treaty with Japan was signed in 1951.[10] A treaty regarding Germany's future allowed the reunification of East and West Germany to take place in 1990 and resolved most post-World War II issues.[11] No formal peace treaty between Japan and the Soviet Union was ever signed.[12]
28
+
29
+ World War I had radically altered the political European map, with the defeat of the Central Powers—including Austria-Hungary, Germany, Bulgaria and the Ottoman Empire—and the 1917 Bolshevik seizure of power in Russia, which eventually led to the founding of the Soviet Union. Meanwhile, the victorious Allies of World War I, such as France, Belgium, Italy, Romania, and Greece, gained territory, and new nation-states were created out of the collapse of Austria-Hungary and the Ottoman and Russian Empires.
30
+
31
+ To prevent a future world war, the League of Nations was created during the 1919 Paris Peace Conference. The organisation's primary goals were to prevent armed conflict through collective security, military and naval disarmament, and settling international disputes through peaceful negotiations and arbitration.
32
+
33
+ Despite strong pacifist sentiment after World War I,[13] its aftermath still caused irredentist and revanchist nationalism in several European states. These sentiments were especially marked in Germany because of the significant territorial, colonial, and financial losses incurred by the Treaty of Versailles. Under the treaty, Germany lost around 13 percent of its home territory and all its overseas possessions, while German annexation of other states was prohibited, reparations were imposed, and limits were placed on the size and capability of the country's armed forces.[14]
34
+
35
+ The German Empire was dissolved in the German Revolution of 1918–1919, and a democratic government, later known as the Weimar Republic, was created. The interwar period saw strife between supporters of the new republic and hardline opponents on both the right and left. Italy, as an Entente ally, had made some post-war territorial gains; however, Italian nationalists were angered that the promises made by the United Kingdom and France to secure Italian entrance into the war were not fulfilled in the peace settlement. From 1922 to 1925, the Fascist movement led by Benito Mussolini seized power in Italy with a nationalist, totalitarian, and class collaborationist agenda that abolished representative democracy, repressed socialist, left-wing and liberal forces, and pursued an aggressive expansionist foreign policy aimed at making Italy a world power, promising the creation of a "New Roman Empire".[15]
36
+
37
+ Adolf Hitler, after an unsuccessful attempt to overthrow the German government in 1923, eventually became the Chancellor of Germany in 1933. He abolished democracy, espousing a radical, racially motivated revision of the world order, and soon began a massive rearmament campaign.[16] Meanwhile, France, to secure its alliance, allowed Italy a free hand in Ethiopia, which Italy desired as a colonial possession. The situation was aggravated in early 1935 when the Territory of the Saar Basin was legally reunited with Germany and Hitler repudiated the Treaty of Versailles, accelerated his rearmament programme, and introduced conscription.[17]
38
+
39
+ The United Kingdom, France and Italy formed the Stresa Front in April 1935 in order to contain Germany, a key step towards military globalisation; however, that June, the United Kingdom made an independent naval agreement with Germany, easing prior restrictions. The Soviet Union, concerned by Germany's goals of capturing vast areas of Eastern Europe, drafted a treaty of mutual assistance with France. Before taking effect, though, the Franco-Soviet pact was required to go through the bureaucracy of the League of Nations, which rendered it essentially toothless.[18] The United States, concerned with events in Europe and Asia, passed the Neutrality Act in August of the same year.[19]
40
+
41
+ Hitler defied the Versailles and Locarno treaties by remilitarising the Rhineland in March 1936, encountering little opposition due to appeasement.[20] In October 1936, Germany and Italy formed the Rome–Berlin Axis. A month later, Germany and Japan signed the Anti-Comintern Pact, which Italy joined the following year.[21]
42
+
43
+ The Kuomintang (KMT) party in China launched a unification campaign against regional warlords and nominally unified China in the mid-1920s, but was soon embroiled in a civil war against its former Chinese Communist Party allies[22] and new regional warlords. In 1931, an increasingly militaristic Empire of Japan, which had long sought influence in China[23] as the first step of what its government saw as the country's right to rule Asia, staged the Mukden Incident as a pretext to invade Manchuria and establish the puppet state of Manchukuo.[24]
44
+
45
+ China appealed to the League of Nations to stop the Japanese invasion of Manchuria. Japan withdrew from the League of Nations after being condemned for its incursion into Manchuria. The two nations then fought several battles, in Shanghai, Rehe and Hebei, until the Tanggu Truce was signed in 1933. Thereafter, Chinese volunteer forces continued the resistance to Japanese aggression in Manchuria, Chahar and Suiyuan.[25] After the 1936 Xi'an Incident, the Kuomintang and communist forces agreed on a ceasefire to present a united front to oppose Japan.[26]
46
+
47
+ The Second Italo–Ethiopian War was a brief colonial war that began in October 1935 and ended in May 1936. The war began with the invasion of the Ethiopian Empire (also known as Abyssinia) by the armed forces of the Kingdom of Italy (Regno d'Italia), which was launched from Italian Somaliland and Eritrea.[27] The war resulted in the military occupation of Ethiopia and its annexation into the newly created colony of Italian East Africa (Africa Orientale Italiana, or AOI); in addition, it exposed the weakness of the League of Nations as a force to preserve peace. Both Italy and Ethiopia were member nations, but the League did little when the former clearly violated Article X of the League's Covenant.[28] The United Kingdom and France supported imposing sanctions on Italy for the invasion, but the sanctions were not fully enforced and failed to end the Italian invasion.[29] Italy subsequently dropped its objections to Germany's goal of absorbing Austria.[30]
48
+
49
+ When civil war broke out in Spain, Hitler and Mussolini lent military support to the Nationalist rebels, led by General Francisco Franco. Italy supported the Nationalists to a greater extent than the Nazis did: altogether Mussolini sent to Spain more than 70,000 ground troops and 6,000 aviation personnel, as well as about 720 aircraft.[31] The Soviet Union supported the existing government, the Spanish Republic. More than 30,000 foreign volunteers, known as the International Brigades, also fought against the Nationalists. Both Germany and the Soviet Union used this proxy war as an opportunity to test in combat their most advanced weapons and tactics. The Nationalists won the civil war in April 1939; Franco, now dictator, remained officially neutral during World War II but generally favoured the Axis.[32] His greatest collaboration with Germany was the sending of volunteers to fight on the Eastern Front.[33]
50
+
51
+ In July 1937, Japan captured the former Chinese imperial capital of Peking after instigating the Marco Polo Bridge Incident, which culminated in the Japanese campaign to invade all of China.[34] The Soviets quickly signed a non-aggression pact with China to lend materiel support, effectively ending China's prior co-operation with Germany. From September to November, the Japanese attacked Taiyuan, engaged the Kuomintang Army around Xinkou,[35] and fought Communist forces in Pingxingguan.[36][37] Generalissimo Chiang Kai-shek deployed his best army to defend Shanghai, but, after three months of fighting, Shanghai fell. The Japanese continued to push the Chinese forces back, capturing the capital Nanking in December 1937. After the fall of Nanking, tens of thousands if not hundreds of thousands of Chinese civilians and disarmed combatants were murdered by the Japanese.[38][39]
52
+
53
+ In March 1938, Nationalist Chinese forces won their first major victory at Taierzhuang, but then the city of Xuzhou was taken by the Japanese in May.[40] In June 1938, Chinese forces stalled the Japanese advance by flooding the Yellow River; this manoeuvre bought time for the Chinese to prepare their defences at Wuhan, but the city was taken by October.[41] Japanese military victories did not bring about the collapse of Chinese resistance that Japan had hoped to achieve; instead, the Chinese government relocated inland to Chongqing and continued the war.[42][43]
54
+
55
+ In the mid-to-late 1930s, Japanese forces in Manchukuo had sporadic border clashes with the Soviet Union and Mongolia. The Japanese doctrine of Hokushin-ron, which emphasised Japan's expansion northward, was favoured by the Imperial Army during this time. With the Japanese defeat at Khalkhin Gol in 1939, the ongoing Second Sino-Japanese War[44] and ally Nazi Germany pursuing neutrality with the Soviets, this policy would prove difficult to maintain. Japan and the Soviet Union eventually signed a Neutrality Pact in April 1941, and Japan adopted the doctrine of Nanshin-ron, promoted by the Navy, which took its focus southward, eventually leading to its war with the United States and the Western Allies.[45][46]
56
+
57
+ In Europe, Germany and Italy were becoming more aggressive. In March 1938, Germany annexed Austria, again provoking little response from other European powers.[47] Encouraged, Hitler began pressing German claims on the Sudetenland, an area of Czechoslovakia with a predominantly ethnic German population. Soon the United Kingdom and France followed the appeasement policy of British Prime Minister Neville Chamberlain and conceded this territory to Germany in the Munich Agreement, which was made against the wishes of the Czechoslovak government, in exchange for a promise of no further territorial demands.[48] Soon afterwards, Germany and Italy forced Czechoslovakia to cede additional territory to Hungary, and Poland annexed Czechoslovakia's Zaolzie region.[49]
58
+
59
+ Although all of Germany's stated demands had been satisfied by the agreement, privately Hitler was furious that British interference had prevented him from seizing all of Czechoslovakia in one operation. In subsequent speeches Hitler attacked British and Jewish "war-mongers" and in January 1939 secretly ordered a major build-up of the German navy to challenge British naval supremacy. In March 1939, Germany invaded the remainder of Czechoslovakia and subsequently split it into the German Protectorate of Bohemia and Moravia and a pro-German client state, the Slovak Republic.[50] Hitler also delivered an ultimatum to Lithuania on 20 March 1939, forcing the concession of the Klaipėda Region, formerly the German Memelland.[51]
60
+
61
+ Greatly alarmed and with Hitler making further demands on the Free City of Danzig, the United Kingdom and France guaranteed their support for Polish independence; when Italy conquered Albania in April 1939, the same guarantee was extended to Romania and Greece.[52] Shortly after the Franco-British pledge to Poland, Germany and Italy formalised their own alliance with the Pact of Steel.[53] Hitler accused the United Kingdom and Poland of trying to "encircle" Germany and renounced the Anglo-German Naval Agreement and the German–Polish Non-Aggression Pact.[54]
62
+
63
+ The situation reached a general crisis in late August as German troops continued to mobilise against the Polish border. On 23 August, when tripartite negotiations about a military alliance between France, the United Kingdom and Soviet Union stalled,[55] the Soviet Union signed a non-aggression pact with Germany.[56] This pact had a secret protocol that defined German and Soviet "spheres of influence" (western Poland and Lithuania for Germany; eastern Poland, Finland, Estonia, Latvia and Bessarabia for the Soviet Union), and raised the question of continuing Polish independence.[57] The pact neutralised the possibility of Soviet opposition to a campaign against Poland and assured that Germany would not have to face the prospect of a two-front war, as it had in World War I. Immediately after that, Hitler ordered the attack to proceed on 26 August, but upon hearing that the United Kingdom had concluded a formal mutual assistance pact with Poland, and that Italy would maintain neutrality, he decided to delay it.[58]
64
+
65
+ In response to British requests for direct negotiations to avoid war, Germany made demands on Poland, which only served as a pretext to worsen relations.[59] On 29 August, Hitler demanded that a Polish plenipotentiary immediately travel to Berlin to negotiate the handover of Danzig, and to allow a plebiscite in the Polish Corridor in which the German minority would vote on secession.[59] The Poles refused to comply with the German demands, and on the night of 30–31 August in a stormy meeting with the British ambassador Neville Henderson, Ribbentrop declared that Germany considered its claims rejected.[60]
66
+
67
+ On 1 September 1939, Germany invaded Poland after having staged several false flag border incidents as a pretext to initiate the invasion.[61] The first German attack of the war came against the Polish defenses at Westerplatte.[62] The United Kingdom responded with an ultimatum to Germany to cease military operations, and on 3 September, after the ultimatum was ignored, France and Britain declared war on Germany, followed by Australia, New Zealand, South Africa and Canada. The alliance provided no direct military support to Poland, outside of a cautious French probe into the Saarland.[63] The Western Allies also began a naval blockade of Germany, which aimed to damage the country's economy and the war effort.[64] Germany responded by ordering U-boat warfare against Allied merchant and warships, which would later escalate into the Battle of the Atlantic.[65]
+
+ On 8 September, German troops reached the suburbs of Warsaw. The Polish counter-offensive to the west halted the German advance for several days, but it was outflanked and encircled by the Wehrmacht. Remnants of the Polish army broke through to besieged Warsaw. On 17 September 1939, after signing a cease-fire with Japan, the Soviets invaded Eastern Poland[66] under the pretext that the Polish state had ceased to exist.[67] On 27 September, the Warsaw garrison surrendered to the Germans, and the last large operational unit of the Polish Army surrendered on 6 October. Despite the military defeat, Poland never surrendered; instead, it formed the Polish government-in-exile, and a clandestine state apparatus remained in occupied Poland.[68] A significant part of Polish military personnel evacuated to Romania and the Baltic countries; many of them would fight against the Axis in other theatres of the war.[69]
+
+ Germany annexed the western and occupied the central part of Poland, and the Soviet Union annexed its eastern part; small shares of Polish territory were transferred to Lithuania and Slovakia. On 6 October, Hitler made a public peace overture to the United Kingdom and France but said that the future of Poland was to be determined exclusively by Germany and the Soviet Union. The proposal was rejected,[60] and Hitler ordered an immediate offensive against France,[70] which would be postponed until the spring of 1940 due to bad weather.[71][72][73]
+
+ The Soviet Union forced the Baltic countries—Estonia, Latvia and Lithuania, the states that were in the Soviet "sphere of influence" under the Molotov–Ribbentrop Pact—to sign "mutual assistance pacts" that stipulated the stationing of Soviet troops in these countries. Soon after, significant Soviet military contingents were moved there.[74][75][76] Finland refused to sign a similar pact and rejected demands to cede part of its territory to the Soviet Union. The Soviet Union invaded Finland in November 1939,[77] and was expelled from the League of Nations.[78] Despite overwhelming numerical superiority, Soviet military success was modest, and the Finno-Soviet war ended in March 1940 with minimal Finnish concessions.[79]
+
+ In June 1940, the Soviet Union forcibly annexed Estonia, Latvia and Lithuania,[75] and the disputed Romanian regions of Bessarabia, northern Bukovina and Hertza. Meanwhile, Nazi-Soviet political rapprochement and economic co-operation[80][81] gradually stalled,[82][83] and both states began preparations for war.[84]
+
+ In April 1940, Germany invaded Denmark and Norway to protect shipments of iron ore from Sweden, which the Allies were attempting to cut off.[85] Denmark capitulated after a few hours, and Norway was conquered within two months[86] despite Allied support. British discontent over the Norwegian campaign led to the appointment of Winston Churchill as Prime Minister on 10 May 1940.[87]
+
+ On the same day, Germany launched an offensive against France. To circumvent the strong Maginot Line fortifications on the Franco-German border, Germany directed its attack at the neutral nations of Belgium, the Netherlands, and Luxembourg.[88] The Germans carried out a flanking manoeuvre through the Ardennes region,[89] which was mistakenly perceived by the Allies as an impenetrable natural barrier against armoured vehicles.[90][91] By successfully implementing new blitzkrieg tactics, the Wehrmacht rapidly advanced to the Channel and cut off the Allied forces in Belgium, trapping the bulk of the Allied armies in a cauldron on the Franco-Belgian border near Lille. The United Kingdom was able to evacuate a significant number of Allied troops from the continent by early June, although they had to abandon almost all of their equipment.[92]
+
+ On 10 June, Italy invaded France, declaring war on both France and the United Kingdom.[93] The Germans turned south against the weakened French army, and Paris fell to them on 14 June. Eight days later France signed an armistice with Germany; it was divided into German and Italian occupation zones,[94] and an unoccupied rump state under the Vichy Regime, which, though officially neutral, was generally aligned with Germany. France kept its fleet, which the United Kingdom attacked on 3 July in an attempt to prevent its seizure by Germany.[95]
+
+ The Battle of Britain[96] began in early July with Luftwaffe attacks on shipping and harbours.[97] The United Kingdom rejected Hitler's ultimatum,[98] and the German air superiority campaign started in August but failed to defeat RAF Fighter Command, forcing the indefinite postponement of the proposed German invasion of Britain. The German strategic bombing offensive intensified with night attacks on London and other cities in the Blitz, but failed to significantly disrupt the British war effort[97] and largely ended in May 1941.[99]
+
+ Using newly captured French ports, the German Navy enjoyed success against an over-extended Royal Navy, using U-boats against British shipping in the Atlantic.[100] The British Home Fleet scored a significant victory on 27 May 1941 by sinking the German battleship Bismarck.[101]
+
+ In November 1939, the United States was taking measures to assist China and the Western Allies, and amended the Neutrality Act to allow "cash and carry" purchases by the Allies.[102] In 1940, following the German capture of Paris, the size of the United States Navy was significantly increased. In September the United States further agreed to a trade of American destroyers for British bases.[103] Still, a large majority of the American public continued to oppose any direct military intervention in the conflict well into 1941.[104] In December 1940 Roosevelt accused Hitler of planning world conquest and ruled out any negotiations as useless, calling for the United States to become an "arsenal of democracy" and promoting Lend-Lease programmes of aid to support the British war effort.[98] The United States started strategic planning to prepare for a full-scale offensive against Germany.[105]
+
+ At the end of September 1940, the Tripartite Pact formally united Japan, Italy, and Germany as the Axis Powers. The Tripartite Pact stipulated that any country, with the exception of the Soviet Union, that attacked any Axis Power would be forced to go to war against all three.[106] The Axis expanded in November 1940 when Hungary, Slovakia and Romania joined.[107] Romania and Hungary would make major contributions to the Axis war against the Soviet Union, in Romania's case partially to recapture territory ceded to the Soviet Union.[108]
+
+ In early June 1940 the Italian Regia Aeronautica attacked and besieged Malta, a British possession. In late summer through early autumn Italy conquered British Somaliland and made an incursion into British-held Egypt. In October Italy attacked Greece, but the attack was repulsed with heavy Italian casualties; the campaign ended within months with minor territorial changes.[109] Germany started preparation for an invasion of the Balkans to assist Italy, to prevent the British from gaining a foothold there, which would be a potential threat to the Romanian oil fields, and to strike against British dominance of the Mediterranean.[110]
+
+ In December 1940, British Empire forces began counter-offensives against Italian forces in Egypt and Italian East Africa.[111] The offensives were highly successful; by early February 1941 Italy had lost control of eastern Libya, and large numbers of Italian troops had been taken prisoner. The Italian Navy also suffered significant defeats, with the Royal Navy putting three Italian battleships out of commission by a carrier attack at Taranto and neutralising several more warships at the Battle of Cape Matapan.[112]
+
+ Italian defeats prompted Germany to deploy an expeditionary force to North Africa, and at the end of March 1941 Rommel's Afrika Korps launched an offensive which drove back the Commonwealth forces.[113] In under a month, Axis forces advanced to western Egypt and besieged the port of Tobruk.[114]
+
+ By late March 1941 Bulgaria and Yugoslavia signed the Tripartite Pact; however, the Yugoslav government was overthrown two days later by pro-British nationalists. Germany responded with simultaneous invasions of both Yugoslavia and Greece, commencing on 6 April 1941; both nations were forced to surrender within the month.[115] The airborne invasion of the Greek island of Crete at the end of May completed the German conquest of the Balkans.[116] Although the Axis victory was swift, bitter and large-scale partisan warfare subsequently broke out against the Axis occupation of Yugoslavia and continued until the end of the war.[117]
+
+ In the Middle East, in May Commonwealth forces quashed an uprising in Iraq which had been supported by German aircraft from bases within Vichy-controlled Syria.[118] Between June and July they invaded and occupied the French possessions Syria and Lebanon, with the assistance of the Free French.[119]
+
+ With the situation in Europe and Asia relatively stable, Germany, Japan, and the Soviet Union made preparations. With the Soviets wary of mounting tensions with Germany and the Japanese planning to take advantage of the European War by seizing resource-rich European possessions in Southeast Asia, the two powers signed the Soviet–Japanese Neutrality Pact in April 1941.[120] By contrast, the Germans were steadily making preparations for an attack on the Soviet Union, massing forces on the Soviet border.[121]
+
+ Hitler believed that the United Kingdom's refusal to end the war was based on the hope that the United States and the Soviet Union would enter the war against Germany sooner or later.[122] He therefore decided to try to strengthen Germany's relations with the Soviets or, failing that, to attack and eliminate them as a factor. In November 1940, negotiations took place to determine if the Soviet Union would join the Tripartite Pact. The Soviets showed some interest but asked for concessions from Finland, Bulgaria, Turkey, and Japan that Germany considered unacceptable. On 18 December 1940, Hitler issued the directive to prepare for an invasion of the Soviet Union.[123]
+
+ On 22 June 1941, Germany, supported by Italy and Romania, invaded the Soviet Union in Operation Barbarossa, with Germany accusing the Soviets of plotting against them. They were joined shortly by Finland and Hungary.[124] The primary targets of this surprise offensive[125] were the Baltic region, Moscow and Ukraine, with the ultimate goal of ending the 1941 campaign near the Arkhangelsk–Astrakhan line, from the Caspian Sea to the White Sea. Hitler's objectives were to eliminate the Soviet Union as a military power, exterminate Communism, generate Lebensraum ("living space")[126] by dispossessing the native population[127] and guarantee access to the strategic resources needed to defeat Germany's remaining rivals.[128]
+
+ Although the Red Army was preparing for strategic counter-offensives before the war,[129] Barbarossa forced the Soviet supreme command to adopt a strategic defence. During the summer, the Axis made significant gains into Soviet territory, inflicting immense losses in both personnel and materiel. By mid-August, however, the German Army High Command decided to suspend the offensive of a considerably depleted Army Group Centre, and to divert the 2nd Panzer Group to reinforce troops advancing towards central Ukraine and Leningrad.[130] The Kiev offensive was overwhelmingly successful, resulting in encirclement and elimination of four Soviet armies, and made possible further advance into Crimea and industrially developed Eastern Ukraine (the First Battle of Kharkov).[131]
+
+ The diversion of three quarters of the Axis troops and the majority of their air forces from France and the central Mediterranean to the Eastern Front[132] prompted the United Kingdom to reconsider its grand strategy.[133] In July, the UK and the Soviet Union formed a military alliance against Germany[134] and in August, the United Kingdom and the United States jointly issued the Atlantic Charter, which outlined British and American goals for the postwar world.[135] In late August the British and Soviets invaded neutral Iran to secure the Persian Corridor and Iran's oil fields, and to preempt any Axis advances through Iran toward the Baku oil fields or British India.[136]
+
+ By October Axis operational objectives in Ukraine and the Baltic region were achieved, with only the sieges of Leningrad[137] and Sevastopol continuing.[138] A major offensive against Moscow was renewed; after two months of fierce battles in increasingly harsh weather, the German army almost reached the outer suburbs of Moscow, where the exhausted troops[139] were forced to suspend their offensive.[140] Large territorial gains were made by Axis forces, but their campaign had failed to achieve its main objectives: two key cities remained in Soviet hands, the Soviet capability to resist was not broken, and the Soviet Union retained a considerable part of its military potential. The blitzkrieg phase of the war in Europe had ended.[141]
+
+ By early December, freshly mobilised reserves[142] allowed the Soviets to achieve numerical parity with Axis troops.[143] This, as well as intelligence data which established that a minimal number of Soviet troops in the East would be sufficient to deter any attack by the Japanese Kwantung Army,[144] allowed the Soviets to begin a massive counter-offensive that started on 5 December all along the front and pushed German troops 100–250 kilometres (62–155 mi) west.[145]
+
+ Following the Japanese false flag Mukden Incident in 1931, the Japanese shelling of the American gunboat USS Panay in 1937, and the 1937–38 Nanjing Massacre, Japanese-American relations deteriorated. In 1939, the United States notified Japan that it would not be extending its trade treaty, and American public opinion opposing Japanese expansionism led to a series of economic sanctions, the Export Control Acts, which banned U.S. exports of chemicals, minerals and military parts to Japan and increased economic pressure on the Japanese regime.[98][146][147] During 1939 Japan launched its first attack against Changsha, a strategically important Chinese city, but was repulsed by late September.[148] Despite several offensives by both sides, the war between China and Japan was stalemated by 1940. To increase pressure on China by blocking supply routes, and to better position Japanese forces in the event of a war with the Western powers, Japan invaded and occupied northern Indochina in September 1940.[149]
+
+ Chinese nationalist forces launched a large-scale counter-offensive in early 1940. In August, Chinese communists launched an offensive in Central China; in retaliation, Japan instituted harsh measures in occupied areas to reduce human and material resources for the communists.[150] Continued antipathy between Chinese communist and nationalist forces culminated in armed clashes in January 1941, effectively ending their co-operation.[151] In March, the Japanese 11th Army attacked the headquarters of the Chinese 19th Army but was repulsed during the Battle of Shanggao.[152] In September, Japan attempted to take the city of Changsha again and clashed with Chinese nationalist forces.[153]
+
+ German successes in Europe encouraged Japan to increase pressure on European governments in Southeast Asia. The Dutch government agreed to provide Japan some oil supplies from the Dutch East Indies, but negotiations for additional access to their resources ended in failure in June 1941.[154] In July 1941 Japan sent troops to southern Indochina, thus threatening British and Dutch possessions in the Far East. The United States, United Kingdom, and other Western governments reacted to this move with a freeze on Japanese assets and a total oil embargo.[155][156] At the same time, Japan was planning an invasion of the Soviet Far East, intending to capitalise on the German invasion in the west, but abandoned the operation after the sanctions.[157]
+
+ Since early 1941 the United States and Japan had been engaged in negotiations in an attempt to improve their strained relations and end the war in China. During these negotiations, Japan advanced a number of proposals which were dismissed by the Americans as inadequate.[158] At the same time the United States, the United Kingdom, and the Netherlands engaged in secret discussions for the joint defence of their territories, in the event of a Japanese attack against any of them.[159] Roosevelt reinforced the Philippines (an American protectorate scheduled for independence in 1946) and warned Japan that the United States would react to Japanese attacks against any "neighboring countries".[159]
+
+ Frustrated at the lack of progress and feeling the pinch of the American–British–Dutch sanctions, Japan prepared for war. On 20 November, a new government under Hideki Tojo presented an interim proposal as its final offer. It called for the end of American aid to China and for lifting the embargo on the supply of oil and other resources to Japan. In exchange, Japan promised not to launch any attacks in Southeast Asia and to withdraw its forces from southern Indochina.[158] The American counter-proposal of 26 November required that Japan evacuate all of China without conditions and conclude non-aggression pacts with all Pacific powers.[160] That meant Japan was essentially forced to choose between abandoning its ambitions in China and seizing the natural resources it needed in the Dutch East Indies by force;[161][162] the Japanese military did not consider the former an option, and many officers considered the oil embargo an unspoken declaration of war.[163]
+
+ Japan planned to rapidly seize European colonies in Asia to create a large defensive perimeter stretching into the Central Pacific. The Japanese would then be free to exploit the resources of Southeast Asia while exhausting the over-stretched Allies by fighting a defensive war.[164][165] To prevent American intervention while securing the perimeter, it was further planned to neutralise the United States Pacific Fleet and the American military presence in the Philippines from the outset.[166] On 7 December 1941 (8 December in Asian time zones), Japan attacked British and American holdings with near-simultaneous offensives against Southeast Asia and the Central Pacific.[167] These included an attack on the American fleets at Pearl Harbor and the Philippines, landings in Malaya and Thailand,[167] and the Battle of Hong Kong.[168]
+
+ The Japanese invasion of Thailand led to Thailand's decision to ally itself with Japan, and the other Japanese attacks led the United States, United Kingdom, China, Australia, and several other states to formally declare war on Japan, whereas the Soviet Union, being heavily involved in large-scale hostilities with European Axis countries, maintained its neutrality agreement with Japan.[169] Germany, followed by the other Axis states, declared war on the United States[170] in solidarity with Japan, citing as justification the American attacks on German war vessels that had been ordered by Roosevelt.[124][171]
+
+ On 1 January 1942, the Allied Big Four[172]—the Soviet Union, China, the United Kingdom and the United States—and 22 smaller or exiled governments issued the Declaration by United Nations, thereby affirming the Atlantic Charter,[173] and agreeing not to sign a separate peace with the Axis powers.[174]
+
+ During 1942, Allied officials debated the appropriate grand strategy to pursue. All agreed that defeating Germany was the primary objective. The Americans favoured a straightforward, large-scale attack on Germany through France. The Soviets were also demanding a second front. The British, on the other hand, argued that military operations should target peripheral areas to wear out German strength, leading to increasing demoralisation, and to bolster resistance forces. Germany itself would be subject to a heavy bombing campaign. An offensive against Germany would then be launched primarily by Allied armour without using large-scale armies.[175] Eventually, the British persuaded the Americans that a landing in France was infeasible in 1942 and they should instead focus on driving the Axis out of North Africa.[176]
+
+ At the Casablanca Conference in early 1943, the Allies reiterated the statements issued in the 1942 Declaration, and demanded the unconditional surrender of their enemies. The British and Americans agreed to continue to press the initiative in the Mediterranean by invading Sicily to fully secure the Mediterranean supply routes.[177] Although the British argued for further operations in the Balkans to bring Turkey into the war, in May 1943, the Americans extracted a British commitment to limit Allied operations in the Mediterranean to an invasion of the Italian mainland and to invade France in 1944.[178]
+
+ By the end of April 1942, Japan and its ally Thailand had almost fully conquered Burma, Malaya, the Dutch East Indies, Singapore, and Rabaul, inflicting severe losses on Allied troops and taking a large number of prisoners.[179] Despite stubborn resistance by Filipino and US forces, the Philippine Commonwealth was eventually captured in May 1942, forcing its government into exile.[180] On 16 April, in Burma, 7,000 British soldiers were encircled by the Japanese 33rd Division during the Battle of Yenangyaung and rescued by the Chinese 38th Division.[181] Japanese forces also achieved naval victories in the South China Sea, Java Sea and Indian Ocean,[182] and bombed the Allied naval base at Darwin, Australia. In January 1942, the only Allied success against Japan was a Chinese victory at Changsha.[183] These easy victories over unprepared US and European opponents left Japan overconfident, as well as overextended.[184]
+
+ In early May 1942, Japan initiated operations to capture Port Moresby by amphibious assault and thus sever communications and supply lines between the United States and Australia. The planned invasion was thwarted when an Allied task force, centred on two American fleet carriers, fought Japanese naval forces to a draw in the Battle of the Coral Sea.[185] Japan's next plan, motivated by the earlier Doolittle Raid, was to seize Midway Atoll and lure American carriers into battle to be eliminated; as a diversion, Japan would also send forces to occupy the Aleutian Islands in Alaska.[186] In mid-May, Japan started the Zhejiang-Jiangxi Campaign in China, with the goal of inflicting retribution on the Chinese who aided the surviving American airmen in the Doolittle Raid by destroying air bases and fighting against the Chinese 23rd and 32nd Army Groups.[187][188] In early June, Japan put its operations into action, but the Americans, having broken Japanese naval codes in late May, were fully aware of the plans and order of battle, and used this knowledge to achieve a decisive victory at Midway over the Imperial Japanese Navy.[189]
+
+ With its capacity for aggressive action greatly diminished as a result of the Midway battle, Japan chose to focus on a belated attempt to capture Port Moresby by an overland campaign in the Territory of Papua.[190] The Americans planned a counter-attack against Japanese positions in the southern Solomon Islands, primarily Guadalcanal, as a first step towards capturing Rabaul, the main Japanese base in Southeast Asia.[191]
+
+ Both plans started in July, but by mid-September, the Battle for Guadalcanal took priority for the Japanese, and troops in New Guinea were ordered to withdraw from the Port Moresby area to the northern part of the island, where they faced Australian and United States troops in the Battle of Buna-Gona.[192] Guadalcanal soon became a focal point for both sides, with heavy commitments of troops and ships. By the start of 1943, the Japanese were defeated on the island and withdrew their troops.[193] In Burma, Commonwealth forces mounted two operations. The first, an offensive into the Arakan region in late 1942, went disastrously, forcing a retreat back to India by May 1943.[194] The second was the insertion of irregular forces behind Japanese front-lines in February which, by the end of April, had achieved mixed results.[195]
+
+ Despite considerable losses, in early 1942 Germany and its allies stopped a major Soviet offensive in central and southern Russia, keeping most territorial gains they had achieved during the previous year.[196] In May the Germans defeated Soviet offensives in the Kerch Peninsula and at Kharkov,[197] and then launched their main summer offensive against southern Russia in June 1942, to seize the oil fields of the Caucasus and occupy the Kuban steppe, while maintaining positions on the northern and central areas of the front. The Germans split Army Group South into two groups: Army Group A advanced to the lower Don River and struck south-east to the Caucasus, while Army Group B headed towards the Volga River. The Soviets decided to make their stand at Stalingrad on the Volga.[198]
+
+ By mid-November, the Germans had nearly taken Stalingrad in bitter street fighting. The Soviets began their second winter counter-offensive, starting with an encirclement of German forces at Stalingrad,[199] and an assault on the Rzhev salient near Moscow, though the latter failed disastrously.[200] By early February 1943, the German Army had taken tremendous losses; German troops at Stalingrad had been defeated,[201] and the front-line had been pushed back beyond its position before the summer offensive. In mid-February, after the Soviet push had tapered off, the Germans launched another attack on Kharkov, creating a salient in their front line around the Soviet city of Kursk.[202]
+
+ Exploiting poor American naval command decisions, the German navy ravaged Allied shipping off the American Atlantic coast.[203] By November 1941, Commonwealth forces had launched a counter-offensive, Operation Crusader, in North Africa, and reclaimed all the gains the Germans and Italians had made.[204] In North Africa, the Germans launched an offensive in January, pushing the British back to positions at the Gazala Line by early February,[205] followed by a temporary lull in combat which Germany used to prepare for their upcoming offensives.[206] Concerns that the Japanese might use bases in Vichy-held Madagascar caused the British to invade the island in early May 1942.[207] An Axis offensive in Libya forced an Allied retreat deep inside Egypt until Axis forces were stopped at El Alamein.[208] On the Continent, raids of Allied commandos on strategic targets, culminating in the disastrous Dieppe Raid,[209] demonstrated the Western Allies' inability to launch an invasion of continental Europe without much better preparation, equipment, and operational security.[210]
+
+ In August 1942, the Allies succeeded in repelling a second attack against El Alamein[211] and, at a high cost, managed to deliver desperately needed supplies to the besieged Malta.[212] A few months later, the Allies commenced an attack of their own in Egypt, dislodging the Axis forces and beginning a drive west across Libya.[213] This attack was followed up shortly after by Anglo-American landings in French North Africa, which resulted in the region joining the Allies.[214] Hitler responded to the French colony's defection by ordering the occupation of Vichy France;[214] although Vichy forces did not resist this violation of the armistice, they managed to scuttle their fleet to prevent its capture by German forces.[214][215] The Axis forces in Africa withdrew into Tunisia, which was conquered by the Allies in May 1943.[214][216]
+
+ In June 1943 the British and Americans began a strategic bombing campaign against Germany with the goals of disrupting the war economy, reducing morale, and "de-housing" the civilian population.[217] The firebombing of Hamburg was among the first attacks in this campaign, inflicting significant casualties and considerable losses on the infrastructure of this important industrial centre.[218]
+
+ After the Guadalcanal Campaign, the Allies initiated several operations against Japan in the Pacific. In May 1943, Canadian and US forces were sent to eliminate Japanese forces from the Aleutians.[219] Soon after, the United States, with support from Australia, New Zealand and Pacific Islander forces, began major ground, sea and air operations to isolate Rabaul by capturing surrounding islands, and breach the Japanese Central Pacific perimeter at the Gilbert and Marshall Islands.[220] By the end of March 1944, the Allies had completed both of these objectives and had also neutralised the major Japanese base at Truk in the Caroline Islands. In April, the Allies launched an operation to retake Western New Guinea.[221]
+
+ In the Soviet Union, both the Germans and the Soviets spent the spring and early summer of 1943 preparing for large offensives in central Russia. On 4 July 1943, Germany attacked Soviet forces around the Kursk Bulge. Within a week, German forces had exhausted themselves against the Soviets' deeply echeloned and well-constructed defences,[222] and for the first time in the war Hitler cancelled the operation before it had achieved tactical or operational success.[223] This decision was partially affected by the Western Allies' invasion of Sicily launched on 9 July, which, combined with previous Italian failures, resulted in the ousting and arrest of Mussolini later that month.[224]
+
+ On 12 July 1943, the Soviets launched their own counter-offensives, thereby dispelling any chance of German victory or even stalemate in the east. The Soviet victory at Kursk marked the end of German superiority,[225] giving the Soviet Union the initiative on the Eastern Front.[226][227] The Germans tried to stabilise their eastern front along the hastily fortified Panther–Wotan line, but the Soviets broke through it at Smolensk and by the Lower Dnieper Offensives.[228]
+
+ On 3 September 1943, the Western Allies invaded the Italian mainland, following Italy's armistice with the Allies.[229] Germany, with the help of Italian fascists, responded by disarming Italian forces, which in many places had been left without orders, seizing military control of Italian areas,[230] and creating a series of defensive lines.[231] German special forces then rescued Mussolini, who soon established a new client state in German-occupied Italy named the Italian Social Republic,[232] causing an Italian civil war. The Western Allies fought through several lines until reaching the main German defensive line in mid-November.[233]
+
+ German operations in the Atlantic also suffered. By May 1943, as Allied counter-measures became increasingly effective, the resulting sizeable German submarine losses forced a temporary halt of the German Atlantic naval campaign.[234] In November 1943, Franklin D. Roosevelt and Winston Churchill met with Chiang Kai-shek in Cairo and then with Joseph Stalin in Tehran.[235] The former conference determined the post-war return of Japanese territory[236] and the military planning for the Burma Campaign,[237] while the latter included agreement that the Western Allies would invade Europe in 1944 and that the Soviet Union would declare war on Japan within three months of Germany's defeat.[238]
+
+ From November 1943, during the seven-week Battle of Changde, the Chinese forced Japan to fight a costly war of attrition, while awaiting Allied relief.[239][240][241] In January 1944, the Allies launched a series of attacks in Italy against the line at Monte Cassino and tried to outflank it with landings at Anzio.[242]
+
+ On 27 January 1944, Soviet troops launched a major offensive that expelled German forces from the Leningrad region, thereby ending the most lethal siege in history.[243] The following Soviet offensive was halted on the pre-war Estonian border by the German Army Group North aided by Estonians hoping to re-establish national independence. This delay slowed subsequent Soviet operations in the Baltic Sea region.[244] By late May 1944, the Soviets had liberated Crimea, largely expelled Axis forces from Ukraine, and made incursions into Romania, which were repulsed by the Axis troops.[245] The Allied offensives in Italy had succeeded and, at the expense of allowing several German divisions to retreat, on 4 June Rome was captured.[246]
+
+ The Allies had mixed success in mainland Asia. In March 1944, the Japanese launched the first of two invasions, an operation against British positions in Assam, India,[247] and soon besieged Commonwealth positions at Imphal and Kohima.[248] In May 1944, British forces mounted a counter-offensive that drove Japanese troops back to Burma by July,[248] and Chinese forces that had invaded northern Burma in late 1943 besieged Japanese troops in Myitkyina.[249] The second Japanese invasion of China aimed to destroy China's main fighting forces, secure railways between Japanese-held territory and capture Allied airfields.[250] By June, the Japanese had conquered the province of Henan and begun a new attack on Changsha in Hunan province.[251]
+
+ On 6 June 1944 (known as D-Day), after three years of Soviet pressure,[252] the Western Allies invaded northern France. After reassigning several Allied divisions from Italy, they also attacked southern France.[253] These landings were successful, and led to the defeat of the German Army units in France. Paris was liberated on 25 August by the local resistance assisted by the Free French Forces, both led by General Charles de Gaulle,[254] and the Western Allies continued to push back German forces in western Europe during the latter part of the year. An attempt to advance into northern Germany, spearheaded by a major airborne operation in the Netherlands, failed.[255] After that, the Western Allies slowly pushed into Germany, but failed to cross the Rur river in a large offensive. In Italy, the Allied advance also slowed due to the last major German defensive line.[256]
+
+ On 22 June, the Soviets launched a strategic offensive in Belarus ("Operation Bagration") that destroyed the German Army Group Centre almost completely.[257] Soon after that, another Soviet strategic offensive forced German troops from Western Ukraine and Eastern Poland. The Soviets formed the Polish Committee of National Liberation to control territory in Poland and combat the Polish Armia Krajowa; the Soviet Red Army remained in the Praga district on the other side of the Vistula and watched passively as the Germans quelled the Warsaw Uprising initiated by the Armia Krajowa.[258] The national uprising in Slovakia was also quelled by the Germans.[259] The Soviet Red Army's strategic offensive in eastern Romania cut off and destroyed the considerable German troops there and triggered a successful coup d'état in Romania and in Bulgaria, followed by those countries' shift to the Allied side.[260]
+
+ In September 1944, Soviet troops advanced into Yugoslavia and forced the rapid withdrawal of German Army Groups E and F in Greece, Albania and Yugoslavia to rescue them from being cut off.[261] By this point, the Communist-led Partisans under Marshal Josip Broz Tito, who had led an increasingly successful guerrilla campaign against the occupation since 1941, controlled much of the territory of Yugoslavia and engaged in delaying efforts against German forces further south. In northern Serbia, the Soviet Red Army, with limited support from Bulgarian forces, assisted the Partisans in a joint liberation of the capital city of Belgrade on 20 October. A few days later, the Soviets launched a massive assault against German-occupied Hungary that lasted until the fall of Budapest in February 1945.[262] In contrast to the impressive Soviet victories in the Balkans, bitter Finnish resistance to the Soviet offensive in the Karelian Isthmus denied the Soviets occupation of Finland and led to a Soviet-Finnish armistice on relatively mild conditions,[263] although Finland was forced to fight its former ally Germany.[264]
+
+ By the start of July 1944, Commonwealth forces in Southeast Asia had repelled the Japanese sieges in Assam, pushing the Japanese back to the Chindwin River[265] while the Chinese captured Myitkyina. In September 1944, Chinese forces captured Mount Song and reopened the Burma Road.[266] In China, the Japanese had more successes, having finally captured Changsha in mid-June and the city of Hengyang by early August.[267] Soon after, they invaded the province of Guangxi, winning major engagements against Chinese forces at Guilin and Liuzhou by the end of November[268] and successfully linking up their forces in China and Indochina by mid-December.[269]
+
+ In the Pacific, US forces continued to press back the Japanese perimeter. In mid-June 1944, they began their offensive against the Mariana and Palau islands, and decisively defeated Japanese forces in the Battle of the Philippine Sea. These defeats led to the resignation of the Japanese Prime Minister, Hideki Tojo, and provided the United States with air bases to launch intensive heavy bomber attacks on the Japanese home islands. In late October, American forces invaded the Filipino island of Leyte; soon after, Allied naval forces scored another large victory in the Battle of Leyte Gulf, one of the largest naval battles in history.[270]
+
+ On 16 December 1944, Germany made a last attempt on the Western Front by using most of its remaining reserves to launch a massive counter-offensive in the Ardennes and along the Franco-German border, seeking to split the Western Allies, encircle large portions of Western Allied troops, and capture their primary supply port at Antwerp to prompt a political settlement.[271] By January, the offensive had been repulsed with no strategic objectives fulfilled.[271] In Italy, the Western Allies remained stalemated at the German defensive line. In mid-January 1945, the Soviets and Poles attacked in Poland, pushing from the Vistula to the Oder river in Germany, and overran East Prussia.[272] On 4 February Soviet, British, and US leaders met for the Yalta Conference. They agreed on the occupation of post-war Germany, and on when the Soviet Union would join the war against Japan.[273]
+
+ In February, the Soviets entered Silesia and Pomerania, while the Western Allies entered western Germany and closed to the Rhine river. By March, the Western Allies crossed the Rhine north and south of the Ruhr, encircling the German Army Group B.[274] In early March, in an attempt to protect its last oil reserves in Hungary and to retake Budapest, Germany launched its last major offensive against Soviet troops near Lake Balaton. Within two weeks the offensive had been repulsed, and the Soviets advanced to Vienna and captured the city. In early April, Soviet troops captured Königsberg, while the Western Allies finally pushed forward in Italy and swept across western Germany, capturing Hamburg and Nuremberg. American and Soviet forces met at the Elbe river on 25 April, leaving several unoccupied pockets in southern Germany and around Berlin.
+
+ Soviet and Polish forces stormed and captured Berlin in late April. In Italy, German forces surrendered on 29 April. On 30 April, the Reichstag was captured, signalling the military defeat of Nazi Germany;[275] the Berlin garrison surrendered on 2 May.
+
+ Several changes in leadership occurred during this period. On 12 April, President Roosevelt died and was succeeded by Harry S. Truman. Benito Mussolini was killed by Italian partisans on 28 April.[276] Two days later, Hitler committed suicide in besieged Berlin, and he was succeeded by Grand Admiral Karl Dönitz.[277]
+ The total and unconditional surrender of Germany was signed on 7 and 8 May, to be effective by the end of 8 May.[278] German Army Group Centre resisted in Prague until 11 May.[279]
+
+ In the Pacific theatre, American forces accompanied by the forces of the Philippine Commonwealth advanced in the Philippines, clearing Leyte by the end of April 1945. They landed on Luzon in January 1945 and recaptured Manila in March. Fighting continued on Luzon, Mindanao, and other islands of the Philippines until the end of the war.[280] Meanwhile, the United States Army Air Forces launched a massive firebombing campaign of strategic cities in Japan in an effort to destroy Japanese war industry and civilian morale. A devastating bombing raid on Tokyo of 9–10 March was the deadliest conventional bombing raid in history.[281]
+
+ In May 1945, Australian troops landed in Borneo, over-running the oilfields there. British, American, and Chinese forces defeated the Japanese in northern Burma in March, and the British pushed on to reach Rangoon by 3 May.[282] Chinese forces launched a counterattack in the Battle of West Hunan between 6 April and 7 June 1945. American naval and amphibious forces also moved towards Japan, taking Iwo Jima by March, and Okinawa by the end of June.[283] At the same time, American submarines cut off Japanese imports, drastically reducing Japan's ability to supply its overseas forces.[284]
+
+ On 11 July, Allied leaders met in Potsdam, Germany. They confirmed earlier agreements about Germany,[285] and the American, British and Chinese governments reiterated the demand for unconditional surrender of Japan, specifically stating that "the alternative for Japan is prompt and utter destruction".[286] During this conference, the United Kingdom held its general election, and Clement Attlee replaced Churchill as Prime Minister.[287]
+
+ The call for unconditional surrender was rejected by the Japanese government, which believed it would be capable of negotiating for more favourable surrender terms.[288] In early August, the United States dropped atomic bombs on the Japanese cities of Hiroshima and Nagasaki. Between the two bombings, the Soviets, pursuant to the Yalta agreement, invaded Japanese-held Manchuria and quickly defeated the Kwantung Army, which was the largest Japanese fighting force.[289] These two events persuaded previously adamant Imperial Army leaders to accept surrender terms.[290] The Red Army also captured the southern part of Sakhalin Island and the Kuril Islands. On 15 August 1945, Japan surrendered, with the surrender documents finally signed at Tokyo Bay on the deck of the American battleship USS Missouri on 2 September 1945, ending the war.[291]
+
+ The Allies established occupation administrations in Austria and Germany. The former became a neutral state, non-aligned with any political bloc. The latter was divided into western and eastern occupation zones controlled by the Western Allies and the Soviet Union. A denazification programme in Germany led to the prosecution of Nazi war criminals in the Nuremberg trials and the removal of ex-Nazis from power, although this policy moved towards amnesty and re-integration of ex-Nazis into West German society.[292]
+
+ Germany lost a quarter of its pre-war (1937) territory. Among the eastern territories, Silesia, Neumark and most of Pomerania were taken over by Poland,[293] and East Prussia was divided between Poland and the Soviet Union, followed by the expulsion to Germany of the nine million Germans from these provinces,[294][295] as well as three million Germans from the Sudetenland in Czechoslovakia. By the 1950s, one-fifth of West Germans were refugees from the east. The Soviet Union also took over the Polish provinces east of the Curzon line,[296] from which 2 million Poles were expelled;[295][297] north-east Romania,[298][299] parts of eastern Finland,[300] and the three Baltic states were incorporated into the Soviet Union.[301][302]
+
+ In an effort to maintain world peace,[303] the Allies formed the United Nations, which officially came into existence on 24 October 1945,[304] and adopted the Universal Declaration of Human Rights in 1948 as a common standard for all member nations.[305] The great powers that were the victors of the war—France, China, the United Kingdom, the Soviet Union and the United States—became the permanent members of the UN's Security Council.[306] The five permanent members remain so to the present, although there have been two seat changes, between the Republic of China and the People's Republic of China in 1971, and between the Soviet Union and its successor state, the Russian Federation, following the dissolution of the Soviet Union in 1991. The alliance between the Western Allies and the Soviet Union had begun to deteriorate even before the war was over.[307]
+
+ Germany had been de facto divided, and two independent states, the Federal Republic of Germany (West Germany) and the German Democratic Republic (East Germany),[308] were created within the borders of Allied and Soviet occupation zones. The rest of Europe was also divided into Western and Soviet spheres of influence.[309] Most eastern and central European countries fell into the Soviet sphere, which led to establishment of Communist-led regimes, with full or partial support of the Soviet occupation authorities. As a result, East Germany,[310] Poland, Hungary, Romania, Czechoslovakia, and Albania[311] became Soviet satellite states. Communist Yugoslavia conducted a fully independent policy, causing tension with the Soviet Union.[312]
+
+ Post-war division of the world was formalised by two international military alliances, the United States-led NATO and the Soviet-led Warsaw Pact.[313] The long period of political tensions and military competition between them, the Cold War, would be accompanied by an unprecedented arms race and proxy wars.[314]
+
+ In Asia, the United States led the occupation of Japan and administered Japan's former islands in the Western Pacific, while the Soviets annexed South Sakhalin and the Kuril Islands.[315] Korea, formerly under Japanese rule, was divided and occupied by the Soviet Union in the North and the United States in the South between 1945 and 1948. Separate republics emerged on both sides of the 38th parallel in 1948, each claiming to be the legitimate government for all of Korea, which led ultimately to the Korean War.[316]
+
+ In China, nationalist and communist forces resumed the civil war in June 1946. Communist forces were victorious and established the People's Republic of China on the mainland, while nationalist forces retreated to Taiwan in 1949.[317] In the Middle East, the Arab rejection of the United Nations Partition Plan for Palestine and the creation of Israel marked the escalation of the Arab–Israeli conflict. While European powers attempted to retain some or all of their colonial empires, their losses of prestige and resources during the war rendered this unsuccessful, leading to decolonisation.[318][319]
+
+ The global economy suffered heavily from the war, although participating nations were affected differently. The United States emerged much richer than any other nation, leading to a baby boom, and by 1950 its gross domestic product per person was much higher than that of any of the other powers, and it dominated the world economy.[320] The UK and US pursued a policy of industrial disarmament in Western Germany in the years 1945–1948.[321] Because of international trade interdependencies this led to European economic stagnation and delayed European recovery for several years.[322][323]
+
+ Recovery began with the mid-1948 currency reform in Western Germany, and was sped up by the liberalisation of European economic policy that the Marshall Plan (1948–1951) both directly and indirectly caused.[324][325] The post-1948 West German recovery has been called the German economic miracle.[326] Italy also experienced an economic boom[327] and the French economy rebounded.[328] By contrast, the United Kingdom was in a state of economic ruin,[329] and although receiving a quarter of the total Marshall Plan assistance, more than any other European country,[330] it continued in relative economic decline for decades.[331]
+
+ The Soviet Union, despite enormous human and material losses, also experienced rapid increase in production in the immediate post-war era.[332] Japan recovered much later.[333] China returned to its pre-war industrial production by 1952.[334]
+
+ Estimates for the total number of casualties in the war vary, because many deaths went unrecorded.[335] Most suggest that some 60 million people died in the war, including about 20 million military personnel and 40 million civilians.[336][337][338]
+ Many of the civilians died because of deliberate genocide, massacres, mass bombings, disease, and starvation.
+
+ The Soviet Union alone lost around 27 million people during the war,[339] including 8.7 million military and 19 million civilian deaths.[340] A quarter of the people in the Soviet Union were wounded or killed.[341] Germany sustained 5.3 million military losses, mostly on the Eastern Front and during the final battles in Germany.[342]
+
+ An estimated 11[343] to 17 million[344] civilians died as a direct or as an indirect result of Nazi racist policies, including the mass killing of around 6 million Jews, along with Roma, homosexuals, at least 1.9 million ethnic Poles[345][346] and millions of other Slavs (including Russians, Ukrainians and Belarusians), and other ethnic and minority groups.[347][344] Between 1941 and 1945, more than 200,000 ethnic Serbs, along with Roma and Jews, were persecuted and murdered by the Axis-aligned Croatian Ustaše in Yugoslavia.[348] Also, more than 100,000 Poles were massacred by the Ukrainian Insurgent Army in the Volhynia massacres, between 1943 and 1945.[349] At the same time about 10,000–15,000 Ukrainians were killed by the Polish Home Army and other Polish units, in reprisal attacks.[350]
+
+ In Asia and the Pacific, between 3 million and more than 10 million civilians, mostly Chinese (estimated at 7.5 million[351]), were killed by the Japanese occupation forces.[352] The most infamous Japanese atrocity was the Nanking Massacre, in which between 50,000 and 300,000 Chinese civilians were raped and murdered.[353] Mitsuyoshi Himeta reported that 2.7 million casualties occurred during the Sankō Sakusen. General Yasuji Okamura implemented the policy in Heipei and Shantung.[354]
+
+ Axis forces employed biological and chemical weapons. The Imperial Japanese Army used a variety of such weapons during its invasion and occupation of China (see Unit 731)[355][356] and in early conflicts against the Soviets.[357] Both the Germans and the Japanese tested such weapons against civilians,[358] and sometimes on prisoners of war.[359]
+
+ The Soviet Union was responsible for the Katyn massacre of 22,000 Polish officers,[360] and the imprisonment or execution of thousands of political prisoners by the NKVD, along with mass civilian deportations to Siberia, in the Baltic states and eastern Poland annexed by the Red Army.[361]
+
+ The mass bombing of cities in Europe and Asia has often been called a war crime, although no positive or specific customary international humanitarian law with respect to aerial warfare existed before or during World War II.[362] The USAAF firebombed a total of 67 Japanese cities, killing 393,000 civilians and destroying 65% of built-up areas.[363]
+
+ Nazi Germany was responsible for the Holocaust (which killed approximately 6 million Jews) as well as for killing 2.7 million ethnic Poles[364] and 4 million others who were deemed "unworthy of life" (including the disabled and mentally ill, Soviet prisoners of war, Romani, homosexuals, Freemasons, and Jehovah's Witnesses) as part of a programme of deliberate extermination, in effect becoming a "genocidal state".[365] Soviet POWs were kept in especially unbearable conditions, and 3.6 million of the 5.7 million Soviet POWs died in Nazi camps during the war.[366][367] In addition to concentration camps, death camps were created in Nazi Germany to exterminate people on an industrial scale. Nazi Germany extensively used forced labourers; about 12 million Europeans from German-occupied countries were abducted and used as a slave work force in German industry, agriculture and war economy.[368]
+
+ The Soviet Gulag became a de facto system of deadly camps during 1942–43, when wartime privation and hunger caused numerous deaths of inmates,[369] including foreign citizens of Poland and other countries occupied in 1939–40 by the Soviet Union, as well as Axis POWs.[370] By the end of the war, most Soviet POWs liberated from Nazi camps and many repatriated civilians were detained in special filtration camps where they were subjected to NKVD evaluation, and 226,127 were sent to the Gulag as real or perceived Nazi collaborators.[371]
+
+ Japanese prisoner-of-war camps, many of which were used as labour camps, also had high death rates. The International Military Tribunal for the Far East found the death rate of Western prisoners was 27 per cent (for American POWs, 37 per cent),[372] seven times that of POWs under the Germans and Italians.[373] While 37,583 prisoners from the UK, 28,500 from the Netherlands, and 14,473 from the United States were released after the surrender of Japan, the number of Chinese released was only 56.[374]
+
+ At least five million Chinese civilians from northern China and Manchukuo were enslaved between 1935 and 1941 by the East Asia Development Board, or Kōain, for work in mines and war industries. After 1942, the number reached 10 million.[375] In Java, between 4 and 10 million rōmusha (Japanese: "manual labourers"), were forced to work by the Japanese military. About 270,000 of these Javanese labourers were sent to other Japanese-held areas in South East Asia, and only 52,000 were repatriated to Java.[376]
+
+ In Europe, occupation came under two forms. In Western, Northern, and Central Europe (France, Norway, Denmark, the Low Countries, and the annexed portions of Czechoslovakia) Germany established economic policies through which it collected roughly 69.5 billion Reichsmarks (27.8 billion US dollars) by the end of the war; this figure does not include the sizeable plunder of industrial products, military equipment, raw materials and other goods.[377] Thus, the income from occupied nations was over 40 per cent of the income Germany collected from taxation, a share which increased to nearly 40 per cent of total German income as the war went on.[378]
+
+ In the East, the intended gains of Lebensraum were never attained as fluctuating front-lines and Soviet scorched earth policies denied resources to the German invaders.[379] Unlike in the West, the Nazi racial policy encouraged extreme brutality against what it considered to be the "inferior people" of Slavic descent; most German advances were thus followed by mass executions.[380] Although resistance groups formed in most occupied territories, they did not significantly hamper German operations in either the East[381] or the West[382] until late 1943.
+
+ In Asia, Japan termed nations under its occupation as being part of the Greater East Asia Co-Prosperity Sphere, essentially a Japanese hegemony which it claimed was for purposes of liberating colonised peoples.[383] Although Japanese forces were sometimes welcomed as liberators from European domination, Japanese war crimes frequently turned local public opinion against them.[384] During Japan's initial conquest it captured 4,000,000 barrels (640,000 m³) of oil (~5.5×10^5 tonnes) left behind by retreating Allied forces, and by 1943 was able to get production in the Dutch East Indies up to 50 million barrels (~6.8×10^6 t), 76 per cent of its 1940 output rate.[384]
+
+ In Europe, before the outbreak of the war, the Allies had significant advantages in both population and economics. In 1938, the Western Allies (United Kingdom, France, Poland and the British Dominions) had a 30 per cent larger population and a 30 per cent higher gross domestic product than the European Axis powers (Germany and Italy); if colonies are included, the Allies had more than a 5:1 advantage in population and a nearly 2:1 advantage in GDP.[385] In Asia at the same time, China had roughly six times the population of Japan but only an 89 per cent higher GDP; this is reduced to three times the population and only a 38 per cent higher GDP if Japanese colonies are included.[385]
246
+
247
+ The United States produced about two-thirds of all the munitions used by the Allies in WWII, including warships, transports, warplanes, artillery, tanks, trucks, and ammunition.[386]
248
+ Though the Allies' economic and population advantages were largely mitigated during the initial rapid blitzkrieg attacks of Germany and Japan, they became the decisive factor by 1942, after the United States and Soviet Union joined the Allies, as the war largely settled into one of attrition.[387] While the Allies' ability to out-produce the Axis is often attributed to the Allies having more access to natural resources, other factors, such as Germany and Japan's reluctance to employ women in the labour force,[388] Allied strategic bombing,[389] and Germany's late shift to a war economy[390] contributed significantly. Additionally, neither Germany nor Japan planned to fight a protracted war, and had not equipped themselves to do so.[391] To improve their production, Germany and Japan used millions of slave labourers;[392] Germany used about 12 million people, mostly from Eastern Europe,[368] while Japan used more than 18 million people in Far East Asia.[375][376]
249
+
250
+ Aircraft were used for reconnaissance, as fighters and bombers, and for ground support, and each role advanced considerably. Innovations included airlift (the capability to quickly move limited high-priority supplies, equipment, and personnel)[393] and strategic bombing (the bombing of enemy industrial and population centres to destroy the enemy's ability to wage war).[394] Anti-aircraft weaponry also advanced, including defences such as radar and surface-to-air artillery. The use of the jet aircraft was pioneered and, though late introduction meant it had little impact, it led to jets becoming standard in air forces worldwide.[395] Although guided missiles were being developed, they were not advanced enough to reliably target aircraft until some years after the war.
251
+
252
+ Advances were made in nearly every aspect of naval warfare, most notably with aircraft carriers and submarines. Although aeronautical warfare had relatively little success at the start of the war, actions at Taranto, Pearl Harbor, and the Coral Sea established the carrier as the dominant capital ship in place of the battleship.[396][397][398] In the Atlantic, escort carriers proved to be a vital part of Allied convoys, increasing the effective protection radius and helping to close the Mid-Atlantic gap.[399] Carriers were also more economical than battleships because of the relatively low cost of aircraft[400] and because they did not need to be as heavily armoured.[401] Submarines, which had proved to be an effective weapon during the First World War,[402] were anticipated by all sides to be important in the second. The British focused development on anti-submarine weaponry and tactics, such as sonar and convoys, while Germany focused on improving its offensive capability, with designs such as the Type VII submarine and wolfpack tactics.[403] Gradually, improving Allied technologies such as the Leigh light, hedgehog, squid, and homing torpedoes proved victorious over the German submarines.
253
+
254
+ Land warfare changed from the static front lines of trench warfare of World War I, which had relied on improved artillery that outmatched the speed of both infantry and cavalry, to increased mobility and combined arms. The tank, which had been used predominantly for infantry support in the First World War, had evolved into the primary weapon.[404] In the late 1930s, tank design was considerably more advanced than it had been during World War I,[405] and advances continued throughout the war with increases in speed, armour and firepower. At the start of the war, most commanders thought enemy tanks should be met by tanks with superior specifications.[406] This idea was challenged by the poor performance of the relatively light early tank guns against armour and by the German doctrine of avoiding tank-versus-tank combat. This, together with Germany's use of combined arms, was among the key elements of their highly successful blitzkrieg tactics across Poland and France.[404] Many means of destroying tanks, including indirect artillery, anti-tank guns (both towed and self-propelled), mines, short-range infantry anti-tank weapons, and other tanks were used.[406] Even with large-scale mechanisation, infantry remained the backbone of all forces,[407] and throughout the war, most infantry were equipped similarly to World War I.[408] Portable machine guns spread, a notable example being the German MG34, as did various submachine guns, which were suited to close combat in urban and jungle settings.[408] The assault rifle, a late-war development incorporating many features of the rifle and submachine gun, became the standard postwar infantry weapon for most armed forces.[409]
255
+
256
+ Most major belligerents attempted to solve the problems of complexity and security involved in using large codebooks for cryptography by designing ciphering machines, the best known being the German Enigma machine.[410] The development of SIGINT (signals intelligence) and cryptanalysis made it possible to counter these machines through decryption. Notable examples were the Allied decryption of Japanese naval codes[411] and British Ultra, a pioneering method for decoding Enigma benefiting from information given to the United Kingdom by the Polish Cipher Bureau, which had been decoding early versions of Enigma before the war.[412] Another aspect of military intelligence was deception, which the Allies used to great effect in operations such as Mincemeat and Bodyguard.[411][413]
257
+
258
+ Other technological and engineering feats achieved during, or as a result of, the war include the world's first programmable computers (Z3, Colossus, and ENIAC), guided missiles and modern rockets, the Manhattan Project's development of nuclear weapons, operations research, and the development of artificial harbours and oil pipelines under the English Channel. Penicillin was first mass-produced and used during the war (see Stabilization and mass production of penicillin).[414]
259
+
en/5326.html.txt ADDED
@@ -0,0 +1,259 @@
13
+ World War II (WWII or WW2), also known as the Second World War, was a global war that lasted from 1939 to 1945. It involved the vast majority of the world's countries—including all the great powers—forming two opposing military alliances: the Allies and the Axis. In a state of total war, directly involving more than 100 million people from more than 30 countries, the major participants threw their entire economic, industrial, and scientific capabilities behind the war effort, blurring the distinction between civilian and military resources. World War II was the deadliest conflict in human history, marked by 70 to 85 million fatalities. Tens of millions of people died due to genocides (including the Holocaust), deliberate starvation, massacres, and disease. Aircraft played a major role in the conflict, including the strategic bombing of population centres and the only uses of nuclear weapons in war.
14
+
15
+ World War II is generally considered to have begun on 1 September 1939, with the invasion of Poland by Germany and subsequent declarations of war on Germany by France and the United Kingdom. From late 1939 to early 1941, in a series of campaigns and treaties, Germany conquered or controlled much of continental Europe, and formed the Axis alliance with Italy and Japan. Under the Molotov–Ribbentrop Pact of August 1939, Germany and the Soviet Union partitioned and annexed territories of their European neighbours: Poland, Finland, Romania and the Baltic states. Following the onset of campaigns in North Africa and East Africa, and the Fall of France in mid-1940, the war continued primarily between the European Axis powers and the British Empire, with war in the Balkans, the aerial Battle of Britain, the Blitz, and the Battle of the Atlantic. On 22 June 1941, Germany led the European Axis powers in an invasion of the Soviet Union, opening the largest land theatre of war in history and trapping the Axis, crucially the German Wehrmacht, in a war of attrition.
16
+
17
+ Japan, which aimed to dominate Asia and the Pacific, was at war with the Republic of China by 1937. In December 1941, Japan launched a surprise attack on the United States as well as European colonies in East Asia and the Pacific. Following an immediate US declaration of war against Japan, supported by one from the UK, the European Axis powers declared war on the United States in solidarity with their ally. Japan soon captured much of the Western Pacific, but its advances were halted in 1942 after Japan lost the critical Battle of Midway; later, Germany and Italy were defeated in North Africa and at Stalingrad in the Soviet Union. Key setbacks in 1943—which included a series of German defeats on the Eastern Front, the Allied invasions of Sicily and Italy, and Allied offensives in the Pacific—cost the Axis its initiative and forced it into strategic retreat on all fronts. In 1944, the Western Allies invaded German-occupied France, while the Soviet Union regained its territorial losses and turned towards Germany and its allies. During 1944 and 1945, the Japanese suffered reversals in mainland Asia, while the Allies crippled the Japanese Navy and captured key Western Pacific islands.
18
+
19
+ The war in Europe concluded with an invasion of Germany by the Western Allies and the Soviet Union, culminating in the capture of Berlin by Soviet troops, the suicide of Adolf Hitler and the German unconditional surrender on 8 May 1945. Following the Potsdam Declaration by the Allies on 26 July 1945 and the refusal of Japan to surrender on its terms, the United States dropped atomic bombs on the Japanese cities of Hiroshima and Nagasaki on 6 and 9 August, respectively. Faced with an imminent invasion of the Japanese archipelago, the possibility of additional atomic bombings, and the Soviet entry into the war against Japan and its invasion of Manchuria on 9 August, Japan announced its intention to surrender on 15 August 1945, cementing total victory in Asia for the Allies. In the wake of the war, Germany and Japan were occupied and war crimes tribunals were conducted against German and Japanese leaders.
20
+
21
+ World War II changed the political alignment and social structure of the globe. The United Nations (UN) was established to foster international co-operation and prevent future conflicts, and the victorious great powers—China, France, the Soviet Union, the United Kingdom, and the United States—became the permanent members of its Security Council. The Soviet Union and the United States emerged as rival superpowers, setting the stage for the nearly half-century-long Cold War. In the wake of European devastation, the influence of its great powers waned, triggering the decolonisation of Africa and Asia. Most countries whose industries had been damaged moved towards economic recovery and expansion. Political integration, especially in Europe, began as an effort to forestall future hostilities, end pre-war enmities and forge a sense of common identity.
22
+
23
+ The start of the war in Europe is generally held to be 1 September 1939,[1][2] beginning with the German invasion of Poland; the United Kingdom and France declared war on Germany two days later. The dates for the beginning of war in the Pacific include the start of the Second Sino-Japanese War on 7 July 1937,[3][4] or even the Japanese invasion of Manchuria on 19 September 1931.[5][6][7]
24
+
25
+ Others follow the British historian A. J. P. Taylor, who held that the Sino-Japanese War and war in Europe and its colonies occurred simultaneously, and the two wars merged in 1941. This article uses the conventional dating. Other starting dates sometimes used for World War II include the Italian invasion of Abyssinia on 3 October 1935.[8] The British historian Antony Beevor views the beginning of World War II as the Battles of Khalkhin Gol fought between Japan and the forces of Mongolia and the Soviet Union from May to September 1939.[9]
26
+
27
+ The exact date of the war's end is also not universally agreed upon. It was generally accepted at the time that the war ended with the armistice of 14 August 1945 (V-J Day), rather than with the formal surrender of Japan on 2 September 1945, which officially ended the war in Asia. A peace treaty with Japan was signed in 1951.[10] A treaty regarding Germany's future allowed the reunification of East and West Germany to take place in 1990 and resolved most post-World War II issues.[11] No formal peace treaty between Japan and the Soviet Union was ever signed.[12]
28
+
29
+ World War I had radically altered the political European map, with the defeat of the Central Powers—including Austria-Hungary, Germany, Bulgaria and the Ottoman Empire—and the 1917 Bolshevik seizure of power in Russia, which eventually led to the founding of the Soviet Union. Meanwhile, the victorious Allies of World War I, such as France, Belgium, Italy, Romania, and Greece, gained territory, and new nation-states were created out of the collapse of Austria-Hungary and the Ottoman and Russian Empires.
30
+
31
+ To prevent a future world war, the League of Nations was created during the 1919 Paris Peace Conference. The organisation's primary goals were to prevent armed conflict through collective security, military and naval disarmament, and settling international disputes through peaceful negotiations and arbitration.
32
+
33
+ Despite strong pacifist sentiment after World War I,[13] its aftermath still caused irredentist and revanchist nationalism in several European states. These sentiments were especially marked in Germany because of the significant territorial, colonial, and financial losses incurred by the Treaty of Versailles. Under the treaty, Germany lost around 13 percent of its home territory and all its overseas possessions, while German annexation of other states was prohibited, reparations were imposed, and limits were placed on the size and capability of the country's armed forces.[14]
34
+
35
+ The German Empire was dissolved in the German Revolution of 1918–1919, and a democratic government, later known as the Weimar Republic, was created. The interwar period saw strife between supporters of the new republic and hardline opponents on both the right and left. Italy, as an Entente ally, had made some post-war territorial gains; however, Italian nationalists were angered that the promises made by the United Kingdom and France to secure Italian entrance into the war were not fulfilled in the peace settlement. From 1922 to 1925, the Fascist movement led by Benito Mussolini seized power in Italy with a nationalist, totalitarian, and class collaborationist agenda that abolished representative democracy, repressed socialist, left-wing and liberal forces, and pursued an aggressive expansionist foreign policy aimed at making Italy a world power, promising the creation of a "New Roman Empire".[15]
36
+
37
+ Adolf Hitler, after an unsuccessful attempt to overthrow the German government in 1923, eventually became the Chancellor of Germany in 1933. He abolished democracy, espousing a radical, racially motivated revision of the world order, and soon began a massive rearmament campaign.[16] Meanwhile, France, to secure its alliance, allowed Italy a free hand in Ethiopia, which Italy desired as a colonial possession. The situation was aggravated in early 1935 when the Territory of the Saar Basin was legally reunited with Germany and Hitler repudiated the Treaty of Versailles, accelerated his rearmament programme, and introduced conscription.[17]
38
+
39
+ The United Kingdom, France and Italy formed the Stresa Front in April 1935 in order to contain Germany; however, that June, the United Kingdom made an independent naval agreement with Germany, easing prior restrictions. The Soviet Union, concerned by Germany's goals of capturing vast areas of Eastern Europe, drafted a treaty of mutual assistance with France. Before taking effect, though, the Franco-Soviet pact was required to go through the bureaucracy of the League of Nations, which rendered it essentially toothless.[18] The United States, concerned with events in Europe and Asia, passed the Neutrality Act in August of the same year.[19]
40
+
41
+ Hitler defied the Versailles and Locarno treaties by remilitarising the Rhineland in March 1936, encountering little opposition due to appeasement.[20] In October 1936, Germany and Italy formed the Rome–Berlin Axis. A month later, Germany and Japan signed the Anti-Comintern Pact, which Italy joined the following year.[21]
42
+
43
+ The Kuomintang (KMT) party in China launched a unification campaign against regional warlords and nominally unified China in the mid-1920s, but was soon embroiled in a civil war against its former Chinese Communist Party allies[22] and new regional warlords. In 1931, an increasingly militaristic Empire of Japan, which had long sought influence in China[23] as the first step of what its government saw as the country's right to rule Asia, staged the Mukden Incident as a pretext to invade Manchuria and establish the puppet state of Manchukuo.[24]
44
+
45
+ China appealed to the League of Nations to stop the Japanese invasion of Manchuria; Japan withdrew from the League after being condemned for its incursion. The two nations then fought several battles, in Shanghai, Rehe and Hebei, until the Tanggu Truce was signed in 1933. Thereafter, Chinese volunteer forces continued the resistance to Japanese aggression in Manchuria, Chahar, and Suiyuan.[25] After the 1936 Xi'an Incident, the Kuomintang and communist forces agreed on a ceasefire to present a united front to oppose Japan.[26]
46
+
47
+ The Second Italo–Ethiopian War was a brief colonial war that began in October 1935 and ended in May 1936. The war began with the invasion of the Ethiopian Empire (also known as Abyssinia) by the armed forces of the Kingdom of Italy (Regno d'Italia), launched from Italian Somaliland and Eritrea.[27] The war resulted in the military occupation of Ethiopia and its annexation into the newly created colony of Italian East Africa (Africa Orientale Italiana, or AOI); in addition, it exposed the weakness of the League of Nations as a force to preserve peace. Both Italy and Ethiopia were member nations, but the League did little when the former clearly violated Article X of the League's Covenant.[28] The United Kingdom and France supported imposing sanctions on Italy for the invasion, but the sanctions were not fully enforced and failed to end the Italian invasion.[29] Italy subsequently dropped its objections to Germany's goal of absorbing Austria.[30]
48
+
49
+ When civil war broke out in Spain, Hitler and Mussolini lent military support to the Nationalist rebels, led by General Francisco Franco. Italy supported the Nationalists to a greater extent than the Nazis did: altogether Mussolini sent to Spain more than 70,000 ground troops and 6,000 aviation personnel, as well as about 720 aircraft.[31] The Soviet Union supported the existing government, the Spanish Republic. More than 30,000 foreign volunteers, known as the International Brigades, also fought against the Nationalists. Both Germany and the Soviet Union used this proxy war as an opportunity to test in combat their most advanced weapons and tactics. The Nationalists won the civil war in April 1939; Franco, now dictator, remained officially neutral during World War II but generally favoured the Axis.[32] His greatest collaboration with Germany was the sending of volunteers to fight on the Eastern Front.[33]
50
+
51
+ In July 1937, Japan captured the former Chinese imperial capital of Peking after instigating the Marco Polo Bridge Incident, which culminated in the Japanese campaign to invade all of China.[34] The Soviets quickly signed a non-aggression pact with China to lend materiel support, effectively ending China's prior co-operation with Germany. From September to November, the Japanese attacked Taiyuan, engaged the Kuomintang Army around Xinkou,[35] and fought Communist forces in Pingxingguan.[36][37] Generalissimo Chiang Kai-shek deployed his best army to defend Shanghai, but, after three months of fighting, Shanghai fell. The Japanese continued to push the Chinese forces back, capturing the capital Nanking in December 1937. After the fall of Nanking, tens of thousands if not hundreds of thousands of Chinese civilians and disarmed combatants were murdered by the Japanese.[38][39]
52
+
53
+ In March 1938, Nationalist Chinese forces won their first major victory at Taierzhuang, but then the city of Xuzhou was taken by the Japanese in May.[40] In June 1938, Chinese forces stalled the Japanese advance by flooding the Yellow River; this manoeuvre bought time for the Chinese to prepare their defences at Wuhan, but the city was taken by October.[41] Japanese military victories did not bring about the collapse of Chinese resistance that Japan had hoped to achieve; instead, the Chinese government relocated inland to Chongqing and continued the war.[42][43]
54
+
55
+ In the mid-to-late 1930s, Japanese forces in Manchukuo had sporadic border clashes with the Soviet Union and Mongolia. The Japanese doctrine of Hokushin-ron, which emphasised Japan's expansion northward, was favoured by the Imperial Army during this time. With the Japanese defeat at Khalkhin Gol in 1939, the ongoing Second Sino-Japanese War,[44] and ally Nazi Germany pursuing neutrality with the Soviets, this policy would prove difficult to maintain. Japan and the Soviet Union eventually signed a Neutrality Pact in April 1941, and Japan adopted the doctrine of Nanshin-ron, promoted by the Navy, which took its focus southward, eventually leading to its war with the United States and the Western Allies.[45][46]
56
+
57
+ In Europe, Germany and Italy were becoming more aggressive. In March 1938, Germany annexed Austria, again provoking little response from other European powers.[47] Encouraged, Hitler began pressing German claims on the Sudetenland, an area of Czechoslovakia with a predominantly ethnic German population. Soon the United Kingdom and France followed the appeasement policy of British Prime Minister Neville Chamberlain and conceded this territory to Germany in the Munich Agreement, which was made against the wishes of the Czechoslovak government, in exchange for a promise of no further territorial demands.[48] Soon afterwards, Germany and Italy forced Czechoslovakia to cede additional territory to Hungary, and Poland annexed Czechoslovakia's Zaolzie region.[49]
58
+
59
+ Although all of Germany's stated demands had been satisfied by the agreement, privately Hitler was furious that British interference had prevented him from seizing all of Czechoslovakia in one operation. In subsequent speeches Hitler attacked British and Jewish "war-mongers" and in January 1939 secretly ordered a major build-up of the German navy to challenge British naval supremacy. In March 1939, Germany invaded the remainder of Czechoslovakia and subsequently split it into the German Protectorate of Bohemia and Moravia and a pro-German client state, the Slovak Republic.[50] Hitler also delivered an ultimatum to Lithuania on 20 March 1939, forcing the concession of the Klaipėda Region, formerly the German Memelland.[51]
60
+
61
+ Greatly alarmed and with Hitler making further demands on the Free City of Danzig, the United Kingdom and France guaranteed their support for Polish independence; when Italy conquered Albania in April 1939, the same guarantee was extended to Romania and Greece.[52] Shortly after the Franco-British pledge to Poland, Germany and Italy formalised their own alliance with the Pact of Steel.[53] Hitler accused the United Kingdom and Poland of trying to "encircle" Germany and renounced the Anglo-German Naval Agreement and the German–Polish Non-Aggression Pact.[54]
62
+
63
+ The situation reached a general crisis in late August as German troops continued to mobilise against the Polish border. On 23 August, when tripartite negotiations about a military alliance between France, the United Kingdom and Soviet Union stalled,[55] the Soviet Union signed a non-aggression pact with Germany.[56] This pact had a secret protocol that defined German and Soviet "spheres of influence" (western Poland and Lithuania for Germany; eastern Poland, Finland, Estonia, Latvia and Bessarabia for the Soviet Union), and raised the question of continuing Polish independence.[57] The pact neutralised the possibility of Soviet opposition to a campaign against Poland and assured that Germany would not have to face the prospect of a two-front war, as it had in World War I. Immediately after that, Hitler ordered the attack to proceed on 26 August, but upon hearing that the United Kingdom had concluded a formal mutual assistance pact with Poland, and that Italy would maintain neutrality, he decided to delay it.[58]
64
+
65
+ In response to British requests for direct negotiations to avoid war, Germany made demands on Poland, which only served as a pretext to worsen relations.[59] On 29 August, Hitler demanded that a Polish plenipotentiary immediately travel to Berlin to negotiate the handover of Danzig, and to allow a plebiscite in the Polish Corridor in which the German minority would vote on secession.[59] The Poles refused to comply with the German demands, and on the night of 30–31 August, in a stormy meeting with the British ambassador Neville Henderson, the German Foreign Minister Joachim von Ribbentrop declared that Germany considered its claims rejected.[60]
66
+
67
+ On 1 September 1939, Germany invaded Poland after having staged several false flag border incidents as a pretext to initiate the invasion.[61] The first German attack of the war came against the Polish defences at Westerplatte.[62] The United Kingdom responded with an ultimatum to Germany to cease military operations, and on 3 September, after the ultimatum was ignored, France and Britain declared war on Germany, followed by Australia, New Zealand, South Africa and Canada. The alliance provided no direct military support to Poland, outside of a cautious French probe into the Saarland.[63] The Western Allies also began a naval blockade of Germany, which aimed to damage the country's economy and its war effort.[64] Germany responded by ordering U-boat warfare against Allied merchant and warships, which would later escalate into the Battle of the Atlantic.[65]
68
+
69
+ On 8 September, German troops reached the suburbs of Warsaw. The Polish counter-offensive to the west halted the German advance for several days, but it was outflanked and encircled by the Wehrmacht. Remnants of the Polish army broke through to besieged Warsaw. On 17 September 1939, after signing a cease-fire with Japan, the Soviets invaded Eastern Poland[66] under the pretext that the Polish state had ostensibly ceased to exist.[67] On 27 September, the Warsaw garrison surrendered to the Germans, and the last large operational unit of the Polish Army surrendered on 6 October. Despite the military defeat, Poland never surrendered; instead it formed the Polish government-in-exile and a clandestine state apparatus remained in occupied Poland.[68] A significant part of Polish military personnel evacuated to Romania and the Baltic countries; many of them would fight against the Axis in other theatres of the war.[69]
70
+
71
+ Germany annexed the western and occupied the central part of Poland, and the Soviet Union annexed its eastern part; small shares of Polish territory were transferred to Lithuania and Slovakia. On 6 October, Hitler made a public peace overture to the United Kingdom and France but said that the future of Poland was to be determined exclusively by Germany and the Soviet Union. The proposal was rejected,[60] and Hitler ordered an immediate offensive against France,[70] which would be postponed until the spring of 1940 due to bad weather.[71][72][73]
72
+
73
+ The Soviet Union forced the Baltic countries—Estonia, Latvia and Lithuania, the states that were in the Soviet "sphere of influence" under the Molotov–Ribbentrop Pact—to sign "mutual assistance pacts" that stipulated stationing Soviet troops in these countries. Soon after, significant Soviet military contingents were moved there.[74][75][76] Finland refused to sign a similar pact and rejected ceding part of its territory to the Soviet Union. The Soviet Union invaded Finland in November 1939,[77] and was subsequently expelled from the League of Nations.[78] Despite overwhelming numerical superiority, Soviet military success was modest, and the Finno-Soviet war ended in March 1940 with minimal Finnish concessions.[79]
74
+
75
+ In June 1940, the Soviet Union forcibly annexed Estonia, Latvia and Lithuania,[75] and the disputed Romanian regions of Bessarabia, northern Bukovina and Hertza. Meanwhile, Nazi-Soviet political rapprochement and economic co-operation[80][81] gradually stalled,[82][83] and both states began preparations for war.[84]
76
+
77
+ In April 1940, Germany invaded Denmark and Norway to protect shipments of iron ore from Sweden, which the Allies were attempting to cut off.[85] Denmark capitulated after a few hours, and Norway was conquered within two months[86] despite Allied support. British discontent over the Norwegian campaign led to the appointment of Winston Churchill as Prime Minister on 10 May 1940.[87]
78
+
79
+ On the same day, Germany launched an offensive against France. To circumvent the strong Maginot Line fortifications on the Franco-German border, Germany directed its attack at the neutral nations of Belgium, the Netherlands, and Luxembourg.[88] The Germans carried out a flanking manoeuvre through the Ardennes region,[89] which was mistakenly perceived by the Allies as an impenetrable natural barrier against armoured vehicles.[90][91] By successfully implementing new blitzkrieg tactics, the Wehrmacht rapidly advanced to the Channel and cut off the Allied forces in Belgium, trapping the bulk of the Allied armies in a cauldron on the Franco-Belgian border near Lille. The United Kingdom was able to evacuate a significant number of Allied troops from the continent by early June, although they had to abandon almost all of their equipment.[92]
80
+
81
+ On 10 June, Italy invaded France, declaring war on both France and the United Kingdom.[93] The Germans turned south against the weakened French army, and Paris fell to them on 14 June. Eight days later France signed an armistice with Germany; it was divided into German and Italian occupation zones,[94] and an unoccupied rump state under the Vichy Regime, which, though officially neutral, was generally aligned with Germany. France kept its fleet, which the United Kingdom attacked on 3 July in an attempt to prevent its seizure by Germany.[95]
82
+
83
+ The Battle of Britain[96] began in early July with Luftwaffe attacks on shipping and harbours.[97] The United Kingdom rejected Hitler's ultimatum,[98] and the German air superiority campaign started in August but failed to defeat RAF Fighter Command, forcing the indefinite postponement of the proposed German invasion of Britain. The German strategic bombing offensive intensified with night attacks on London and other cities in the Blitz, but failed to significantly disrupt the British war effort[97] and largely ended in May 1941.[99]
84
+
85
+ Using newly captured French ports, the German Navy enjoyed success against an over-extended Royal Navy, using U-boats against British shipping in the Atlantic.[100] The British Home Fleet scored a significant victory on 27 May 1941 by sinking the German battleship Bismarck.[101]
86
+
87
+ In November 1939, the United States was taking measures to assist China and the Western Allies, and amended the Neutrality Act to allow "cash and carry" purchases by the Allies.[102] In 1940, following the German capture of Paris, the size of the United States Navy was significantly increased. In September the United States further agreed to a trade of American destroyers for British bases.[103] Still, a large majority of the American public continued to oppose any direct military intervention in the conflict well into 1941.[104] In December 1940 Roosevelt accused Hitler of planning world conquest and ruled out any negotiations as useless, calling for the United States to become an "arsenal of democracy" and promoting Lend-Lease programmes of aid to support the British war effort.[98] The United States started strategic planning to prepare for a full-scale offensive against Germany.[105]
88
+
89
+ At the end of September 1940, the Tripartite Pact formally united Japan, Italy, and Germany as the Axis powers. The Tripartite Pact stipulated that any country, with the exception of the Soviet Union, that attacked any Axis power would be forced to go to war against all three.[106] The Axis expanded in November 1940 when Hungary, Slovakia and Romania joined.[107] Romania and Hungary would make major contributions to the Axis war against the Soviet Union, in Romania's case partially to recapture territory ceded to the Soviet Union.[108]
90
+
91
+ In early June 1940 the Italian Regia Aeronautica attacked and besieged Malta, a British possession. From late summer to early autumn Italy conquered British Somaliland and made an incursion into British-held Egypt. In October Italy attacked Greece, but the attack was repulsed with heavy Italian casualties; the campaign ended within months with minor territorial changes.[109] Germany began preparations for an invasion of the Balkans to assist Italy, to prevent the British from gaining a foothold there, which would have been a potential threat to the Romanian oil fields, and to strike against British dominance of the Mediterranean.[110]
92
+
93
+ In December 1940, British Empire forces began counter-offensives against Italian forces in Egypt and Italian East Africa.[111] The offensives were highly successful; by early February 1941 Italy had lost control of eastern Libya, and large numbers of Italian troops had been taken prisoner. The Italian Navy also suffered significant defeats, with the Royal Navy putting three Italian battleships out of commission by a carrier attack at Taranto and neutralising several more warships at the Battle of Cape Matapan.[112]
94
+
95
+ Italian defeats prompted Germany to deploy an expeditionary force to North Africa, and at the end of March 1941 Rommel's Afrika Korps launched an offensive which drove back the Commonwealth forces.[113] In under a month, Axis forces advanced to western Egypt and besieged the port of Tobruk.[114]
96
+
97
+ By late March 1941 Bulgaria and Yugoslavia signed the Tripartite Pact; however, the Yugoslav government was overthrown two days later by pro-British nationalists. Germany responded with simultaneous invasions of both Yugoslavia and Greece, commencing on 6 April 1941; both nations were forced to surrender within the month.[115] The airborne invasion of the Greek island of Crete at the end of May completed the German conquest of the Balkans.[116] Although the Axis victory was swift, a bitter, large-scale partisan war subsequently broke out against the Axis occupation of Yugoslavia and continued until the end of the war.[117]
98
+
99
+ In the Middle East, in May Commonwealth forces quashed an uprising in Iraq which had been supported by German aircraft from bases within Vichy-controlled Syria.[118] Between June and July they invaded and occupied the French possessions Syria and Lebanon, with the assistance of the Free French.[119]
100
+
101
+ With the situation in Europe and Asia relatively stable, Germany, Japan, and the Soviet Union made preparations. With the Soviets wary of mounting tensions with Germany and the Japanese planning to take advantage of the European War by seizing resource-rich European possessions in Southeast Asia, the two powers signed the Soviet–Japanese Neutrality Pact in April 1941.[120] By contrast, the Germans were steadily making preparations for an attack on the Soviet Union, massing forces on the Soviet border.[121]
102
+
103
+ Hitler believed that the United Kingdom's refusal to end the war was based on the hope that the United States and the Soviet Union would enter the war against Germany sooner or later.[122] He, therefore, decided to try to strengthen Germany's relations with the Soviets, or failing that to attack and eliminate them as a factor. In November 1940, negotiations took place to determine if the Soviet Union would join the Tripartite Pact. The Soviets showed some interest but asked for concessions from Finland, Bulgaria, Turkey, and Japan that Germany considered unacceptable. On 18 December 1940, Hitler issued the directive to prepare for an invasion of the Soviet Union.[123]
104
+
105
+ On 22 June 1941, Germany, supported by Italy and Romania, invaded the Soviet Union in Operation Barbarossa, with Germany accusing the Soviets of plotting against them. They were joined shortly by Finland and Hungary.[124] The primary targets of this surprise offensive[125] were the Baltic region, Moscow and Ukraine, with the ultimate goal of ending the 1941 campaign near the Arkhangelsk-Astrakhan line, from the Caspian to the White Seas. Hitler's objectives were to eliminate the Soviet Union as a military power, exterminate Communism, generate Lebensraum ("living space")[126] by dispossessing the native population[127] and guarantee access to the strategic resources needed to defeat Germany's remaining rivals.[128]
106
+
107
+ Although the Red Army was preparing for strategic counter-offensives before the war,[129] Barbarossa forced the Soviet supreme command to adopt a strategic defence. During the summer, the Axis made significant gains into Soviet territory, inflicting immense losses in both personnel and materiel. By mid-August, however, the German Army High Command decided to suspend the offensive of a considerably depleted Army Group Centre, and to divert the 2nd Panzer Group to reinforce troops advancing towards central Ukraine and Leningrad.[130] The Kiev offensive was overwhelmingly successful, resulting in encirclement and elimination of four Soviet armies, and made possible further advance into Crimea and industrially developed Eastern Ukraine (the First Battle of Kharkov).[131]
108
+
109
+ The diversion of three quarters of the Axis troops and the majority of their air forces from France and the central Mediterranean to the Eastern Front[132] prompted the United Kingdom to reconsider its grand strategy.[133] In July, the UK and the Soviet Union formed a military alliance against Germany[134] and in August, the United Kingdom and the United States jointly issued the Atlantic Charter, which outlined British and American goals for the postwar world.[135] In late August the British and Soviets invaded neutral Iran to secure the Persian Corridor and Iran's oil fields, and to pre-empt any Axis advances through Iran toward the Baku oil fields or British India.[136]
110
+
111
+ By October Axis operational objectives in Ukraine and the Baltic region were achieved, with only the sieges of Leningrad[137] and Sevastopol continuing.[138] A major offensive against Moscow was renewed; after two months of fierce battles in increasingly harsh weather, the German army almost reached the outer suburbs of Moscow, where the exhausted troops[139] were forced to suspend their offensive.[140] Large territorial gains were made by Axis forces, but their campaign had failed to achieve its main objectives: two key cities remained in Soviet hands, the Soviet capability to resist was not broken, and the Soviet Union retained a considerable part of its military potential. The blitzkrieg phase of the war in Europe had ended.[141]
112
+
113
+ By early December, freshly mobilised reserves[142] allowed the Soviets to achieve numerical parity with Axis troops.[143] This, as well as intelligence data which established that a minimal number of Soviet troops in the East would be sufficient to deter any attack by the Japanese Kwantung Army,[144] allowed the Soviets to begin a massive counter-offensive that started on 5 December all along the front and pushed German troops 100–250 kilometres (62–155 mi) west.[145]
114
+
115
+ Following the Japanese false flag Mukden Incident in 1931, the Japanese shelling of the American gunboat USS Panay in 1937, and the 1937–38 Nanjing Massacre, Japanese-American relations deteriorated. In 1939, the United States notified Japan that it would not be extending its trade treaty, and American public opinion opposing Japanese expansionism led to a series of economic sanctions, the Export Control Acts, which banned U.S. exports of chemicals, minerals and military parts to Japan and increased economic pressure on the Japanese regime.[98][146][147] During 1939 Japan launched its first attack against Changsha, a strategically important Chinese city, but was repulsed by late September.[148] Despite several offensives by both sides, the war between China and Japan was stalemated by 1940. To increase pressure on China by blocking supply routes, and to better position Japanese forces in the event of a war with the Western powers, Japan invaded and occupied northern Indochina in September 1940.[149]
116
+
117
+ Chinese nationalist forces launched a large-scale counter-offensive in early 1940. In August, Chinese communists launched an offensive in Central China; in retaliation, Japan instituted harsh measures in occupied areas to reduce human and material resources for the communists.[150] Continued antipathy between Chinese communist and nationalist forces culminated in armed clashes in January 1941, effectively ending their co-operation.[151] In March, the Japanese 11th Army attacked the headquarters of the Chinese 19th Army but was repulsed during the Battle of Shanggao.[152] In September, Japan attempted to take the city of Changsha again and clashed with Chinese nationalist forces.[153]
118
+
119
+ German successes in Europe encouraged Japan to increase pressure on European governments in Southeast Asia. The Dutch government agreed to provide Japan some oil supplies from the Dutch East Indies, but negotiations for additional access to their resources ended in failure in June 1941.[154] In July 1941 Japan sent troops to southern Indochina, thus threatening British and Dutch possessions in the Far East. The United States, United Kingdom, and other Western governments reacted to this move with a freeze on Japanese assets and a total oil embargo.[155][156] At the same time, Japan was planning an invasion of the Soviet Far East, intending to capitalise on the German invasion in the west, but abandoned the operation after the sanctions.[157]
120
+
121
+ Since early 1941 the United States and Japan had been engaged in negotiations in an attempt to improve their strained relations and end the war in China. During these negotiations, Japan advanced a number of proposals which were dismissed by the Americans as inadequate.[158] At the same time the United States, the United Kingdom, and the Netherlands engaged in secret discussions for the joint defence of their territories, in the event of a Japanese attack against any of them.[159] Roosevelt reinforced the Philippines (an American protectorate scheduled for independence in 1946) and warned Japan that the United States would react to Japanese attacks against any "neighboring countries".[159]
122
+
123
+ Frustrated at the lack of progress and feeling the pinch of the American–British–Dutch sanctions, Japan prepared for war. On 20 November, a new government under Hideki Tojo presented an interim proposal as its final offer. It called for the end of American aid to China and for lifting the embargo on the supply of oil and other resources to Japan. In exchange, Japan promised not to launch any attacks in Southeast Asia and to withdraw its forces from southern Indochina.[158] The American counter-proposal of 26 November required that Japan evacuate all of China without conditions and conclude non-aggression pacts with all Pacific powers.[160] That meant Japan was essentially forced to choose between abandoning its ambitions in China, or seizing the natural resources it needed in the Dutch East Indies by force;[161][162] the Japanese military did not consider the former an option, and many officers considered the oil embargo an unspoken declaration of war.[163]
124
+
125
+ Japan planned to rapidly seize European colonies in Asia to create a large defensive perimeter stretching into the Central Pacific. The Japanese would then be free to exploit the resources of Southeast Asia while exhausting the over-stretched Allies by fighting a defensive war.[164][165] To prevent American intervention while securing the perimeter, it was further planned to neutralise the United States Pacific Fleet and the American military presence in the Philippines from the outset.[166] On 7 December 1941 (8 December in Asian time zones), Japan attacked British and American holdings with near-simultaneous offensives against Southeast Asia and the Central Pacific.[167] These included an attack on the American fleets at Pearl Harbor and the Philippines, landings in Malaya and Thailand,[167] and the Battle of Hong Kong.[168]
126
+
127
+ The Japanese invasion of Thailand led to Thailand's decision to ally itself with Japan and the other Japanese attacks led the United States, United Kingdom, China, Australia, and several other states to formally declare war on Japan, whereas the Soviet Union, being heavily involved in large-scale hostilities with European Axis countries, maintained its neutrality agreement with Japan.[169] Germany, followed by the other Axis states, declared war on the United States[170] in solidarity with Japan, citing as justification the American attacks on German war vessels that had been ordered by Roosevelt.[124][171]
128
+
129
+ On 1 January 1942, the Allied Big Four[172]—the Soviet Union, China, the United Kingdom and the United States—and 22 smaller or exiled governments issued the Declaration by United Nations, thereby affirming the Atlantic Charter,[173] and agreeing not to sign a separate peace with the Axis powers.[174]
130
+
131
+ During 1942, Allied officials debated the appropriate grand strategy to pursue. All agreed that defeating Germany was the primary objective. The Americans favoured a straightforward, large-scale attack on Germany through France. The Soviets were also demanding a second front. The British, on the other hand, argued that military operations should target peripheral areas to wear out German strength, leading to increasing demoralisation, and bolster resistance forces. Germany itself would be subject to a heavy bombing campaign. An offensive against Germany would then be launched primarily by Allied armour without using large-scale armies.[175] Eventually, the British persuaded the Americans that a landing in France was infeasible in 1942 and they should instead focus on driving the Axis out of North Africa.[176]
132
+
133
+ At the Casablanca Conference in early 1943, the Allies reiterated the statements issued in the 1942 Declaration, and demanded the unconditional surrender of their enemies. The British and Americans agreed to continue to press the initiative in the Mediterranean by invading Sicily to fully secure the Mediterranean supply routes.[177] Although the British argued for further operations in the Balkans to bring Turkey into the war, in May 1943, the Americans extracted a British commitment to limit Allied operations in the Mediterranean to an invasion of the Italian mainland and to invade France in 1944.[178]
134
+
135
+ By the end of April 1942, Japan and its ally Thailand had almost fully conquered Burma, Malaya, the Dutch East Indies, Singapore, and Rabaul, inflicting severe losses on Allied troops and taking a large number of prisoners.[179] Despite stubborn resistance by Filipino and US forces, the Philippine Commonwealth was eventually captured in May 1942, forcing its government into exile.[180] On 16 April, in Burma, 7,000 British soldiers were encircled by the Japanese 33rd Division during the Battle of Yenangyaung and rescued by the Chinese 38th Division.[181] Japanese forces also achieved naval victories in the South China Sea, Java Sea and Indian Ocean,[182] and bombed the Allied naval base at Darwin, Australia. In January 1942, the only Allied success against Japan was a Chinese victory at Changsha.[183] These easy victories over unprepared US and European opponents left Japan overconfident, as well as overextended.[184]
136
+
137
+ In early May 1942, Japan initiated operations to capture Port Moresby by amphibious assault and thus sever communications and supply lines between the United States and Australia. The planned invasion was thwarted when an Allied task force, centred on two American fleet carriers, fought Japanese naval forces to a draw in the Battle of the Coral Sea.[185] Japan's next plan, motivated by the earlier Doolittle Raid, was to seize Midway Atoll and lure American carriers into battle to be eliminated; as a diversion, Japan would also send forces to occupy the Aleutian Islands in Alaska.[186] In mid-May, Japan started the Zhejiang-Jiangxi Campaign in China, with the goal of inflicting retribution on the Chinese who aided the surviving American airmen in the Doolittle Raid by destroying air bases and fighting against the Chinese 23rd and 32nd Army Groups.[187][188] In early June, Japan put its operations into action, but the Americans, having broken Japanese naval codes in late May, were fully aware of the plans and order of battle, and used this knowledge to achieve a decisive victory at Midway over the Imperial Japanese Navy.[189]
138
+
139
+ With its capacity for aggressive action greatly diminished as a result of the Midway battle, Japan chose to focus on a belated attempt to capture Port Moresby by an overland campaign in the Territory of Papua.[190] The Americans planned a counter-attack against Japanese positions in the southern Solomon Islands, primarily Guadalcanal, as a first step towards capturing Rabaul, the main Japanese base in Southeast Asia.[191]
140
+
141
+ Both plans started in July, but by mid-September, the Battle for Guadalcanal took priority for the Japanese, and troops in New Guinea were ordered to withdraw from the Port Moresby area to the northern part of the island, where they faced Australian and United States troops in the Battle of Buna-Gona.[192] Guadalcanal soon became a focal point for both sides, with heavy commitments of troops and ships. By the start of 1943, the Japanese were defeated on the island and withdrew their troops.[193] In Burma, Commonwealth forces mounted two operations. The first, an offensive into the Arakan region in late 1942, went disastrously, forcing a retreat back to India by May 1943.[194] The second was the insertion of irregular forces behind Japanese front-lines in February which, by the end of April, had achieved mixed results.[195]
142
+
143
+ Despite considerable losses, in early 1942 Germany and its allies stopped a major Soviet offensive in central and southern Russia, keeping most territorial gains they had achieved during the previous year.[196] In May the Germans defeated Soviet offensives in the Kerch Peninsula and at Kharkov,[197] and then launched their main summer offensive against southern Russia in June 1942, to seize the oil fields of the Caucasus and occupy the Kuban steppe, while maintaining positions on the northern and central areas of the front. The Germans split Army Group South into two groups: Army Group A advanced to the lower Don River and struck south-east to the Caucasus, while Army Group B headed towards the Volga River. The Soviets decided to make their stand at Stalingrad on the Volga.[198]
144
+
145
+ By mid-November, the Germans had nearly taken Stalingrad in bitter street fighting. The Soviets began their second winter counter-offensive, starting with an encirclement of German forces at Stalingrad,[199] and an assault on the Rzhev salient near Moscow, though the latter failed disastrously.[200] By early February 1943, the German Army had taken tremendous losses; German troops at Stalingrad had been defeated,[201] and the front-line had been pushed back beyond its position before the summer offensive. In mid-February, after the Soviet push had tapered off, the Germans launched another attack on Kharkov, creating a salient in their front line around the Soviet city of Kursk.[202]
146
+
147
+ Exploiting poor American naval command decisions, the German navy ravaged Allied shipping off the American Atlantic coast.[203] By November 1941, Commonwealth forces had launched a counter-offensive, Operation Crusader, in North Africa, and reclaimed all the gains the Germans and Italians had made.[204] In North Africa, the Germans launched an offensive in January, pushing the British back to positions at the Gazala Line by early February,[205] followed by a temporary lull in combat which Germany used to prepare for its upcoming offensives.[206] Concerns that the Japanese might use bases in Vichy-held Madagascar caused the British to invade the island in early May 1942.[207] An Axis offensive in Libya forced an Allied retreat deep inside Egypt until Axis forces were stopped at El Alamein.[208] On the Continent, raids of Allied commandos on strategic targets, culminating in the disastrous Dieppe Raid,[209] demonstrated the Western Allies' inability to launch an invasion of continental Europe without much better preparation, equipment, and operational security.[210]
148
+
149
+ In August 1942, the Allies succeeded in repelling a second attack against El Alamein[211] and, at a high cost, managed to deliver desperately needed supplies to the besieged Malta.[212] A few months later, the Allies commenced an attack of their own in Egypt, dislodging the Axis forces and beginning a drive west across Libya.[213] This attack was followed up shortly after by Anglo-American landings in French North Africa, which resulted in the region joining the Allies.[214] Hitler responded to the French colony's defection by ordering the occupation of Vichy France;[214] although Vichy forces did not resist this violation of the armistice, they managed to scuttle their fleet to prevent its capture by German forces.[214][215] The Axis forces in Africa withdrew into Tunisia, which was conquered by the Allies in May 1943.[214][216]
150
+
151
+ In June 1943 the British and Americans began a strategic bombing campaign against Germany with the goals of disrupting the war economy, reducing morale, and "de-housing" the civilian population.[217] The firebombing of Hamburg was among the first attacks in this campaign, inflicting significant casualties and considerable losses on the infrastructure of this important industrial centre.[218]
152
+
153
+ After the Guadalcanal Campaign, the Allies initiated several operations against Japan in the Pacific. In May 1943, Canadian and US forces were sent to eliminate Japanese forces from the Aleutians.[219] Soon after, the United States, with support from Australia, New Zealand and Pacific Islander forces, began major ground, sea and air operations to isolate Rabaul by capturing surrounding islands, and breach the Japanese Central Pacific perimeter at the Gilbert and Marshall Islands.[220] By the end of March 1944, the Allies had completed both of these objectives and had also neutralised the major Japanese base at Truk in the Caroline Islands. In April, the Allies launched an operation to retake Western New Guinea.[221]
154
+
155
+ In the Soviet Union, both the Germans and the Soviets spent the spring and early summer of 1943 preparing for large offensives in central Russia. On 4 July 1943, Germany attacked Soviet forces around the Kursk Bulge. Within a week, German forces had exhausted themselves against the Soviets' deeply echeloned and well-constructed defences,[222] and for the first time in the war Hitler cancelled the operation before it had achieved tactical or operational success.[223] This decision was partially affected by the Western Allies' invasion of Sicily launched on 9 July, which, combined with previous Italian failures, resulted in the ousting and arrest of Mussolini later that month.[224]
156
+
157
+ On 12 July 1943, the Soviets launched their own counter-offensives, thereby dispelling any chance of German victory or even stalemate in the east. The Soviet victory at Kursk marked the end of German superiority,[225] giving the Soviet Union the initiative on the Eastern Front.[226][227] The Germans tried to stabilise their eastern front along the hastily fortified Panther–Wotan line, but the Soviets broke through it at Smolensk and by the Lower Dnieper Offensives.[228]
158
+
159
+ On 3 September 1943, the Western Allies invaded the Italian mainland, following Italy's armistice with the Allies.[229] Germany, with the help of Italian fascists, responded by disarming Italian forces, which in many places had been left without orders from their superiors, seizing military control of Italian areas,[230] and creating a series of defensive lines.[231] German special forces then rescued Mussolini, who soon established a new client state in German-occupied Italy named the Italian Social Republic,[232] sparking an Italian civil war. The Western Allies fought through several lines until reaching the main German defensive line in mid-November.[233]
160
+
161
+ German operations in the Atlantic also suffered. By May 1943, as Allied counter-measures became increasingly effective, the resulting sizeable German submarine losses forced a temporary halt of the German Atlantic naval campaign.[234] In November 1943, Franklin D. Roosevelt and Winston Churchill met with Chiang Kai-shek in Cairo and then with Joseph Stalin in Tehran.[235] The former conference determined the post-war return of Japanese territory[236] and the military planning for the Burma Campaign,[237] while the latter included agreement that the Western Allies would invade Europe in 1944 and that the Soviet Union would declare war on Japan within three months of Germany's defeat.[238]
162
+
163
+ From November 1943, during the seven-week Battle of Changde, the Chinese forced Japan to fight a costly war of attrition, while awaiting Allied relief.[239][240][241] In January 1944, the Allies launched a series of attacks in Italy against the line at Monte Cassino and tried to outflank it with landings at Anzio.[242]
164
+
165
+ On 27 January 1944, Soviet troops launched a major offensive that expelled German forces from the Leningrad region, thereby ending the most lethal siege in history.[243] The following Soviet offensive was halted on the pre-war Estonian border by the German Army Group North, aided by Estonians hoping to re-establish national independence. This delay slowed subsequent Soviet operations in the Baltic Sea region.[244] By late May 1944, the Soviets had liberated Crimea, largely expelled Axis forces from Ukraine, and made incursions into Romania, which were repulsed by the Axis troops.[245] The Allied offensives in Italy had succeeded and, at the expense of allowing several German divisions to retreat, Rome was captured on 4 June.[246]
166
+
167
+ The Allies had mixed success in mainland Asia. In March 1944, the Japanese launched the first of two invasions, an operation against British positions in Assam, India,[247] and soon besieged Commonwealth positions at Imphal and Kohima.[248] In May 1944, British forces mounted a counter-offensive that drove Japanese troops back to Burma by July,[248] and Chinese forces that had invaded northern Burma in late 1943 besieged Japanese troops in Myitkyina.[249] The second Japanese invasion, of China, aimed to destroy China's main fighting forces, secure railways between Japanese-held territories, and capture Allied airfields.[250] By June, the Japanese had conquered the province of Henan and begun a new attack on Changsha in Hunan province.[251]
168
+
169
+ On 6 June 1944 (known as D-Day), after three years of Soviet pressure,[252] the Western Allies invaded northern France. After reassigning several Allied divisions from Italy, they also attacked southern France.[253] These landings were successful and led to the defeat of the German Army units in France. Paris was liberated on 25 August by the local resistance assisted by the Free French Forces, both led by General Charles de Gaulle,[254] and the Western Allies continued to push back German forces in western Europe during the latter part of the year. An attempt to advance into northern Germany, spearheaded by a major airborne operation in the Netherlands, failed.[255] After that, the Western Allies slowly pushed into Germany, but failed to cross the Rur river in a large offensive. In Italy, the Allied advance also slowed due to the last major German defensive line.[256]
170
+
171
+ On 22 June, the Soviets launched a strategic offensive in Belarus ("Operation Bagration") that destroyed the German Army Group Centre almost completely.[257] Soon after that, another Soviet strategic offensive forced German troops from Western Ukraine and Eastern Poland. The Soviets formed the Polish Committee of National Liberation to control territory in Poland and combat the Polish Armia Krajowa; the Soviet Red Army remained in the Praga district on the other side of the Vistula and watched passively as the Germans quelled the Warsaw Uprising initiated by the Armia Krajowa.[258] The national uprising in Slovakia was also quelled by the Germans.[259] The Soviet Red Army's strategic offensive in eastern Romania cut off and destroyed the considerable German troops there and triggered successful coups d'état in Romania and Bulgaria, followed by those countries' shift to the Allied side.[260]
172
+
173
+ In September 1944, Soviet troops advanced into Yugoslavia and forced the rapid withdrawal of German Army Groups E and F in Greece, Albania and Yugoslavia to rescue them from being cut off.[261] By this point, the Communist-led Partisans under Marshal Josip Broz Tito, who had led an increasingly successful guerrilla campaign against the occupation since 1941, controlled much of the territory of Yugoslavia and engaged in delaying efforts against German forces further south. In northern Serbia, the Soviet Red Army, with limited support from Bulgarian forces, assisted the Partisans in a joint liberation of the capital city of Belgrade on 20 October. A few days later, the Soviets launched a massive assault against German-occupied Hungary that lasted until the fall of Budapest in February 1945.[262] In contrast to the impressive Soviet victories in the Balkans, bitter Finnish resistance to the Soviet offensive in the Karelian Isthmus denied the Soviets occupation of Finland and led to a Soviet-Finnish armistice on relatively mild conditions,[263] although Finland was forced to fight its former ally Germany.[264]
174
+
175
+ By the start of July 1944, Commonwealth forces in Southeast Asia had repelled the Japanese sieges in Assam, pushing the Japanese back to the Chindwin River[265] while the Chinese captured Myitkyina. In September 1944, Chinese forces captured Mount Song and reopened the Burma Road.[266] In China, the Japanese had more successes, having finally captured Changsha in mid-June and the city of Hengyang by early August.[267] Soon after, they invaded the province of Guangxi, winning major engagements against Chinese forces at Guilin and Liuzhou by the end of November[268] and successfully linking up their forces in China and Indochina by mid-December.[269]
176
+
177
+ In the Pacific, US forces continued to press back the Japanese perimeter. In mid-June 1944, they began their offensive against the Mariana and Palau islands, and decisively defeated Japanese forces in the Battle of the Philippine Sea. These defeats led to the resignation of the Japanese Prime Minister, Hideki Tojo, and provided the United States with air bases from which to launch intensive heavy bomber attacks on the Japanese home islands. In late October, American forces invaded the Philippine island of Leyte; soon after, Allied naval forces scored another large victory in the Battle of Leyte Gulf, one of the largest naval battles in history.[270]
178
+
179
+ On 16 December 1944, Germany made a last attempt on the Western Front, using most of its remaining reserves to launch a massive counter-offensive in the Ardennes and along the French-German border, aiming to split the Western Allies, encircle large portions of Western Allied troops and capture their primary supply port at Antwerp to prompt a political settlement.[271] By January, the offensive had been repulsed with no strategic objectives fulfilled.[271] In Italy, the Western Allies remained stalemated at the German defensive line. In mid-January 1945, the Soviets and Poles attacked in Poland, pushing from the Vistula to the Oder river in Germany, and overran East Prussia.[272] On 4 February Soviet, British, and US leaders met for the Yalta Conference. They agreed on the occupation of post-war Germany, and on when the Soviet Union would join the war against Japan.[273]
180
+
181
+ In February, the Soviets entered Silesia and Pomerania, while the Western Allies entered western Germany and closed to the Rhine river. By March, the Western Allies had crossed the Rhine north and south of the Ruhr, encircling the German Army Group B.[274] In early March, in an attempt to protect its last oil reserves in Hungary and to retake Budapest, Germany launched its last major offensive against Soviet troops near Lake Balaton. Within two weeks the offensive had been repulsed, and the Soviets advanced to Vienna and captured the city. In early April, Soviet troops captured Königsberg, while the Western Allies finally pushed forward in Italy and swept across western Germany, capturing Hamburg and Nuremberg. American and Soviet forces met at the Elbe river on 25 April, leaving several unoccupied pockets in southern Germany and around Berlin.
182
+
183
+ Soviet and Polish forces stormed and captured Berlin in late April. In Italy, German forces surrendered on 29 April. On 30 April, the Reichstag was captured, signalling the military defeat of Nazi Germany;[275] the Berlin garrison surrendered on 2 May.
184
+
185
+ Several changes in leadership occurred during this period. On 12 April, President Roosevelt died and was succeeded by Harry S. Truman. Benito Mussolini was killed by Italian partisans on 28 April.[276] Two days later, Hitler committed suicide in besieged Berlin, and he was succeeded by Grand Admiral Karl Dönitz.[277]
186
+ Germany's total and unconditional surrender in Europe was signed on 7 and 8 May, to be effective by the end of 8 May.[278] German Army Group Centre resisted in Prague until 11 May.[279]
187
+
188
+ In the Pacific theatre, American forces accompanied by the forces of the Philippine Commonwealth advanced in the Philippines, clearing Leyte by the end of April 1945. They landed on Luzon in January 1945 and recaptured Manila in March. Fighting continued on Luzon, Mindanao, and other islands of the Philippines until the end of the war.[280] Meanwhile, the United States Army Air Forces launched a massive firebombing campaign of strategic cities in Japan in an effort to destroy Japanese war industry and civilian morale. A devastating bombing raid on Tokyo on 9–10 March was the deadliest conventional bombing raid in history.[281]
189
+
190
+ In May 1945, Australian troops landed in Borneo, overrunning the oilfields there. British, American, and Chinese forces defeated the Japanese in northern Burma in March, and the British pushed on to reach Rangoon by 3 May.[282] Chinese forces started a counterattack in the Battle of West Hunan that occurred between 6 April and 7 June 1945. American naval and amphibious forces also moved towards Japan, taking Iwo Jima by March, and Okinawa by the end of June.[283] At the same time, American submarines cut off Japanese imports, drastically reducing Japan's ability to supply its overseas forces.[284]
191
+
192
+ On 11 July, Allied leaders met in Potsdam, Germany. They confirmed earlier agreements about Germany,[285] and the American, British and Chinese governments reiterated the demand for unconditional surrender of Japan, specifically stating that "the alternative for Japan is prompt and utter destruction".[286] During this conference, the United Kingdom held its general election, and Clement Attlee replaced Churchill as Prime Minister.[287]
193
+
194
+ The call for unconditional surrender was rejected by the Japanese government, which believed it would be capable of negotiating for more favourable surrender terms.[288] In early August, the United States dropped atomic bombs on the Japanese cities of Hiroshima and Nagasaki. Between the two bombings, the Soviets, pursuant to the Yalta agreement, invaded Japanese-held Manchuria and quickly defeated the Kwantung Army, which was the largest Japanese fighting force.[289] These two events persuaded previously adamant Imperial Army leaders to accept surrender terms.[290] The Red Army also captured the southern part of Sakhalin Island and the Kuril Islands. On 15 August 1945, Japan surrendered, with the surrender documents finally signed at Tokyo Bay on the deck of the American battleship USS Missouri on 2 September 1945, ending the war.[291]
195
+
196
+ The Allies established occupation administrations in Austria and Germany. The former became a neutral state, non-aligned with any political bloc. The latter was divided into western and eastern occupation zones controlled by the Western Allies and the Soviet Union. A denazification programme in Germany led to the prosecution of Nazi war criminals in the Nuremberg trials and the removal of ex-Nazis from power, although this policy later moved towards amnesty and re-integration of ex-Nazis into West German society.[292]
197
+
198
+ Germany lost a quarter of its pre-war (1937) territory. Among the eastern territories, Silesia, Neumark and most of Pomerania were taken over by Poland,[293] and East Prussia was divided between Poland and the Soviet Union, followed by the expulsion to Germany of the nine million Germans from these provinces,[294][295] as well as three million Germans from the Sudetenland in Czechoslovakia. By the 1950s, one-fifth of West Germans were refugees from the east. The Soviet Union also took over the Polish provinces east of the Curzon line,[296] from which 2 million Poles were expelled;[295][297] north-east Romania,[298][299] parts of eastern Finland,[300] and the three Baltic states were incorporated into the Soviet Union.[301][302]
199
+
200
+ In an effort to maintain world peace,[303] the Allies formed the United Nations, which officially came into existence on 24 October 1945,[304] and adopted the Universal Declaration of Human Rights in 1948 as a common standard for all member nations.[305] The great powers that were the victors of the war—France, China, the United Kingdom, the Soviet Union and the United States—became the permanent members of the UN's Security Council.[306] The five permanent members remain so to the present, although there have been two seat changes, between the Republic of China and the People's Republic of China in 1971, and between the Soviet Union and its successor state, the Russian Federation, following the dissolution of the Soviet Union in 1991. The alliance between the Western Allies and the Soviet Union had begun to deteriorate even before the war was over.[307]
201
+
202
+ Germany had been de facto divided, and two independent states, the Federal Republic of Germany (West Germany) and the German Democratic Republic (East Germany),[308] were created within the borders of the Allied and Soviet occupation zones. The rest of Europe was also divided into Western and Soviet spheres of influence.[309] Most eastern and central European countries fell into the Soviet sphere, which led to the establishment of Communist-led regimes, with full or partial support of the Soviet occupation authorities. As a result, East Germany,[310] Poland, Hungary, Romania, Czechoslovakia, and Albania[311] became Soviet satellite states. Communist Yugoslavia conducted a fully independent policy, causing tension with the Soviet Union.[312]
203
+
204
+ The post-war division of the world was formalised by two international military alliances, the United States-led NATO and the Soviet-led Warsaw Pact.[313] The long period of political tension and military competition between them, the Cold War, would be accompanied by an unprecedented arms race and proxy wars.[314]
205
+
206
+ In Asia, the United States led the occupation of Japan and administered Japan's former islands in the Western Pacific, while the Soviets annexed South Sakhalin and the Kuril Islands.[315] Korea, formerly under Japanese rule, was divided and occupied by the Soviet Union in the North and the United States in the South between 1945 and 1948. Separate republics emerged on both sides of the 38th parallel in 1948, each claiming to be the legitimate government for all of Korea, which ultimately led to the Korean War.[316]
207
+
208
+ In China, nationalist and communist forces resumed the civil war in June 1946. Communist forces were victorious and established the People's Republic of China on the mainland, while nationalist forces retreated to Taiwan in 1949.[317] In the Middle East, the Arab rejection of the United Nations Partition Plan for Palestine and the creation of Israel marked the escalation of the Arab–Israeli conflict. While European powers attempted to retain some or all of their colonial empires, their losses of prestige and resources during the war rendered this unsuccessful, leading to decolonisation.[318][319]
209
+
210
+ The global economy suffered heavily from the war, although participating nations were affected differently. The United States emerged much richer than any other nation, leading to a baby boom, and by 1950 its gross domestic product per person was much higher than that of any of the other powers, and it dominated the world economy.[320] The UK and US pursued a policy of industrial disarmament in Western Germany in the years 1945–1948.[321] Because of international trade interdependencies, this led to European economic stagnation and delayed European recovery for several years.[322][323]
211
+
212
+ Recovery began with the mid-1948 currency reform in Western Germany, and was sped up by the liberalisation of European economic policy that the Marshall Plan (1948–1951) both directly and indirectly caused.[324][325] The post-1948 West German recovery has been called the German economic miracle.[326] Italy also experienced an economic boom[327] and the French economy rebounded.[328] By contrast, the United Kingdom was in a state of economic ruin,[329] and although it received a quarter of the total Marshall Plan assistance, more than any other European country,[330] it continued in relative economic decline for decades.[331]
213
+
214
+ The Soviet Union, despite enormous human and material losses, also experienced a rapid increase in production in the immediate post-war era.[332] Japan recovered much later.[333] China returned to its pre-war industrial production by 1952.[334]
215
+
216
+ Estimates for the total number of casualties in the war vary, because many deaths went unrecorded.[335] Most suggest that some 60 million people died in the war, including about 20 million military personnel and 40 million civilians.[336][337][338]
217
+ Many of the civilians died because of deliberate genocide, massacres, mass bombings, disease, and starvation.
218
+
219
+ The Soviet Union alone lost around 27 million people during the war,[339] including 8.7 million military and 19 million civilian deaths.[340] A quarter of the people in the Soviet Union were wounded or killed.[341] Germany sustained 5.3 million military losses, mostly on the Eastern Front and during the final battles in Germany.[342]
220
+
221
+ An estimated 11[343] to 17 million[344] civilians died as a direct or as an indirect result of Nazi racist policies, including the mass killing of around 6 million Jews, along with Roma, homosexuals, at least 1.9 million ethnic Poles[345][346] and millions of other Slavs (including Russians, Ukrainians and Belarusians), and other ethnic and minority groups.[347][344] Between 1941 and 1945, more than 200,000 ethnic Serbs, along with Roma and Jews, were persecuted and murdered by the Axis-aligned Croatian Ustaše in Yugoslavia.[348] Also, more than 100,000 Poles were massacred by the Ukrainian Insurgent Army in the Volhynia massacres between 1943 and 1945.[349] At the same time, about 10,000–15,000 Ukrainians were killed by the Polish Home Army and other Polish units in reprisal attacks.[350]
222
+
223
+ In Asia and the Pacific, between 3 million and more than 10 million civilians, mostly Chinese (estimated at 7.5 million[351]), were killed by the Japanese occupation forces.[352] The most infamous Japanese atrocity was the Nanking Massacre, in which fifty to three hundred thousand Chinese civilians were raped and murdered.[353] Mitsuyoshi Himeta reported that 2.7 million casualties occurred during the Sankō Sakusen. General Yasuji Okamura implemented the policy in Hebei and Shandong.[354]
224
+
225
+ Axis forces employed biological and chemical weapons. The Imperial Japanese Army used a variety of such weapons during its invasion and occupation of China (see Unit 731)[355][356] and in early conflicts against the Soviets.[357] Both the Germans and the Japanese tested such weapons against civilians,[358] and sometimes on prisoners of war.[359]
226
+
227
+ The Soviet Union was responsible for the Katyn massacre of 22,000 Polish officers,[360] and the imprisonment or execution of thousands of political prisoners by the NKVD, along with mass civilian deportations to Siberia, in the Baltic states and eastern Poland annexed by the Red Army.[361]
228
+
229
+ The mass bombing of cities in Europe and Asia has often been called a war crime, although no positive or specific customary international humanitarian law with respect to aerial warfare existed before or during World War II.[362] The USAAF firebombed a total of 67 Japanese cities, killing 393,000 civilians and destroying 65% of built-up areas.[363]
230
+
231
+ Nazi Germany was responsible for the Holocaust (which killed approximately 6 million Jews) as well as for killing 2.7 million ethnic Poles[364] and 4 million others who were deemed "unworthy of life" (including the disabled and mentally ill, Soviet prisoners of war, Romani, homosexuals, Freemasons, and Jehovah's Witnesses) as part of a programme of deliberate extermination, in effect becoming a "genocidal state".[365] Soviet POWs were kept in especially unbearable conditions, and 3.6 million out of 5.7 million Soviet POWs died in Nazi camps during the war.[366][367] In addition to concentration camps, death camps were created in Nazi Germany to exterminate people on an industrial scale. Nazi Germany extensively used forced labourers; about 12 million Europeans from German-occupied countries were abducted and used as a slave work force in German industry, agriculture and the war economy.[368]
232
+
233
+ The Soviet Gulag became a de facto system of deadly camps during 1942–43, when wartime privation and hunger caused numerous deaths of inmates,[369] including foreign citizens of Poland and other countries occupied in 1939–40 by the Soviet Union, as well as Axis POWs.[370] By the end of the war, most Soviet POWs liberated from Nazi camps and many repatriated civilians were detained in special filtration camps where they were subjected to NKVD evaluation, and 226,127 were sent to the Gulag as real or perceived Nazi collaborators.[371]
234
+
235
+ Japanese prisoner-of-war camps, many of which were used as labour camps, also had high death rates. The International Military Tribunal for the Far East found the death rate of Western prisoners was 27 per cent (for American POWs, 37 per cent),[372] seven times that of POWs under the Germans and Italians.[373] While 37,583 prisoners from the UK, 28,500 from the Netherlands, and 14,473 from the United States were released after the surrender of Japan, the number of Chinese released was only 56.[374]
236
+
237
+ At least five million Chinese civilians from northern China and Manchukuo were enslaved between 1935 and 1941 by the East Asia Development Board, or Kōain, for work in mines and war industries. After 1942, the number reached 10 million.[375] In Java, between 4 and 10 million rōmusha (Japanese: "manual labourers"), were forced to work by the Japanese military. About 270,000 of these Javanese labourers were sent to other Japanese-held areas in South East Asia, and only 52,000 were repatriated to Java.[376]
238
+
239
+ In Europe, occupation took two forms. In Western, Northern, and Central Europe (France, Norway, Denmark, the Low Countries, and the annexed portions of Czechoslovakia) Germany established economic policies through which it collected roughly 69.5 billion Reichsmarks (27.8 billion US dollars) by the end of the war; this figure does not include the sizeable plunder of industrial products, military equipment, raw materials and other goods.[377] The income from occupied nations amounted to over 40 per cent of the income Germany collected from taxation, a proportion which rose to nearly 40 per cent of total German income as the war went on.[378]
240
+
241
+ In the East, the intended gains of Lebensraum were never attained as fluctuating front-lines and Soviet scorched earth policies denied resources to the German invaders.[379] Unlike in the West, the Nazi racial policy encouraged extreme brutality against what it considered to be the "inferior people" of Slavic descent; most German advances were thus followed by mass executions.[380] Although resistance groups formed in most occupied territories, they did not significantly hamper German operations in either the East[381] or the West[382] until late 1943.
242
+
243
+ In Asia, Japan termed the nations under its occupation part of the Greater East Asia Co-Prosperity Sphere, essentially a Japanese hegemony which it claimed was for purposes of liberating colonised peoples.[383] Although Japanese forces were sometimes welcomed as liberators from European domination, Japanese war crimes frequently turned local public opinion against them.[384] During Japan's initial conquest it captured 4,000,000 barrels (640,000 m³) of oil (~5.5×10^5 tonnes) left behind by retreating Allied forces, and by 1943 was able to get production in the Dutch East Indies up to 50 million barrels (~6.8×10^6 tonnes), 76 per cent of its 1940 output rate.[384]
244
+
245
+ In Europe, before the outbreak of the war, the Allies had significant advantages in both population and economics. In 1938, the Western Allies (United Kingdom, France, Poland and the British Dominions) had a 30 per cent larger population and a 30 per cent higher gross domestic product than the European Axis powers (Germany and Italy); if colonies are included, the Allies had more than a 5:1 advantage in population and a nearly 2:1 advantage in GDP.[385] In Asia at the same time, China had roughly six times the population of Japan but only an 89 per cent higher GDP; this is reduced to three times the population and only a 38 per cent higher GDP if Japanese colonies are included.[385]
246
+
247
+ The United States produced about two-thirds of all the munitions used by the Allies in WWII, including warships, transports, warplanes, artillery, tanks, trucks, and ammunition.[386]
248
+ Though the Allies' economic and population advantages were largely mitigated during the initial rapid blitzkrieg attacks of Germany and Japan, they became the decisive factor by 1942, after the United States and Soviet Union joined the Allies, as the war largely settled into one of attrition.[387] While the Allies' ability to out-produce the Axis is often attributed to the Allies having more access to natural resources, other factors, such as Germany and Japan's reluctance to employ women in the labour force,[388] Allied strategic bombing,[389] and Germany's late shift to a war economy[390] contributed significantly. Additionally, neither Germany nor Japan planned to fight a protracted war, and had not equipped themselves to do so.[391] To improve their production, Germany and Japan used millions of slave labourers;[392] Germany used about 12 million people, mostly from Eastern Europe,[368] while Japan used more than 18 million people in Far East Asia.[375][376]
249
+
250
+ Aircraft were used for reconnaissance, as fighters, bombers, and ground-support, and each role was advanced considerably. Innovations included airlift (the capability to quickly move limited high-priority supplies, equipment, and personnel)[393] and strategic bombing (the bombing of enemy industrial and population centres to destroy the enemy's ability to wage war).[394] Anti-aircraft weaponry also advanced, including defences such as radar and surface-to-air artillery. The use of the jet aircraft was pioneered and, though its late introduction meant it had little impact, it led to jets becoming standard in air forces worldwide.[395] Although guided missiles were being developed, they were not advanced enough to reliably target aircraft until some years after the war.
251
+
252
+ Advances were made in nearly every aspect of naval warfare, most notably with aircraft carriers and submarines. Although aeronautical warfare had relatively little success at the start of the war, actions at Taranto, Pearl Harbor, and the Coral Sea established the carrier as the dominant capital ship in place of the battleship.[396][397][398] In the Atlantic, escort carriers proved to be a vital part of Allied convoys, increasing the effective protection radius and helping to close the Mid-Atlantic gap.[399] Carriers were also more economical than battleships because of the relatively low cost of aircraft[400] and because they did not need to be as heavily armoured.[401] Submarines, which had proved to be an effective weapon during the First World War,[402] were anticipated by all sides to be important in the second. The British focused development on anti-submarine weaponry and tactics, such as sonar and convoys, while Germany focused on improving its offensive capability, with designs such as the Type VII submarine and wolfpack tactics.[403] Gradually, improving Allied technologies such as the Leigh light, hedgehog, squid, and homing torpedoes proved victorious over the German submarines.
253
+
254
+ Land warfare changed from the static front lines of trench warfare of World War I, which had relied on improved artillery that outmatched the speed of both infantry and cavalry, to increased mobility and combined arms. The tank, which had been used predominantly for infantry support in the First World War, had evolved into the primary weapon.[404] In the late 1930s, tank design was considerably more advanced than it had been during World War I,[405] and advances continued throughout the war with increases in speed, armour and firepower. At the start of the war, most commanders thought enemy tanks should be met by tanks with superior specifications.[406] This idea was challenged by the poor performance of the relatively light early tank guns against armour, and by German doctrine of avoiding tank-versus-tank combat. This, along with Germany's use of combined arms, was among the key elements of their highly successful blitzkrieg tactics across Poland and France.[404] Many means of destroying tanks, including indirect artillery, anti-tank guns (both towed and self-propelled), mines, short-ranged infantry antitank weapons, and other tanks were used.[406] Even with large-scale mechanisation, infantry remained the backbone of all forces,[407] and throughout the war, most infantry were equipped similarly to World War I.[408] Portable machine guns spread, a notable example being the German MG34, as did various submachine guns, which were suited to close combat in urban and jungle settings.[408] The assault rifle, a late-war development incorporating many features of the rifle and submachine gun, became the standard postwar infantry weapon for most armed forces.[409]
255
+
256
+ Most major belligerents attempted to solve the problems of complexity and security involved in using large codebooks for cryptography by designing ciphering machines, the best known being the German Enigma machine.[410] The development of SIGINT (signals intelligence) and cryptanalysis enabled the countermeasure of decryption. Notable examples were the Allied decryption of Japanese naval codes[411] and British Ultra, a pioneering method for decoding Enigma benefiting from information given to the United Kingdom by the Polish Cipher Bureau, which had been decoding early versions of Enigma before the war.[412] Another aspect of military intelligence was the use of deception, which the Allies used to great effect, such as in operations Mincemeat and Bodyguard.[411][413]
257
+
258
+ Other technological and engineering feats achieved during, or as a result of, the war include the world's first programmable computers (Z3, Colossus, and ENIAC), guided missiles and modern rockets, the Manhattan Project's development of nuclear weapons, operations research and the development of artificial harbours and oil pipelines under the English Channel. Penicillin was first mass-produced and used during the war (see Stabilization and mass production of penicillin).[414]
259
+
en/5327.html.txt ADDED
@@ -0,0 +1,94 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+
4
+
5
+ The Punic Wars were a series of three wars fought between Rome and Carthage from 264 BC to 146 BC. The main cause of the Punic Wars was the conflict of interests between the existing Carthaginian Empire and the expanding Roman Republic. The Romans were initially interested in expansion via Sicily (which at that time was a cultural melting pot), part of which lay under Carthaginian control. At the start of the First Punic War (264–241 BC), Carthage was the dominant power of the Western Mediterranean, with an extensive maritime empire. Rome was a rapidly ascending power in Italy, but it lacked the naval power of Carthage. The Second Punic War (218–201 BC) witnessed Hannibal's crossing of the Alps in 218 BC, followed by a prolonged but ultimately failed campaign by Hannibal in mainland Italy. By the end of the Third Punic War (149–146 BC), after more than a hundred years and the loss of many hundreds of thousands of soldiers from both sides, Rome had conquered Carthage's empire, completely destroyed the city, and become the most powerful state of the Western Mediterranean.
6
+
7
+ With the end of the Macedonian Wars – which ran concurrently with the Punic Wars – and the defeat of the Seleucid King Antiochus III the Great in the Roman–Seleucid War (Treaty of Apamea, 188 BC) in the eastern Mediterranean, Rome emerged as the dominant Mediterranean power and one of the most powerful cities in classical antiquity. The Roman victories over Carthage in these wars gave Rome a preeminent status it would retain until the 5th century AD.
8
+
9
+ The main source for almost every aspect of the Punic Wars[note 1] is the historian Polybius (c. 200 – c. 118 BC), a Greek sent to Rome in 167 BC as a hostage.[2] His works include a now-lost manual on military tactics,[3] but he is best known for The Histories, written sometime after 146 BC.[4][5] Polybius's work is considered broadly objective and largely neutral between the Carthaginian and Roman points of view.[6][7] Polybius was an analytical historian and wherever possible personally interviewed participants, from both sides, in the events he wrote about.[8][9][10] He accompanied the Roman general Scipio Aemilianus during his campaign in North Africa which resulted in a Roman victory in the Third Punic War.[11]
10
+
11
+ The accuracy of Polybius's account has been much debated over the past 150 years, but the modern consensus is to accept it largely at face value, and the details of the war in modern sources are largely based on interpretations of Polybius's account.[2][12][13] The modern historian Andrew Curry sees Polybius as being "fairly reliable";[14] while Craige Champion describes him as "a remarkably well-informed, industrious, and insightful historian".[15]
12
+
13
+ Other, later, ancient histories of the war exist, although often in fragmentary or summary form.[16] Modern historians usually take into account the writings of various Roman annalists, some contemporary; the Sicilian Greek Diodorus Siculus; the later Roman historians Livy (who relied heavily on Polybius[17]), Plutarch, Appian (whose account of the Third Punic War is especially valuable[18]) and Dio Cassius.[19] The classicist Adrian Goldsworthy states "Polybius' account is usually to be preferred when it differs with any of our other accounts".[note 2][9] Other sources include inscriptions, archaeological evidence and empirical evidence from reconstructions such as the trireme Olympias.[20]
14
+
15
+ The Roman Republic had been aggressively expanding in the southern Italian mainland for a century before the First Punic War.[21] It had conquered peninsular Italy south of the River Arno by 272 BC, when the Greek cities of southern Italy (Magna Graecia) submitted at the conclusion of the Pyrrhic War.[22] During this period Carthage, with its capital in what is now Tunisia, had come to dominate southern Spain, much of the coastal regions of North Africa, the Balearic Islands, Corsica, Sardinia, and the western half of Sicily in a military and commercial empire.[23]
16
+
17
+ Beginning in 480 BC, Carthage had fought a series of inconclusive wars against the Greek city states of Sicily, led by Syracuse.[24] By 264 BC Carthage and Rome were the preeminent powers in the western Mediterranean.[25] The two states had several times asserted their mutual friendship via formal alliances: in 509 BC, 348 BC and around 279 BC. Relationships were good, with strong commercial links. During the Pyrrhic War of 280–275 BC, against a king of Epirus who alternately fought Rome in Italy and Carthage on Sicily, Carthage provided materiel to the Romans and on at least one occasion used its navy to ferry a Roman force.[26][27]
18
+
19
+ Rome's expansion into southern Italy probably made it inevitable that it would eventually clash with Carthage over Sicily on some pretext. The immediate cause of the war was the issue of control of the Sicilian town of Messana (modern Messina).[28] In 264 BC Carthage and Rome went to war, starting the First Punic War.[29]
20
+
21
+ Most male Roman citizens were eligible for military service and would serve as infantry, a better-off minority providing a cavalry component. Traditionally, when at war the Romans would raise two legions, each of 4,200 infantry[note 3] and 300 cavalry. A few infantry served as javelin-armed skirmishers. The balance were equipped as heavy infantry, with body armour, a large shield and short thrusting swords. They were divided into three ranks, of which the front rank also carried two javelins, while the second and third ranks had a thrusting spear instead. Both legionary sub-units and individual legionaries fought in relatively open order. It was the long-standing Roman procedure to elect two men each year, known as consuls, to each lead an army. An army was usually formed by combining a Roman legion with a similarly sized and equipped legion provided by their Latin allies.[32][33]
22
+
23
+ Carthaginian citizens only served in their army if there was a direct threat to the city. When they did they fought as well-armoured heavy infantry armed with long thrusting spears, although they were notoriously ill-trained and ill-disciplined. In most circumstances Carthage recruited foreigners to make up its army. Many would be from North Africa, a majority during the First Punic War, which provided several types of fighters including: close-order infantry equipped with large shields, helmets, short swords and long thrusting spears; javelin-armed light infantry skirmishers; close-order shock cavalry carrying spears; and light cavalry skirmishers who threw javelins from a distance and avoided close combat.[34][35][36] Both Spain and Gaul provided large numbers of experienced infantry, especially during the Second Punic War; they were mostly unarmoured troops who would charge ferociously, but had a reputation for breaking off if a combat was protracted.[37][38][39] The close order Libyan infantry and the citizen-militia would fight in a tightly packed formation known as a phalanx.[35] On occasion some of the infantry would wear captured Roman armour, especially among Hannibal's troops.[40] Slingers were frequently recruited from the Balearic Islands.[41][42] The Carthaginians also employed war elephants; North Africa had indigenous African forest elephants at the time.[note 4][38][44]
24
+
25
+ Garrison duty and land blockades were the most common operations for both armies.[45][46] When armies were campaigning, surprise attacks, ambushes and stratagems were common.[35][47] More formal battles were usually preceded by the two armies camping a mile or two apart (2–3 km) for days or weeks, sometimes forming up in battle order each day. If neither commander could see an advantage, both sides might march off without engaging.[48] Forming up in battle order was a complicated and premeditated affair, which took several hours. Infantry were usually positioned in the centre of the battle line, with light infantry skirmishers to their front and cavalry on each flank.[49] Many battles were decided when one side's infantry force was attacked in the flank or rear and they were partially or wholly enveloped.[35]
26
+
27
+ Quinqueremes, meaning "five-oarsmen",[50] provided the workhorses of the Roman and Carthaginian fleets throughout the Punic Wars.[51] So ubiquitous was the type that Polybius uses it as a shorthand for "warship" in general.[52] A quinquereme carried a crew of 300: 280 oarsmen and 20 deck crew and officers.[53] It would also normally carry a complement of 40 marines;[54] if battle was thought to be imminent, this would be increased to as many as 120.[55][56] In 260 BC the Romans set out to construct a fleet and used a shipwrecked Carthaginian quinquereme as a blueprint for their own.[57]
28
+
29
+ As novice shipwrights, the Romans built copies that were heavier than the Carthaginian vessels, and so slower and less manoeuvrable.[58] Getting the oarsmen to row as a unit, let alone to execute more complex battle manoeuvres, required long and arduous training.[59] At least half of the oarsmen would need to have had some experience if the ship was to be handled effectively.[60] As a result, the Romans were initially at a disadvantage against the more experienced Carthaginians. To counter this, the Romans introduced the corvus, a bridge 1.2 metres (4 feet) wide and 11 metres (36 feet) long, with a heavy spike on the underside, which was designed to pierce and anchor into an enemy ship's deck.[55] This allowed Roman legionaries acting as marines to board enemy ships and capture them, rather than employing the previously traditional tactic of ramming.[61]
30
+
31
+ All warships were equipped with rams, a triple set of 60-centimetre-wide (2 ft) bronze blades weighing up to 270 kilograms (600 lb) positioned at the waterline. In the century prior to the Punic Wars, boarding had become increasingly common and ramming had declined, as the larger and heavier vessels adopted in this period lacked the speed and manoeuvrability necessary to ram, while their sturdier construction reduced the ram's effect even in case of a successful attack. The Roman adaptation of the corvus was a continuation of this trend and compensated for their initial disadvantage in ship-manoeuvring skills. The added weight in the prow compromised both the ship's manoeuvrability and its seaworthiness, and in rough sea conditions the corvus became useless; part way through the First Punic War the Romans ceased using it.[61][62][63]
32
+
33
+ The war began with the Romans gaining a foothold on Sicily at Messana (modern Messina).[64] The Romans then pressed Syracuse, the only significant independent power on the island, into allying with them[65] and laid siege to Carthage's main base at Akragas.[66] A large Carthaginian army attempted to lift the siege in 262 BC, but was heavily defeated at the Battle of Akragas. That night the Carthaginian garrison escaped and the Romans seized the city and its inhabitants, selling 25,000 of them into slavery.[67] The Romans then built a navy to challenge Carthage's,[68] and using the corvus inflicted several defeats.[69][70][71] A Carthaginian base on Corsica was seized, but an attack on Sardinia was repulsed; the base on Corsica was then lost.[72]
34
+
35
+ Taking advantage of their naval victories the Romans launched an invasion of North Africa,[73] which the Carthaginians intercepted. At the Battle of Cape Ecnomus the Carthaginians were again beaten;[74] this was possibly the largest naval battle in history by the number of combatants involved.[75][76][77] The invasion initially went well and in 255 BC the Carthaginians sued for peace; the proposed terms were so harsh they fought on,[78] defeating the invaders.[79] The Romans sent a fleet to evacuate their survivors and the Carthaginians opposed it at the Battle of Cape Hermaeum off Africa; the Carthaginians were heavily defeated.[80] The Roman fleet, in turn, was devastated by a storm while returning to Italy, losing most of its ships and over 100,000 men.[80][81][82]
36
+
37
+ The war continued, with neither side able to gain a decisive advantage.[83] The Carthaginians attacked and recaptured Akragas in 255 BC, but not believing they could hold the city, they razed and abandoned it.[84][85] The Romans rapidly rebuilt their fleet, adding 220 new ships, and captured Panormus (modern Palermo) in 254 BC.[86] The next year they lost another 150 ships to a storm.[87] In 251 BC the Carthaginians attempted to recapture Panormus, but were defeated in a battle outside the walls.[88][89] Slowly the Romans had occupied most of Sicily; in 249 BC they besieged the last two Carthaginian strongholds – in the extreme west.[90] They also launched a surprise attack on the Carthaginian fleet, but were defeated at the Battle of Drepana.[91] The Carthaginians followed up their victory and most of the remaining Roman warships were lost at the Battle of Phintias.[92] After several years of stalemate,[93] the Romans rebuilt their fleet again in 243 BC[94] and effectively blockaded the Carthaginian garrisons.[95] Carthage assembled a fleet which attempted to relieve them, but it was destroyed at the Battle of the Aegates Islands in 241 BC,[96][97] forcing the cut-off Carthaginian troops on Sicily to negotiate for peace.[98][95]
38
+
39
+ A treaty was agreed. By its terms Carthage paid 3,200 talents of silver[note 5][note 6] in reparations and Sicily was annexed as a Roman province.[96] Henceforth Rome considered itself the leading military power in the western Mediterranean, and increasingly the Mediterranean region as a whole. The immense effort of repeatedly building large fleets of galleys during the war laid the foundation for Rome's maritime dominance for 600 years.[100]
40
+
41
+ The Mercenary War began in 241 BC as a dispute over the payment of wages owed to 20,000 foreign soldiers who had fought for Carthage on Sicily during the First Punic War. When a compromise seemed to have been reached, the army erupted into full-scale mutiny under the leadership of Spendius and Matho. 70,000 Africans from Carthage's oppressed dependent territories flocked to join them, bringing supplies and finance. War-weary Carthage fared poorly in the initial engagements, especially under the generalship of Hanno. Hamilcar Barca, a veteran of the campaigns in Sicily (and father of Hannibal Barca), was given joint command of the army in 240 BC, and supreme command in 239 BC. He campaigned successfully, initially demonstrating leniency in an attempt to win the rebels over. To prevent this, in 240 BC Spendius tortured 700 Carthaginian prisoners to death, and henceforth the war was pursued with great brutality.
42
+
43
+ By early 237 BC, after numerous setbacks, the rebels were defeated and their cities brought back under Carthaginian rule. An expedition was prepared to reoccupy Sardinia, where mutinous soldiers had slaughtered all Carthaginians. The Roman Senate stated they considered the preparation of this force an act of war, and demanded Carthage cede Sardinia and Corsica, and pay an additional 1,200-talent indemnity.[101][102][note 7] Weakened by 30 years of war, Carthage agreed rather than again enter into conflict with Rome.[103] Polybius considered this "contrary to all justice"[101] and modern historians have variously described the Romans' behaviour as "unprovoked aggression and treaty-breaking",[101] "shamelessly opportunistic"[104] and an "unscrupulous act".[105] These events fuelled resentment in Carthage, which was not reconciled to Rome's perception of its situation. This breach of the recently signed treaty has been considered to be the single greatest cause of war with Carthage breaking out again in 218 BC in the Second Punic War.[106][107][108]
44
+
45
+ With the suppression of the rebellion, Hamilcar Barca understood that Carthage needed to strengthen its economic and military base if it were to again confront Rome.[110] After the First Punic War, Carthaginian possessions in Iberia (modern Spain and Portugal) were limited to a handful of prosperous coastal cities.[111] Hamilcar took the army which he had led to victory in the Mercenary War and led it to Iberia in 237 BC. He carved out a quasi-monarchical, autonomous state in south-east Iberia.[112] This gave Carthage the silver mines, agricultural wealth, manpower, military facilities such as shipyards, and territorial depth to stand up to future Roman demands with confidence.[113][114] Hamilcar ruled as a viceroy and was succeeded by his son-in-law, Hasdrubal, in the early 220s BC and then his son, Hannibal, in 221 BC.[115] In 226 BC the Ebro Treaty was agreed, specifying the Ebro River as the northern boundary of the Carthaginian sphere of influence.[116] A little later Rome made a separate treaty with the city of Saguntum, well south of the Ebro.[117]
46
+
47
+ In 219 BC a Carthaginian army under Hannibal besieged, captured and sacked Saguntum.[106][118] In spring 218 BC Rome declared war on Carthage.[119] There were three main military theatres in the war: Italy, where Hannibal defeated the Roman legions repeatedly, with occasional subsidiary campaigns in Sicily and Greece; Iberia, where Hasdrubal, a younger brother of Hannibal, defended the Carthaginian colonial cities with mixed success until moving into Italy; and Africa, where the war was decided.
48
+
49
+ In 218 BC there was some naval skirmishing in the waters around Sicily. The Romans beat off a Carthaginian attack[120][121] and captured the island of Malta.[122] In Cisalpine Gaul (modern northern Italy), the major Gallic tribes attacked the Roman colonies, causing the Romans to flee to Mutina (modern Modena), which the Gauls then besieged. A Roman relief army raised the siege, but was then ambushed and besieged itself.[123] An army had been raised to campaign in Iberia under the brothers Gnaeus and Publius Scipio, and the Roman Senate detached one Roman and one allied legion from it to send to the region. The Scipios had to raise fresh troops to replace these and thus could not set out for Iberia until September.[124]
50
+
51
+ Meanwhile, Hannibal assembled a Carthaginian army in New Carthage (modern Cartagena) and led it northwards along the coast in May or June. It entered Gaul and took an inland route, to avoid the Roman allies along the coast.[125] At the Battle of Rhone Crossing, Hannibal defeated a force of local Allobroges that sought to bar his way.[126] A Roman fleet carrying the Scipio brothers' army landed at Rome's ally Massalia (modern Marseille) at the mouth of the Rhone.[127] Hannibal evaded the Romans and Gnaeus Scipio continued to Iberia with the Roman army;[128][129] Publius returned to Rome.[129] The Carthaginians reached the foot of the Alps by late autumn[125] and crossed them, surmounting the difficulties of climate, terrain[125] and the guerrilla tactics of the native tribes. The exact route is disputed. Hannibal arrived with 20,000 infantry, 6,000 cavalry, and an unknown number of elephants[65] in what is now Piedmont, northern Italy. The Romans were still in their winter quarters. His surprise entry into the Italian peninsula led to the termination of Rome's planned campaign for the year, an invasion of Africa.[130]
52
+
53
+ Hannibal's first action was to take the chief city of the hostile Taurini (in the area of modern-day Turin). His army then routed the cavalry and light infantry of the Romans under Publius Scipio at the Battle of Ticinus.[131] As a result, most of the Gallic tribes declared for the Carthaginian cause, and Hannibal's army grew to over 40,000 men.[132] A large Roman army under the command of Sempronius Longus was lured into combat by Hannibal at the Battle of the Trebia, encircled and destroyed.[133] Only 10,000 Romans out of 42,000 were able to cut their way to safety. Gauls now joined Hannibal's army in large numbers, bringing it up to 60,000 men.[132] The Romans stationed an army at Arretium and one on the Adriatic coast to block Hannibal's advance into central Italy.[134]
54
+
55
+ In early spring 217 BC, the Carthaginians crossed the Apennines unopposed, taking a difficult but unguarded route.[135] Hannibal attempted without success to draw the main Roman army under Gaius Flaminius into a pitched battle by devastating the area they had been sent to protect.[136] Hannibal then cut off the Roman army from Rome, which provoked Flaminius into a hasty pursuit without proper reconnaissance.[137] Then, in a defile on the shore of Lake Trasimenus, Hannibal set an ambush[137] and in the Battle of Lake Trasimene completely defeated the Roman army and killed Flaminius.[137] Some 15,000 Romans were killed and 15,000 taken prisoner; 4,000 Roman cavalry from their other army were also engaged and wiped out.[138] The prisoners were sold as slaves if they were Romans, but released if they were from one of Rome's Latin allies.[139] Hannibal hoped that some of these allies could be persuaded to defect, and marched south in the hope of winning over allies among the ethnic Greek and Italic city states.[134]
56
+
57
+ The Romans, panicked by these heavy defeats, appointed Quintus Fabius Maximus as dictator.[139] Fabius invented the Fabian strategy of avoiding open battle with his opponent while constantly skirmishing with small detachments of the enemy. This was not popular among the soldiers, the Roman public or the Roman elite, since he avoided battle while Italy was being devastated by the enemy.[134] Hannibal marched through the richest and most fertile provinces of Italy, hoping that the devastation would draw Fabius into battle, but Fabius refused.[140]
58
+
59
+ At the elections of 216 BC the more aggressively minded Gaius Terentius Varro and Lucius Aemilius Paullus were elected as consuls.[141] The Roman Senate authorised the raising of double-sized armies, a force of 86,000 men, the largest in Roman history up to that point.[141] Paullus and Varro marched southward to confront Hannibal, who accepted battle on the open plain near Cannae. In the Battle of Cannae the Roman legions forced their way through Hannibal's deliberately weak centre, but the Libyans on the wings swung around their advance, menacing their flanks.[142] Hasdrubal led the Carthaginian cavalry on the left wing and routed the Roman cavalry opposite, then swept around the rear of the Romans to attack the cavalry on the other wing. He then charged into the legions from behind.[142] As a result, the Roman infantry was surrounded with no means of escape.[142] At least 67,500 Romans were killed or captured.[142]
60
+
61
+ Gnaeus Scipio continued on from Massalia in the summer of 218 BC to Iberia (modern Spain and Portugal), where he won support among the local tribes.[128] The Carthaginian commander in the area refused to wait for reinforcements, attacked Scipio at the Battle of Cissa in late 218 BC, and was defeated.[128][143] In 217 BC, the Carthaginians moved to engage the combined Roman and Massalian fleet at the Battle of Ebro River. The 40 Carthaginian and Iberian vessels were beaten by 55 Roman and Massalian ships in the second naval engagement of the war, with 29 Carthaginian ships lost. The Carthaginian forces retreated, but the Romans remained confined to the area between the Ebro and the Pyrenees.[143]
62
+
63
+ The Roman army in Spain was preventing the Carthaginians from sending reinforcements from Iberia to Hannibal or to the insurgent Gauls in northern Italy.[143] Hasdrubal marched into Roman territory in 215 BC, besieged a pro-Roman town and offered battle at Dertosa. After a hard-fought battle, he was defeated although both sides suffered heavy losses.[144] Hasdrubal was now unable to reinforce Hannibal in Italy.[144][128]
64
+
65
+ The Carthaginians suffered a wave of defections of local Celtiberian tribes to Rome.[128] The Scipio brothers captured Saguntum in 212 BC.[144] In 211 BC, they hired 20,000 Celtiberian mercenaries to reinforce their army.[144] Observing that the three Carthaginian armies were deployed apart from each other, the Scipios split their forces.[144] Publius moved to attack Mago Barca near Castulo, while Gnaeus marched on Hasdrubal.[144] This strategy resulted in the Battle of Castulo and the Battle of Ilorca, usually combined as the Battle of the Upper Baetis.[128][144] Both battles ended in complete defeat for the Romans, as Hasdrubal had bribed the Romans' mercenaries to desert, and both of the Scipio brothers were killed.[128][144] The Romans retreated to their coastal stronghold north of the Ebro, from which the Carthaginians failed to expel them.[144][128] Claudius Nero brought over reinforcements in 210 BC and stabilized the situation.[144]
66
+
67
+ In 210 BC, Scipio Africanus arrived in Spain with further reinforcements.[145] In a carefully planned assault in 209 BC, he captured the lightly-defended centre of Carthaginian power in Spain, Cartago Nova.[146][145] Scipio had the population slaughtered and a vast booty of gold, silver and siege artillery was taken.[147][145] He liberated the Iberian hostages kept by the Carthaginians to ensure the loyalty of the Iberian tribes,[147][145] although many of them were subsequently to fight against the Romans.[145]
68
+
69
+ In 206 BC, at the Battle of Ilipa, Scipio, with 48,000 men, half Italians and half Iberians, defeated a Carthaginian army of 54,500 men and 32 elephants under the command of Mago Barca, Hasdrubal Gisco and Masinissa. This sealed the fate of the Carthaginian presence in Iberia.[148][145] It was followed by the Roman capture of Gades after the city rebelled against Carthaginian rule.[149]
70
+
71
+ Later that year a dangerous mutiny broke out among Roman troops at their camp at Sucro. It initially attracted support from Iberian leaders, disappointed that Roman forces had remained in the peninsula after the expulsion of the Carthaginians. It was effectively put down by Scipio Africanus. In 205 BC a last attempt was made by Mago to recapture New Carthage when the Roman occupiers were shaken by another mutiny and an Iberian uprising, but he was repulsed. Mago left Spain for Italy with his remaining forces.[147] In 203 BC Carthage succeeded in recruiting at least 4,000 mercenaries from Iberia, despite Rome's nominal control.[150]
72
+
73
+ In 213 BC Syphax, a powerful Numidian king in North Africa,[144] declared for Rome. Rome sent advisers to train his soldiers[144] and he waged war against the Carthaginian ally Gala.[144] In 206 BC the Carthaginians ended this drain on their resources by dividing several Numidian kingdoms with him. One of those disinherited was the Numidian prince Masinissa, who was thus driven into the arms of Rome.[151]
74
+
75
+ In 205 BC Scipio Africanus was given command of the legions in Sicily and allowed to enroll volunteers for his plan to end the war by an invasion of Africa.[152] After landing in Africa in 204 BC, he was joined by Masinissa and a force of Numidian cavalry.[153] Scipio then besieged the city of Utica but failed to take it.[154] When a Carthaginian and Numidian relief army under Hasdrubal Barca and Syphax moved to confront him, he mounted a surprise attack and destroyed it.[155] In 203 BC Scipio confronted a second Carthaginian army and destroyed it at the Battle of the Great Plains. King Syphax was pursued and taken prisoner at the Battle of Cirta, and Masinissa seized a large part of his kingdom with Roman help.[156]
76
+
77
+ Rome and Carthage entered into peace negotiations, and Carthage recalled Hannibal from Italy.[157] Largely due to mutual mistrust, the negotiations came to nothing.[158] Hannibal was placed in command of another army, formed from his veterans from Italy and newly raised troops from Africa, but with few cavalry.[159] The decisive Battle of Zama followed in October 202 BC.[160] Unlike most battles of the Second Punic War, the Romans had superiority in cavalry and the Carthaginians in infantry.[159] Hannibal attempted to use 80 elephants to break into the Roman infantry formation, but the Romans countered them effectively and the elephants routed back through the Carthaginian ranks.[161] The Roman and allied Numidian cavalry drove the Carthaginian cavalry from the field. The two sides' infantry fought inconclusively until the Roman cavalry returned and attacked the Carthaginian rear. The Carthaginian formation collapsed; Hannibal was one of the few to escape the field.[160]
78
+
79
+ The peace treaty imposed on the Carthaginians stripped them of all of their overseas territories, and some of their African ones. An indemnity of 10,000 silver talents[note 8] was to be paid over 50 years. Hostages were taken. Carthage was forbidden to possess war elephants and its fleet was restricted to 10 warships. It was prohibited from waging war outside Africa, and in Africa only with Rome's express permission. Many senior Carthaginians wanted to reject it, but Hannibal spoke strongly in its favour and it was accepted in spring 201 BC.[162] Henceforth it was clear that Carthage was politically subordinate to Rome.[163]
80
+
81
+ At the end of the war, Masinissa emerged as by far the most powerful ruler among the Numidians.[164] Over the following 48 years he repeatedly took advantage of Carthage's inability to protect its possessions. Whenever Carthage petitioned Rome for redress, or permission to take military action, Rome backed its ally, Masinissa, and refused.[165] Masinissa's seizures of and raids into Carthaginian territory became increasingly flagrant. In 151 BC Carthage raised a large army, the treaty notwithstanding, and counterattacked the Numidians. The campaign ended in disaster and the army surrendered.[166] Carthage had paid off its indemnity and was prospering economically, but was no military threat to Rome.[167][168] Elements in the Roman Senate had long wished to destroy Carthage, and, with the breach of the treaty as a casus belli, war was declared in 149 BC.[166]
82
+
83
+ In 149 BC a Roman army of approximately 50,000 men, jointly commanded by both consuls, landed near Utica, 35 kilometres (22 mi) north of Carthage.[169] Rome demanded that if war were to be avoided, the Carthaginians must hand over all of their armaments. Vast amounts of materiel were delivered, including 200,000 sets of armour, 2,000 catapults and a large number of warships.[170] This done, the Romans demanded that the Carthaginians burn their city and relocate at least 10 miles (16 km) from the sea; the Carthaginians broke off negotiations and set about recreating their armoury.[171]
84
+
85
+ As well as manning the walls of Carthage, the Carthaginians formed a field army under Hasdrubal, which was based 25 kilometres (16 mi) to the south.[173][174] The Roman army moved to lay siege to Carthage, but its walls were so strong and its citizen-militia so determined that it was unable to make any impact, while the Carthaginians struck back effectively. Their field army raided the Roman lines of communication,[174] and in 148 BC Carthaginian fire ships destroyed many Roman vessels. The main Roman camp was in a swamp, which caused an outbreak of disease during the summer.[175] The Romans moved their camp, and their ships, further away, so that they were blockading rather than closely besieging the city.[176] The war dragged on into 147 BC.[174]
86
+
87
+ In early 147 BC Scipio Aemilianus, an adopted grandson of Scipio Africanus who had distinguished himself during the previous two years' fighting, was elected consul and took control of the war.[166][177] The Carthaginians continued to resist vigorously: they constructed warships and during the summer twice gave battle to the Roman fleet, losing both times.[177] The Romans launched an assault on the walls; after confused fighting they broke into the city but, lost in the dark, withdrew. Hasdrubal and his army withdrew into the city to reinforce the garrison.[178] Hasdrubal had Roman prisoners tortured to death on the walls, in view of the Roman army. This reinforced the will to resist among the Carthaginian citizens; from this point there could be no possibility of negotiation. Some members of the city council denounced his actions, and Hasdrubal had them too put to death and took over control of the city.[177][179] With no Carthaginian army in the field, those cities which had remained loyal went over to the Romans or were captured.[180]
88
+
89
+ Scipio moved back to a close blockade of the city, and built a mole which cut off supply from the sea.[181] In the spring of 146 BC the Roman army managed to secure a foothold on the fortifications near the harbour.[182][183] When the main assault began it quickly captured the city's main square, where the legions camped overnight.[184] The next morning the Romans systematically worked their way through the residential part of the city, killing everyone they encountered and firing the buildings behind them.[182] At times the Romans progressed from rooftop to rooftop, to prevent missiles being hurled down on them.[184] It took six days to clear the city of resistance, and on the last day Scipio agreed to accept prisoners. The last holdouts, including Roman deserters in Carthaginian service, fought on from the Temple of Eshmoun and burnt it down around themselves when all hope was gone.[185] The 50,000 surviving Carthaginians, a small part of the pre-war population, were sold into slavery.[186] The notion that Roman forces then sowed the city with salt to ensure that nothing would grow there again is a 20th-century invention.[187]
90
+
91
+ The remaining Carthaginian territories were annexed by Rome and reconstituted to become the Roman province of Africa. Numerous significant Punic cities, such as those in Mauretania, were taken over and rebuilt by the Romans.[188] Utica, the Punic city which changed loyalties at the beginning of the siege, became the capital of the Roman province of Africa.[189] A century later, the site of Carthage was rebuilt as a Roman city by Julius Caesar, and would become one of the main cities of Roman Africa by the time of the Empire.
92
+
93
+ Rome still exists as the capital of Italy; the ruins of Carthage lie 16 kilometres (10 mi) east of modern Tunis on the North African coast.[190]
94
+
en/5328.html.txt ADDED
@@ -0,0 +1,58 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+ The second (symbol: s, abbreviation: sec) is the base unit of time in the International System of Units (SI) (French: Système International d’unités), commonly understood and historically defined as 1⁄86400 of a day – this factor derives from the division of the day first into 24 hours, then each hour into 60 minutes and finally each minute into 60 seconds. Analog clocks and watches often have sixty tick marks on their faces, representing seconds (and minutes), and a "second hand" to mark the passage of time in seconds. Digital clocks and watches often have a two-digit seconds counter. The second is also part of several other units of measurement like meters per second for velocity, meters per second per second for acceleration, and cycles per second for frequency.
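+
+ As a quick arithmetic check of that factor, here is a minimal Python sketch (illustrative only):
+
+ # Historical definition: one second is 1/86400 of a day.
+ hours_per_day = 24
+ minutes_per_hour = 60
+ seconds_per_minute = 60
+ seconds_per_day = hours_per_day * minutes_per_hour * seconds_per_minute
+ assert seconds_per_day == 86_400
+ print(f"One second is 1/{seconds_per_day} of a day")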
4
+
5
+ Although the historical definition of the unit was based on this division of the Earth's rotation cycle, the formal definition in the International System of Units (SI) is a much steadier timekeeper: it is defined by taking the fixed numerical value of the caesium frequency ∆νCs, the unperturbed ground-state hyperfine transition frequency of the caesium-133 atom, to be 9192631770 when expressed in the unit Hz, which is equal to s⁻¹.[1][2]
6
+ Because the Earth's rotation varies and is also slowing ever so slightly, a leap second is periodically added to clock time[nb 1] to keep clocks in sync with Earth's rotation.
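+
+ The exactness of the caesium definition can be illustrated with exact rational arithmetic; a minimal Python sketch:
+
+ from fractions import Fraction
+
+ delta_nu_cs = 9_192_631_770            # Hz, fixed exactly by the SI definition
+ period = Fraction(1, delta_nu_cs)      # duration of one period, in seconds
+ assert delta_nu_cs * period == 1       # 9_192_631_770 periods last exactly 1 s
+ print(f"One caesium period lasts about {float(period):.3e} s")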
7
+
8
+ Multiples of seconds are usually counted in hours and minutes. Fractions of a second are usually counted in tenths or hundredths. In scientific work, small fractions of a second are counted in milliseconds (thousandths), microseconds (millionths), nanoseconds (billionths), and sometimes smaller units of a second. An everyday experience with small fractions of a second is a 1-gigahertz microprocessor which has a cycle time of 1 nanosecond. Camera shutter speeds are often expressed in fractions of a second, such as ​1⁄30 second or ​1⁄1000 second.
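+
+ The cycle-time and shutter-speed figures above can be reproduced directly; a short sketch:
+
+ freq_hz = 1e9                          # a 1-gigahertz processor clock
+ cycle_ns = 1 / freq_hz * 1e9           # cycle time in nanoseconds
+ print(f"1 GHz cycle time: {cycle_ns:.0f} ns")   # -> 1 ns
+ for denom in (30, 1000):               # common camera shutter speeds
+     print(f"1/{denom} s = {1000 / denom:.3f} ms")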
9
+
10
+ Sexagesimal divisions of the day from a calendar based on astronomical observation have existed since the third millennium BC, though they were not seconds as we know them today[citation needed]. Small divisions of time could not be measured back then, so such divisions were mathematically derived. The first timekeepers that could count seconds accurately were pendulum clocks invented in the 17th century. Starting in the 1950s, atomic clocks became better timekeepers than earth's rotation, and they continue to set the standard today.
11
+
12
+ A mechanical clock, one which does not depend on measuring the relative rotational position of the earth, keeps uniform time called mean time, within whatever accuracy is intrinsic to it. That means that every second, minute and every other division of time counted by the clock will be the same duration as any other identical division of time. But a sundial, which measures the relative position of the sun in the sky (called apparent time), does not keep uniform time. The time kept by a sundial varies by time of year, meaning that seconds, minutes and every other division of time is of a different duration at different times of the year. The time of day measured with mean time versus apparent time may differ by as much as 15 minutes, but a single day will differ from the next by only a small amount; 15 minutes is a cumulative difference over a part of the year. The effect is due chiefly to the obliqueness of earth's axis with respect to its orbit around the sun.
13
+
14
+ The difference between apparent solar time and mean time was recognized by astronomers since antiquity, but prior to the invention of accurate mechanical clocks in the mid-17th century, sundials were the only reliable timepieces, and apparent solar time was the generally accepted standard.
15
+
16
+ Fractions of a second are usually denoted in decimal notation, for example 2.01 seconds, or two and one hundredth seconds. Multiples of seconds are usually expressed as minutes and seconds, or hours, minutes and seconds of clock time, separated by colons, such as 11:23:24, or 45:23 (the latter notation can give rise to ambiguity, because the same notation is used to denote hours and minutes). It rarely makes sense to express longer periods of time like hours or days in seconds, because they are awkwardly large numbers. For the metric unit of second, there are decimal prefixes representing 10⁻²⁴ to 10²⁴ seconds.
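+
+ Converting a plain count of seconds into the colon-separated notation is a pair of divisions with remainder; a minimal sketch (the helper name to_hms is arbitrary):
+
+ def to_hms(total_seconds: int) -> str:
+     hours, rest = divmod(total_seconds, 3600)
+     minutes, seconds = divmod(rest, 60)
+     return f"{hours}:{minutes:02d}:{seconds:02d}"
+
+ print(to_hms(11 * 3600 + 23 * 60 + 24))   # -> 11:23:24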
17
+
18
+ Some common units of time in seconds are: a minute is 60 seconds; an hour is 3,600 seconds; a day is 86,400 seconds; a week is 604,800 seconds; a year (other than leap years) is 31,536,000 seconds; and a (Gregorian) century averages 3,155,695,200 seconds; with all of the above excluding any possible leap seconds.
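+
+ These equivalences follow from simple multiplication; a short sketch verifying the figures above (the mean Gregorian year is taken as 365.2425 days):
+
+ minute = 60
+ hour = 60 * minute                     # 3,600 s
+ day = 24 * hour                        # 86,400 s
+ week = 7 * day                         # 604,800 s
+ common_year = 365 * day                # 31,536,000 s
+ gregorian_century = round(100 * 365.2425 * day)
+ assert gregorian_century == 3_155_695_200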
19
+
20
+ Some common events in seconds are: a stone falls about 4.9 meters from rest in one second; a pendulum of length about one meter has a swing of one second, so pendulum clocks have pendulums about a meter long; the fastest human sprinters run 10 meters in a second; an ocean wave in deep water travels about 23 meters in one second; sound travels about 343 meters in one second in air; light takes 1.3 seconds to reach Earth from the surface of the Moon, a distance of 384,400 kilometers.
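+
+ The first two figures above can be checked from elementary mechanics; a rough sketch, assuming g ≈ 9.81 m/s² (a one-second swing means a half-period of one second, so T = 2 s):
+
+ import math
+
+ g = 9.81                               # m/s^2, standard gravity (approx.)
+ print(f"fall from rest in 1 s: {0.5 * g * 1.0**2:.2f} m")   # ~4.9 m
+ T = 2.0                                # period of a pendulum with a 1 s swing
+ L = g * (T / (2 * math.pi)) ** 2       # simple-pendulum length, T = 2*pi*sqrt(L/g)
+ print(f"seconds pendulum length: {L:.3f} m")                # ~0.994 m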
21
+
22
+ A second is part of other units, such as frequency measured in hertz (inverse seconds or second⁻¹), speed (meters per second) and acceleration (meters per second squared). The metric system unit becquerel, a measure of radioactive decay, is measured in inverse seconds. The meter is defined in terms of the speed of light and the second; definitions of the metric base units kilogram, ampere, kelvin, and candela also depend on the second. The only base unit whose definition does not depend on the second is the mole. Of the 22 named derived units of the SI, only two (radian and steradian) do not depend on the second. Many derivative units for everyday things are reported in terms of larger units of time, not seconds, such as clock time in hours and minutes, velocity of a car in kilometers per hour or miles per hour, kilowatt hours of electricity usage, and speed of a turntable in rotations per minute.
23
+
24
+ A set of atomic clocks throughout the world keeps time by consensus: the clocks "vote" on the correct time, and all voting clocks are steered to agree with the consensus, which is called International Atomic Time (TAI). TAI "ticks" atomic seconds.[3]
25
+
26
+ Civil time is defined to agree with the rotation of the earth. The international standard for timekeeping is Coordinated Universal Time (UTC). This time scale "ticks" the same atomic seconds as TAI, but inserts or omits leap seconds as necessary to correct for variations in the rate of rotation of the earth.[4]
27
+
28
+ A time scale in which the seconds are not exactly equal to atomic seconds is UT1, a form of universal time. UT1 is defined by the rotation of the earth with respect to the sun, and does not contain any leap seconds.[5] UT1 always differs from UTC by less than a second.
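+
+ As an illustration of how UTC relates to TAI, a minimal sketch; the 37 s value is the TAI − UTC offset in force since the leap second at the end of 2016 and is assumed here, since future leap seconds cannot be computed in advance:
+
+ import datetime
+
+ TAI_MINUS_UTC = datetime.timedelta(seconds=37)   # assumed constant offset
+ utc = datetime.datetime(2020, 6, 1, 12, 0, 0)
+ tai = utc + TAI_MINUS_UTC
+ print(tai)                                       # 2020-06-01 12:00:37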
29
+
30
+ While they are not yet part of any timekeeping standard, optical lattice clocks with frequencies in the visible light spectrum now exist and are the most accurate timekeepers of all. A strontium clock with frequency 430 THz, in the red range of visible light, now holds the accuracy record: it will gain or lose less than a second in 15 billion years, which is longer than the estimated age of the universe. Such a clock can measure a change in its elevation of as little as 2 cm by the change in its rate due to gravitational time dilation.[6]
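+
+ A rough consistency check of the two claims above: losing at most one second in 15 billion years corresponds to a fractional instability of about 2 × 10⁻¹⁸, which is comparable to the gravitational frequency shift g·Δh/c² over a 2 cm height change; a sketch:
+
+ seconds_per_year = 365.25 * 86_400
+ instability = 1 / (15e9 * seconds_per_year)      # ~2.1e-18
+ g, c = 9.81, 2.998e8                             # m/s^2 and m/s
+ shift_2cm = g * 0.02 / c**2                      # ~2.2e-18
+ print(f"{instability:.1e} vs {shift_2cm:.1e}")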
31
+
32
+ There have only ever been three definitions of the second: as a fraction of the day, as a fraction of an extrapolated year, and as the microwave frequency of a caesium atomic clock, and all three realize the sexagesimal division of the day inherited from ancient astronomical calendars.
33
+
34
+ Civilizations in the classic period and earlier created divisions of the calendar as well as arcs using a sexagesimal system of counting, so at that time the second was a sexagesimal subdivision of the day (ancient second = day/60×60), not of the hour like the modern second (= hour/60×60). Sundials and water clocks were among the earliest timekeeping devices, and units of time were measured in degrees of arc. Conceptual units of time smaller than realizable on sundials were also used.
35
+
36
+ There are references to 'second' as part of a lunar month in the writings of natural philosophers of the Middle Ages, which were mathematical subdivisions that could not be measured mechanically.[nb 2][nb 3]
37
+
38
+ The earliest mechanical clocks which appeared starting in the 14th century had displays that divided the hour into halves, thirds, quarters and sometimes even 12 parts, but never by 60. In fact, the hour was not commonly divided into 60 minutes as it was not uniform in duration. It was not practical for timekeepers to consider minutes until the first mechanical clocks that displayed minutes appeared near the end of the 16th century. Mechanical clocks kept the mean time, as opposed to the apparent time displayed by sundials.
39
+ By that time, sexagesimal divisions of time were well established in Europe.[nb 4]
40
+
41
+ The earliest clocks to display seconds appeared during the last half of the 16th century. The second became accurately measurable with the development of mechanical clocks. The earliest spring-driven timepiece with a second hand which marked seconds is an unsigned clock depicting Orpheus in the Fremersdorf collection, dated between 1560 and 1570.[9]:417–418[10] During the 3rd quarter of the 16th century, Taqi al-Din built a clock with marks every 1/5 minute.[11]
42
+ In 1579, Jost Bürgi built a clock for William of Hesse that marked seconds.[9]:105 In 1581, Tycho Brahe redesigned clocks that had displayed only minutes at his observatory so they also displayed seconds, even though those seconds were not accurate. In 1587, Tycho complained that his four clocks disagreed by plus or minus four seconds.[9]:104
43
+
44
+ In 1656, Dutch scientist Christiaan Huygens invented the first pendulum clock. It had a pendulum length of just under a meter which gave it a swing of one second, and an escapement that ticked every second. It was the first clock that could accurately keep time in seconds. By the 1730s, 80 years later, John Harrison's maritime chronometers could keep time accurate to within one second in 100 days.
45
+
46
+ In 1832, Gauss proposed using the second as the base unit of time in his millimeter-milligram-second system of units. The British Association for the Advancement of Science (BAAS) in 1862 stated that "All men of science are agreed to use the second of mean solar time as the unit of time."[12] BAAS formally proposed the CGS system in 1874, although this system was gradually replaced over the next 70 years by MKS units. Both the CGS and MKS systems used the same second as their base unit of time. MKS was adopted internationally during the 1940s, defining the second as ​1⁄86,400 of a mean solar day.
47
+
48
+ Some time in the late 1940s, quartz crystal oscillator clocks with an operating frequency of ~100 kHz advanced to keep time with accuracy better than 1 part in 10⁸ over an operating period of a day. It became apparent that a consensus of such clocks kept better time than the rotation of the Earth. Metrologists also knew that Earth's orbit around the Sun (a year) was much more stable than earth's rotation. This led to proposals as early as 1950 to define the second as a fraction of a year.
49
+
50
+ The Earth's motion was described in Newcomb's Tables of the Sun (1895), which provided a formula for estimating the motion of the Sun relative to the epoch 1900 based on astronomical observations made between 1750 and 1892.[13] This resulted in adoption of an ephemeris time scale expressed in units of the sidereal year at that epoch by the IAU in 1952.[14] This extrapolated timescale brings the observed positions of the celestial bodies into accord with Newtonian dynamical theories of their motion.[13] In 1955, the tropical year, considered more fundamental than the sidereal year, was chosen by the IAU as the unit of time. The tropical year in the definition was not measured but calculated from a formula describing a mean tropical year that decreased linearly over time.
51
+
52
+ In 1956, the second was redefined in terms of a year relative to that epoch. The second was thus defined as "the fraction ​1⁄31,556,925.9747 of the tropical year for 1900 January 0 at 12 hours ephemeris time".[13] This definition was adopted as part of the International System of Units in 1960.[15]
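+
+ A quick sanity check of that fraction; a two-line sketch:
+
+ tropical_year_1900 = 31_556_925.9747   # seconds, per the 1956 definition
+ print(tropical_year_1900 / 86_400)     # ~365.2422 mean solar days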
53
+
54
+ But even the best mechanical, electric motorized and quartz crystal-based clocks develop discrepancies, and virtually none are good enough to realize an ephemeris second. Far better for timekeeping is the natural and exact "vibration" in an energized atom. The frequency of vibration (i.e., radiation) is very specific depending on the type of atom and how it is excited. Since 1967, the second has been defined as exactly "the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom" (at a temperature of 0 K). This length of a second was selected to correspond exactly to the length of the ephemeris second previously defined. Atomic clocks use such a frequency to measure seconds by counting cycles per second at that frequency. Radiation of this kind is one of the most stable and reproducible phenomena of nature. The current generation of atomic clocks is accurate to within one second in a few hundred million years.
55
+
56
+ Atomic clocks now set the length of a second and the time standard for the world.[16]
57
+
58
+ SI prefixes are commonly used for times shorter than one second, but rarely for multiples of a second. Instead, certain non-SI units are permitted for use in SI: minutes, hours, days, and in astronomy Julian years.[17]
en/5329.html.txt ADDED
@@ -0,0 +1,33 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+ In macroeconomics, an industry is a sector that produces goods or related services within an economy.[1] The major source of revenue of a group or company is an indicator of what industry it should be classified in.[2] When a large corporate group has multiple sources of revenue generation, it is considered to be working in different industries. The manufacturing industry became a key sector of production and labour in European and North American countries during the Industrial Revolution, upsetting previous mercantile and feudal economies. This came through many successive rapid advances in technology, such as the development of steam power and the production of steel and coal.
4
+
5
+ Following the Industrial Revolution, possibly a third of the economic output came from manufacturing industries. Many developed countries and many developing/semi-developed countries (China, India etc.) depend significantly on manufacturing industry.
6
+
7
+ Slavery, the practice of utilizing forced labor to produce goods[3][failed verification] and services, has occurred since antiquity throughout the world as a means of low-cost production. It typically produces goods for which profit depends on economies of scale, especially those for which labor is simple and easy to supervise.[4] International law has declared slavery illegal.[5]
8
+
9
+ Guilds, associations of artisans and merchants, oversee the production and distribution of a particular good. Guilds have their roots in the Roman Empire as collegia (singular: collegium). Membership in these early guilds was voluntary. The Roman collegia did not survive the fall of Rome.[6] In the early middle ages, guilds once again began to emerge in Europe, reaching a degree of maturity by the beginning of the 14th century.[7][need quotation to verify] While few guilds remain today[update], some modern labor structures resemble those of traditional guilds.[8] Other guilds, such as SAG-AFTRA, act as trade unions rather than as classical guilds. Professor Sheilagh Ogilvie claims that guilds negatively affected quality, skills, and innovation in areas where they were present.[9]
10
+
11
+ The industrial revolution (from the mid-18th century to the mid-19th century) saw the development and popularization of mechanized means of production as a replacement for hand production.[10] The industrial revolution played a role in the abolition of slavery in Europe and in North America.[11]
12
+
13
+ In a process dubbed tertiarization, the economic preponderance of primary and secondary industries has declined in recent centuries relative to the rising importance of tertiary industry,[12][13]
14
+ resulting in the post-industrial economy. Specialization in industry[14]
15
+ and in the classification of industry has also occurred. Thus (for example) a record producer might claim to speak on behalf of the Japanese rock industry, the recording industry, the music industry or the entertainment industry - and any formulation will sound grandiose and weighty.
16
+
17
+ The Industrial Revolution led to the development of factories for large-scale production with consequent changes in society.[15] Originally the factories were steam-powered, but later transitioned to electricity once an electrical grid was developed. The mechanized assembly line was introduced to assemble parts in a repeatable fashion, with individual workers performing specific steps during the process. This led to significant increases in efficiency, lowering the cost of the end process. Later automation was increasingly used to replace human operators. This process has accelerated with the development of the computer and the robot.
18
+
19
+ Historically certain manufacturing industries have gone into a decline due to various economic factors, including the development of replacement technology or the loss of competitive advantage. An example of the former is the decline in carriage manufacturing when the automobile was mass-produced.
20
+
21
+ A recent trend has been the migration of prosperous, industrialized nations towards a post-industrial society. This is manifested by an increase in the service sector at the expense of manufacturing, and the development of an information-based economy, the so-called informational revolution. In a post-industrial society, manufacturers relocate to more profitable locations through a process of off-shoring.
22
+
23
+ Measurements of manufacturing industries outputs and economic effect are not historically stable. Traditionally, success has been measured in the number of jobs created. The reduced number of employees in the manufacturing sector has been assumed to result from a decline in the competitiveness of the sector, or the introduction of the lean manufacturing process.
24
+
25
+ Related to this change is the upgrading of the quality of the product being manufactured. While it is possible to produce a low-technology product with low-skill labour, the ability to manufacture high-technology products well is dependent on a highly skilled staff.
26
+
27
+ An industrial society is a society driven by the use of technology to enable mass production, supporting a large population with a high capacity for division of labour. Today, industry is an important part of most societies and nations. A government must have some kind of industrial policy, regulating industrial placement, industrial pollution, financing and industrial labour.
28
+
29
+ In an industrial society, industry employs a major part of the population. This occurs typically in the manufacturing sector. A labour union is an organization of workers who have banded together to achieve common goals in key areas such as wages, hours, and other working conditions. The trade union, through its leadership, bargains with the employer on behalf of union members (rank and file members) and negotiates labour contracts with employers. This movement first rose among industrial workers.
30
+
31
+ The Industrial Revolution changed warfare, with mass-produced weaponry and supplies, machine-powered transportation, mobilization, the total war concept and weapons of mass destruction. Early instances of industrial warfare were the Crimean War and the American Civil War, but its full potential showed during the world wars. See also military-industrial complex, arms industries, military industry and modern warfare.
32
+
33
+ The twenty largest countries by industrial output (in nominal terms) at peak level as of 2018, according to the IMF and CIA World Factbook
en/533.html.txt ADDED
@@ -0,0 +1,65 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+ A ball is a round object (usually spherical, but can sometimes be ovoid)[1] with various uses. It is used in ball games, where the play of the game follows the state of the ball as it is hit, kicked or thrown by players. Balls can also be used for simpler activities, such as catch or juggling. Balls made from hard-wearing materials are used in engineering applications to provide very low friction bearings, known as ball bearings. Black-powder weapons use stone and metal balls as projectiles.
4
+
5
+ Although many types of balls are today made from rubber, this form was unknown outside the Americas until after the voyages of Columbus. The Spanish were the first Europeans to see the bouncing rubber balls (although solid and not inflated) which were employed most notably in the Mesoamerican ballgame. Balls used in various sports in other parts of the world prior to Columbus were made from other materials such as animal bladders or skins, stuffed with various materials.
6
+
7
+ As balls are one of the most familiar spherical objects to humans, the word "ball" may refer to or describe spherical or near-spherical objects.
8
+
9
+ "Ball" is used metaphorically sometimes to denote something spherical or spheroid, e.g., armadillos and human beings curl up into a ball, we make a ball with our fist.
10
+
11
+ The first known use of the word ball in English in the sense of a globular body that is played with was in 1205 in Laȝamon's Brut, or Chronicle of Britain in the phrase, "Summe heo driuen balles wide ȝeond Þa feldes." The word came from the Middle English bal (inflected as ball-e, -es), in turn from Old Norse böllr (pronounced [bɔlːr]; compare Old Swedish baller, and Swedish boll) from Proto-Germanic ballu-z (whence probably Middle High German bal, ball-es, Middle Dutch bal), a cognate with Old High German ballo, pallo, Middle High German balle from Proto-Germanic *ballon (weak masculine), and Old High German ballâ, pallâ, Middle High German balle, Proto-Germanic *ballôn (weak feminine). No Old English representative of any of these is known. (The answering forms in Old English would have been beallu, -a, -e—compare bealluc, ballock.) If ball- was native in Germanic, it may have been a cognate with the Latin foll-is in sense of a "thing blown up or inflated." In the later Middle English spelling balle the word coincided graphically with the French balle "ball" and "bale" which has hence been erroneously assumed to be its source. French balle (but not boule) is assumed to be of Germanic origin, itself, however. In Ancient Greek the word πάλλα (palla) for "ball" is attested[2] besides the word σφαίρα (sfaíra), sphere.[3]
12
+
13
+ A ball, as the essential feature in many forms of gameplay requiring physical exertion, must date from the very earliest times. A rolling object appeals not only to a human baby, but to a kitten and a puppy. Some form of game with a ball is found portrayed on Egyptian monuments, and is played among aboriginal tribes at the present day. In Homer, Nausicaa was playing at ball with her maidens when Odysseus first saw her in the land of the Phaeacians (Od. vi. 100). And Halios and Laodamas performed before Alcinous and Odysseus with ball play, accompanied with dancing (Od. viii. 370).
14
+
15
+ Among the ancient Greeks, games with balls (σφαῖραι) were regarded as a useful subsidiary to the more violent athletic exercises, as a means of keeping the body supple, and rendering it graceful, but were generally left to boys and girls. Of regular rules for the playing of ball games, little trace remains, if there were any such. The names in Greek for various forms, which have come down to us in such works as the Ὀνομαστικόν of Julius Pollux, imply little or nothing of such; thus, ἀπόρραξις (aporraxis) only means the putting of the ball on the ground with the open hand, οὐρανία (ourania), the flinging of the ball in the air to be caught by two or more players; φαινίνδα (phaininda) would seem to be a game of catch played by two or more, where feinting is used as a test of quickness and skill. Pollux (i. x. 104) mentions a game called episkyros (ἐπίσκυρος), which has often been looked on as the origin of football. It seems to have been played by two sides, arranged in lines; how far there was any form of "goal" seems uncertain.[4] It was impossible to produce a ball that was perfectly spherical;[5] children usually made their own balls by inflating pig's bladders and heating them in the ashes of a fire to make them rounder,[5] although Plato (fl. 420s BC – 340s BC) described "balls which have leather coverings in twelve pieces".[6]
16
+
17
+ Among the Romans, ball games were looked upon as an adjunct to the bath, and were graduated to the age and health of the bathers, and usually a place (sphaeristerium) was set apart for them in the baths (thermae). There appear to have been three types or sizes of ball, the pila, or small ball, used in catching games, the paganica, a heavy ball stuffed with feathers, and the follis, a leather ball filled with air, the largest of the three. This was struck from player to player, who wore a kind of gauntlet on the arm. There was a game known as trigon, played by three players standing in the form of a triangle, and played with the follis, and also one known as harpastum, which seems to imply a "scrimmage" among several players for the ball. These games are known to us through the Romans, though the names are Greek.[4]
18
+
19
+ The various modern games played with a ball or balls and subject to rules are treated under their various names, such as polo, cricket, football, etc.[4]
20
+
21
+ Football from association football (soccer).
22
+
23
+ Handball.
24
+
25
+ Bandy ball.
26
+
27
+ Baseball.
28
+
29
+ Basketball.
30
+
31
+ Billiard balls.
32
+
33
+ Bowling ball (and pin).
34
+
35
+ Cricket ball.
36
+
37
+ Golf ball next to a hole.
38
+
39
+ Lacrosse ball.
40
+
41
+ Rinkball.
42
+
43
+ Roller hockey ball.
44
+
45
+ Rubber band ball.
46
+
47
+ Squash ball.
48
+
49
+ Table tennis balls.
50
+
51
+ Tennis ball.
52
+
53
+ Volleyball.
54
+
55
+ Water polo ball.
56
+
57
+ Several sports use a ball in the shape of a prolate spheroid:
58
+
59
+ American football.
60
+
61
+ Australian rules football.
62
+
63
+ Canadian football.
64
+
65
+ Rugby union ball.
en/5330.html.txt ADDED
@@ -0,0 +1,115 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ In geometry, a line segment is a part of a line that is bounded by two distinct end points, and contains every point on the line between its endpoints. A closed line segment includes both endpoints, while an open line segment excludes both endpoints; a half-open line segment includes exactly one of the endpoints.
2
+
3
+ Examples of line segments include the sides of a triangle or square. More generally, when both of the segment's end points are vertices of a polygon or polyhedron, the line segment is either an edge (of that polygon or polyhedron) if they are adjacent vertices, or otherwise a diagonal. When the end points both lie on a curve such as a circle, a line segment is called a chord (of that curve).
4
+
5
+ If V is a vector space over ℝ or ℂ, and L is a subset of V, then L is a line segment if L can be parameterized as L = { u + t·v : t ∈ [0, 1] } for some vectors u, v ∈ V, in which case the vectors u and u + v are called the end points of L.
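+
+ A minimal numerical sketch of this parameterization in ℝ² (illustrative values; the end points are u and u + v):
+
+ u = (1.0, 2.0)                         # one end point
+ v = (3.0, -1.0)                        # direction vector; the other end point is u + v
+ points = [(u[0] + t * v[0], u[1] + t * v[1])
+           for t in (0.0, 0.25, 0.5, 0.75, 1.0)]
+ print(points)                          # runs from (1.0, 2.0) to (4.0, 1.0)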
49
+
50
+ Sometimes one needs to distinguish between "open" and "closed" line segments. Then one defines a closed line segment as above, and an open line segment as a subset L that can be parametrized as L = { u + t·v : t ∈ (0, 1) } for some vectors u, v ∈ V.
72
+
73
+ Equivalently, a line segment is the convex hull of two points. Thus, the line segment can be expressed as a convex combination of the segment's two end points.
74
+
75
+ In geometry, it is sometimes defined that a point B is between two other points A and C, if the distance AB added to the distance BC is equal to the distance AC. Thus in ℝ² the line segment with endpoints A = (ax, ay) and C = (cx, cy) is the following collection of points: { (x, y) : √((x − ax)² + (y − ay)²) + √((x − cx)² + (y − cy)²) = √((cx − ax)² + (cy − ay)²) }.
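+
+ A minimal sketch of this distance test (hypothetical helper names; floating-point arithmetic needs a tolerance):
+
+ import math
+
+ def dist(p, q):
+     return math.hypot(p[0] - q[0], p[1] - q[1])
+
+ def between(a, b, c, tol=1e-9):
+     # B lies on the segment AC iff |AB| + |BC| equals |AC| (within tol).
+     return abs(dist(a, b) + dist(b, c) - dist(a, c)) <= tol
+
+ print(between((0, 0), (2, 0), (4, 0)))   # True
+ print(between((0, 0), (2, 1), (4, 0)))   # False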
92
+
93
+ In an axiomatic treatment of geometry, the notion of betweenness is either assumed to satisfy a certain number of axioms, or else be defined in terms of an isometry of a line (used as a coordinate system).
94
+
95
+ Segments play an important role in other theories. For example, a set is convex if the segment that joins any two points of the set is contained in the set. This is important because it transforms some of the analysis of convex sets to the analysis of a line segment. The Segment Addition Postulate can be used to add congruent segments, or segments with equal lengths, and consequently substitute other segments into another statement to make segments congruent.
96
+
97
+ A line segment can be viewed as a degenerate case of an ellipse in which the semiminor axis goes to zero, the foci go to the endpoints, and the eccentricity goes to one. A standard definition of an ellipse is the set of points for which the sum of a point's distances to two foci is a constant; if this constant equals the distance between the foci, the line segment is the result. A complete orbit of this ellipse traverses the line segment twice. As a degenerate orbit this is a radial elliptic trajectory.
98
+
99
+ In addition to appearing as the edges and diagonals of polygons and polyhedra, line segments appear in numerous other locations relative to other geometric shapes.
100
+
101
+ Some very frequently considered segments in a triangle include the three altitudes (each perpendicularly connecting a side or its extension to the opposite vertex), the three medians (each connecting a side's midpoint to the opposite vertex), the perpendicular bisectors of the sides (perpendicularly connecting the midpoint of a side to one of the other sides), and the internal angle bisectors (each connecting a vertex to the opposite side). In each case there are various equalities relating these segment lengths to others (discussed in the articles on the various types of segment) as well as various inequalities.
102
+
103
+ Other segments of interest in a triangle include those connecting various triangle centers to each other, most notably the incenter, the circumcenter, the nine-point center, the centroid, and the orthocenter.
104
+
105
+ In addition to the sides and diagonals of a quadrilateral, some important segments are the two bimedians (connecting the midpoints of opposite sides) and the four maltitudes (each perpendicularly connecting one side to the midpoint of the opposite side).
106
+
107
+ Any straight line segment connecting two points on a circle or ellipse is called a chord. Any chord in a circle which has no longer chord is called a diameter, and any segment connecting the circle's center (the midpoint of a diameter) to a point on the circle is called a radius.
108
+
109
+ In an ellipse, the longest chord, which is also the longest diameter, is called the major axis, and a segment from the midpoint of the major axis (the ellipse's center) to either endpoint of the major axis is called a semi-major axis. Similarly, the shortest diameter of an ellipse is called the minor axis, and the segment from its midpoint (the ellipse's center) to either of its endpoints is called a semi-minor axis. The chords of an ellipse which are perpendicular to the major axis and pass through one of its foci are called the latera recta of the ellipse. The interfocal segment connects the two foci.
110
+
111
+ When a line segment is given an orientation (direction) it suggests a translation or perhaps a force tending to make a translation. The magnitude and direction are indicative of a potential change. This suggestion has been absorbed into mathematical physics through the concept of a Euclidean vector.[1][2] The collection of all directed line segments is usually reduced by making "equivalent" any pair having the same length and orientation.[3] This application of an equivalence relation dates from Giusto Bellavitis’s introduction of the concept of equipollence of directed line segments in 1835.
112
+
113
+ Analogous to straight line segments above, one can define arcs as segments of a curve.
114
+
115
+ This article incorporates material from Line segment on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
en/5331.html.txt ADDED
@@ -0,0 +1,115 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ In geometry, a line segment is a part of a line that is bounded by two distinct end points, and contains every point on the line between its endpoints. A closed line segment includes both endpoints, while an open line segment excludes both endpoints; a half-open line segment includes exactly one of the endpoints.
2
+
3
+ Examples of line segments include the sides of a triangle or square. More generally, when both of the segment's end points are vertices of a polygon or polyhedron, the line segment is either an edge (of that polygon or polyhedron) if they are adjacent vertices, or otherwise a diagonal. When the end points both lie on a curve such as a circle, a line segment is called a chord (of that curve).
4
+
5
+ If V is a vector space over ℝ or ℂ, and L is a subset of V, then L is a line segment if L can be parameterized as L = { u + t·v : t ∈ [0, 1] } for some vectors u, v ∈ V, in which case the vectors u and u + v are called the end points of L.
49
+
50
+ Sometimes one needs to distinguish between "open" and "closed" line segments. Then one defines a closed line segment as above, and an open line segment as a subset L that can be parametrized as L = { u + t·v : t ∈ (0, 1) } for some vectors u, v ∈ V.
72
+
73
+ Equivalently, a line segment is the convex hull of two points. Thus, the line segment can be expressed as a convex combination of the segment's two end points.
74
+
75
+ In geometry, it is sometimes defined that a point B is between two other points A and C, if the distance AB added to the distance BC is equal to the distance AC. Thus in ℝ² the line segment with endpoints A = (ax, ay) and C = (cx, cy) is the following collection of points: { (x, y) : √((x − ax)² + (y − ay)²) + √((x − cx)² + (y − cy)²) = √((cx − ax)² + (cy − ay)²) }.
92
+
93
+ In an axiomatic treatment of geometry, the notion of betweenness is either assumed to satisfy a certain number of axioms, or else be defined in terms of an isometry of a line (used as a coordinate system).
94
+
95
+ Segments play an important role in other theories. For example, a set is convex if the segment that joins any two points of the set is contained in the set. This is important because it transforms some of the analysis of convex sets to the analysis of a line segment. The Segment Addition Postulate can be used to add congruent segments, or segments with equal lengths, and consequently substitute other segments into another statement to make segments congruent.
96
+
97
+ A line segment can be viewed as a degenerate case of an ellipse in which the semiminor axis goes to zero, the foci go to the endpoints, and the eccentricity goes to one. A standard definition of an ellipse is the set of points for which the sum of a point's distances to two foci is a constant; if this constant equals the distance between the foci, the line segment is the result. A complete orbit of this ellipse traverses the line segment twice. As a degenerate orbit this is a radial elliptic trajectory.
98
+
99
+ In addition to appearing as the edges and diagonals of polygons and polyhedra, line segments appear in numerous other locations relative to other geometric shapes.
100
+
101
+ Some very frequently considered segments in a triangle include the three altitudes (each perpendicularly connecting a side or its extension to the opposite vertex), the three medians (each connecting a side's midpoint to the opposite vertex), the perpendicular bisectors of the sides (perpendicularly connecting the midpoint of a side to one of the other sides), and the internal angle bisectors (each connecting a vertex to the opposite side). In each case there are various equalities relating these segment lengths to others (discussed in the articles on the various types of segment) as well as various inequalities.
102
+
103
+ Other segments of interest in a triangle include those connecting various triangle centers to each other, most notably the incenter, the circumcenter, the nine-point center, the centroid, and the orthocenter.
104
+
105
+ In addition to the sides and diagonals of a quadrilateral, some important segments are the two bimedians (connecting the midpoints of opposite sides) and the four maltitudes (each perpendicularly connecting one side to the midpoint of the opposite side).
106
+
107
+ Any straight line segment connecting two points on a circle or ellipse is called a chord. Any chord in a circle which has no longer chord is called a diameter, and any segment connecting the circle's center (the midpoint of a diameter) to a point on the circle is called a radius.
108
+
109
+ In an ellipse, the longest chord, which is also the longest diameter, is called the major axis, and a segment from the midpoint of the major axis (the ellipse's center) to either endpoint of the major axis is called a semi-major axis. Similarly, the shortest diameter of an ellipse is called the minor axis, and the segment from its midpoint (the ellipse's center) to either of its endpoints is called a semi-minor axis. The chords of an ellipse which are perpendicular to the major axis and pass through one of its foci are called the latera recta of the ellipse. The interfocal segment connects the two foci.
110
+
111
+ When a line segment is given an orientation (direction) it suggests a translation or perhaps a force tending to make a translation. The magnitude and direction are indicative of a potential change. This suggestion has been absorbed into mathematical physics through the concept of a Euclidean vector.[1][2] The collection of all directed line segments is usually reduced by making "equivalent" any pair having the same length and orientation.[3] This application of an equivalence relation dates from Giusto Bellavitis’s introduction of the concept of equipollence of directed line segments in 1835.
112
+
113
+ Analogous to straight line segments above, one can define arcs as segments of a curve.
114
+
115
+ This article incorporates material from Line segment on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
en/5332.html.txt ADDED
@@ -0,0 +1,166 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+ The Lord of the Rings is an epic[1] high-fantasy novel written by English author and scholar J. R. R. Tolkien. The story began as a sequel to Tolkien's 1937 fantasy novel The Hobbit but eventually developed into a much larger work. Written in stages between 1937 and 1949, The Lord of the Rings is one of the best-selling novels ever written, with over 150 million copies sold.[2]
4
+
5
+ The title of the novel refers to the story's main antagonist, the Dark Lord Sauron,[a] who had in an earlier age created the One Ring to rule the other Rings of Power as the ultimate weapon in his campaign to conquer and rule all of Middle-earth. From quiet beginnings in the Shire, a hobbit land not unlike the English countryside, the story ranges across Middle-earth, following the course of the War of the Ring through the eyes of its characters, most notably the hobbits Frodo, Sam, Merry and Pippin.
6
+
7
+ Although generally known to readers as a trilogy, the work was initially intended by Tolkien to be one volume of a two-volume set, the other to be The Silmarillion, but this idea was dismissed by his publisher.[4][5] For economic reasons, The Lord of the Rings was published in three volumes over the course of a year from 29 July 1954 to 20 October 1955.[4][6] The three volumes were titled The Fellowship of the Ring, The Two Towers and The Return of the King. Structurally, the novel is divided internally into six books, two per volume, with several appendices of background material included at the end. Some editions combine the entire work into a single volume, per the author's original intent. The Lord of the Rings has since been reprinted numerous times and translated into 38 languages.
8
+
9
+ Tolkien's work has been the subject of extensive analysis of its themes and origins. Although a major work in itself, the story was only the last movement of a larger epic Tolkien had worked on since 1917,[7] in a process he described as mythopoeia.[b] Influences on this earlier work, and on the story of The Lord of the Rings, include philology, mythology, religion, the architecture of Oxford, England, and the author's distaste for the effects of industrialization, as well as earlier fantasy works and Tolkien's experiences in World War I.[9] The Lord of the Rings in its turn is considered to have had a great effect on modern fantasy; the impact of Tolkien's works is such that the use of the words "Tolkienian" and "Tolkienesque" has been recorded in the Oxford English Dictionary.[10]
10
+
11
+ The enduring popularity of The Lord of the Rings has led to numerous references in popular culture, the founding of many societies by fans of Tolkien's works,[11] and the publication of many books about Tolkien and his works. The Lord of the Rings has inspired, and continues to inspire, artwork, music, films and television, video games, board games, and subsequent literature. Award-winning adaptations of The Lord of the Rings have been made for radio, theatre, and film.[12] In 2003, it was named Britain's best novel of all time in the BBC's The Big Read. In 2015, the BBC ranked The Lord of the Rings 26th on its list of the 100 greatest British novels.[13]
12
+
13
+ The narrative follows on from The Hobbit, in which the hobbit Bilbo Baggins finds the Ring, which had been in the possession of the creature Gollum. The story begins in the Shire, where Frodo Baggins inherits the Ring from Bilbo, his cousin[c] and guardian. Neither hobbit is aware of the Ring's nature, but Gandalf the Grey, a wizard and an old friend of Bilbo, suspects it to be the Ring lost by Sauron, the Dark Lord, long ago. Seventeen years later, after Gandalf confirms this is true, he tells Frodo the history of the Ring and counsels him to take it away from the Shire. Frodo sets out, accompanied by his gardener, servant and friend, Samwise "Sam" Gamgee, and two cousins, Meriadoc "Merry" Brandybuck and Peregrin "Pippin" Took. They are nearly caught by the Nazgûl, but shake off their pursuers by cutting through the Old Forest. There they are aided by Tom Bombadil, a strange and merry fellow who lives with his wife Goldberry in the forest.
14
+
15
+ The hobbits reach the town of Bree, where they encounter a Ranger named Strider, whom Gandalf had mentioned in a letter. Strider persuades the hobbits to take him on as their guide and protector. Together, they leave Bree after another close escape from the Nazgûl. On the hill of Weathertop, they are again attacked by the Nazgûl, who wound Frodo with a cursed blade. Strider fights them off and leads the hobbits towards the Elven refuge of Rivendell. Frodo falls deathly ill from the wound. The Nazgûl nearly capture him at the Ford of Bruinen, but flood waters summoned by Elrond, master of Rivendell, rise up and overwhelm them.
16
+
17
+ Frodo recovers in Rivendell under Elrond's care. The Council of Elrond discusses the history of Sauron and the Ring. Strider is revealed to be Aragorn, Isildur's heir. Gandalf reports that the chief wizard Saruman has betrayed them and is now working to become a power in his own right. The Council decides that the Ring must be destroyed, but that can only be done by sending it to the fire of Mount Doom in Mordor, where it was forged. Frodo takes this task upon himself. Elrond, with the advice of Gandalf, chooses companions for him. The Company of the Ring, also called the Fellowship of the Ring, are nine in number: Frodo, Sam, Merry, Pippin, Aragorn, Gandalf, Gimli the Dwarf, Legolas the Elf, and the Man Boromir, son of Denethor, the Ruling Steward of the land of Gondor.
18
+
19
+ After a failed attempt to cross the Misty Mountains over the Redhorn Pass, the Company take the perilous path through the Mines of Moria. They learn of the fate of Balin and his colony of Dwarves. After surviving an attack, they are pursued by Orcs and by a Balrog, an ancient fire demon. Gandalf faces the Balrog, and both of them fall into the abyss. The others escape and find refuge in the Elven forest of Lothlórien, where they are counselled by its rulers, Galadriel and Celeborn.
20
+
21
+ With boats and gifts from Galadriel, the Company travel down the River Anduin to the hill of Amon Hen. There, Boromir tries to take the Ring from Frodo, but Frodo puts it on and disappears. Frodo chooses to go alone to Mordor, but Sam guesses what he intends and goes with him.
22
+
23
+ Uruk-hai sent by Saruman and other Orcs sent by Sauron kill Boromir and capture Merry and Pippin. Aragorn, Gimli and Legolas debate which pair of hobbits to follow. They decide to pursue the Orcs taking Merry and Pippin to Saruman. In the kingdom of Rohan, the Orcs are slain by a company of Rohirrim. Merry and Pippin escape into Fangorn Forest, where they are befriended by Treebeard, the oldest of the tree-like Ents. Aragorn, Gimli and Legolas track the hobbits to Fangorn. There they unexpectedly meet Gandalf.
24
+
25
+ Gandalf explains that he slew the Balrog. Darkness took him, but he was sent back to Middle-earth to complete his mission. He is clothed in white and is now Gandalf the White, for he has taken Saruman's place as the chief of the wizards. Gandalf assures his friends that Merry and Pippin are safe. Together they ride to Edoras, capital of Rohan. Gandalf frees Théoden, King of Rohan, from the influence of Saruman's spy Gríma Wormtongue. Théoden musters his fighting strength and rides with his men to the ancient fortress of Helm's Deep, while Gandalf departs to seek help from Treebeard.
26
+
27
+ Meanwhile, the Ents, roused by Merry and Pippin from their peaceful ways, attack Isengard, Saruman's stronghold, and trap the wizard in the tower of Orthanc. Gandalf convinces Treebeard to send an army of Huorns to Théoden's aid. Gandalf brings an army of Rohirrim to Helm's Deep, and they defeat the Orcs, who flee into the forest of Huorns, never to be seen again. Gandalf offers Saruman a chance to turn away from evil. When Saruman refuses to listen, Gandalf strips him of his rank and most of his powers.
28
+
29
+ After Saruman crawls back to his prison, Wormtongue drops a sphere to try to kill Gandalf. Pippin picks it up. It is revealed to be a palantír, a seeing-stone that Saruman used to speak with Sauron and through which Saruman was ensnared. Pippin is seen by Sauron. Gandalf rides for Minas Tirith, chief city of Gondor, taking Pippin with him.
30
+
31
+ Frodo and Sam capture Gollum, who has followed them from Moria. They force him to guide them to Mordor. They find that the Black Gate of Mordor is too well guarded, so instead they travel to a secret way Gollum knows. On the way, they encounter Faramir, who, unlike his brother Boromir, resists the temptation to seize the Ring. Gollum – who is torn between his loyalty to Frodo and his desire for the Ring – betrays Frodo by leading him to the great spider Shelob in the tunnels of Cirith Ungol. Frodo falls to Shelob's sting. But with the help of Galadriel's gifts, Sam fights off the spider. Believing Frodo to be dead, Sam takes the Ring to continue the quest alone. Orcs find Frodo; Sam overhears them and learns that Frodo is still alive.
32
+
33
+ Sauron sends a great army against Gondor. Gandalf arrives at Minas Tirith to warn Denethor of the attack, while Théoden musters the Rohirrim to ride to Gondor's aid. Minas Tirith is besieged. Denethor is deceived by Sauron and falls into despair. He burns himself alive on a pyre, nearly taking his son Faramir with him. Aragorn, accompanied by Legolas, Gimli and the Rangers of the North, takes the Paths of the Dead to recruit the Dead Men of Dunharrow, who are bound by a curse which denies them rest until they fulfil their ancient forsworn oath to fight for the King of Gondor.
34
+
35
+ Following Aragorn, the Army of the Dead strikes terror into the Corsairs of Umbar invading southern Gondor. Aragorn defeats the Corsairs and uses their ships to transport the men of southern Gondor up the Anduin, reaching Minas Tirith just in time to turn the tide of battle. Théoden's niece Éowyn, who joined the army in disguise, slays the Lord of the Nazgûl with help from Merry. Together, Gondor and Rohan defeat Sauron's army in the Battle of the Pelennor Fields, though at great cost. Théoden is killed, and Éowyn and Merry are wounded.
36
+
37
+ Meanwhile, Sam rescues Frodo from the tower of Cirith Ungol. They set out across Mordor. Aragorn leads an army of men from Gondor and Rohan to march on the Black Gate to distract Sauron from his true danger. His army is vastly outnumbered by the great might of Sauron. Frodo and Sam reach the edge of the Cracks of Doom, but Frodo cannot resist the Ring any longer. He claims it for himself and puts it on his finger.
38
+
39
+ Gollum suddenly reappears. He struggles with Frodo and bites off Frodo's finger with the Ring still on it. Celebrating wildly, Gollum loses his footing and falls into the Fire, taking the Ring with him. When the Ring is destroyed, Sauron loses his power forever. All he created collapses, the Nazgûl perish, and his armies are thrown into such disarray that Aragorn's forces emerge victorious.
40
+
41
+ Aragorn is crowned King of Arnor and Gondor, and weds Arwen, daughter of Elrond. The four hobbits make their way back to the Shire, only to find that it has been taken over by men directed by one "Sharkey" (whom they later discover to be Saruman). The hobbits raise a rebellion and liberate the Shire, though 19 hobbits are killed and 30 wounded. Frodo stops the hobbits from killing the wizard after Saruman attempts to stab Frodo, but Gríma turns on Saruman and kills him in front of Bag End, Frodo's home. He is slain in turn by hobbit archers, and the War of the Ring comes to its true end on Frodo's very doorstep.
42
+
43
+ Merry and Pippin are celebrated as heroes. Sam marries Rosie Cotton and uses his gifts from Galadriel to help heal the Shire. But Frodo is still wounded in body and spirit, having borne the Ring for so long. A few years later, in the company of Bilbo and Gandalf, Frodo sails from the Grey Havens west over the Sea to the Undying Lands to find peace.
44
+
45
+ In the appendices, Sam gives his daughter Elanor the Red Book of Westmarch, which contains the story of Bilbo's adventures and the War of the Ring as witnessed by the hobbits. Sam is then said to have crossed west over the Sea himself, the last of the Ring-bearers.
46
+
47
+ Tolkien presents The Lord of the Rings within a fictional frame-story where he is not the original author, but merely the translator of part of an ancient document, the Red Book of Westmarch. Various details of the frame-story appear in the Prologue, its 'Note on Shire Records', and in the Appendices, notably Appendix F. In this frame-story, the Red Book is also the source of Tolkien's other works relating to Middle-earth: The Hobbit, The Silmarillion, and The Adventures of Tom Bombadil.[14]
48
+
49
+ The Lord of the Rings started as a sequel to J. R. R. Tolkien's work The Hobbit, published in 1937.[15] The popularity of The Hobbit had led George Allen & Unwin, the publishers, to request a sequel. Tolkien warned them that he wrote quite slowly, and responded with several stories he had already developed. After rejecting his contemporary drafts for The Silmarillion, putting Roverandom on hold, and accepting Farmer Giles of Ham, Allen & Unwin decided that more stories about hobbits would be popular.[16] So at the age of 45, Tolkien began writing the story that would become The Lord of the Rings. The story would not be finished until 12 years later, in 1949, and would not be fully published until 1955, when Tolkien was 63 years old.
50
+
51
+ Persuaded by his publishers, he started "a new Hobbit" in December 1937.[15] After several false starts, the story of the One Ring emerged. The idea for the first chapter ("A Long-Expected Party") arrived fully formed, although the reasons behind Bilbo's disappearance, the significance of the Ring, and the title The Lord of the Rings did not arrive until the spring of 1938.[15] Originally, he planned to write a story in which Bilbo had used up all his treasure and was looking for another adventure to gain more; however, he remembered the Ring and its powers and thought that would be a better focus for the new work.[15] As the story progressed, he also brought in elements from The Silmarillion mythology.[17]
52
+
53
+ Writing was slow, because Tolkien had a full-time academic position teaching linguistics, with a focus on languages, such as Old English, whose elements he incorporated into his books.[18] "I have spent nearly all the vacation-times of seventeen years examining [...] Writing stories in prose or verse has been stolen, often guiltily, from time already mortgaged..."[19] Tolkien abandoned The Lord of the Rings during most of 1943 and only restarted it in April 1944,[15] as a serial for his son Christopher Tolkien, who was sent chapters as they were written while serving in South Africa with the Royal Air Force. Tolkien made another major effort in 1946, and showed the manuscript to his publishers in 1947.[15] The story was effectively finished the next year, but Tolkien did not complete the revision of earlier parts of the work until 1949.[15] The original manuscripts, which total 9,250 pages, now reside in the J. R. R. Tolkien Collection at Marquette University.[20]
54
+
55
+ Unusually for 20th century novels, the prose narrative is supplemented throughout by over 60 pieces of poetry. These include verse and songs of many genres: for wandering, marching to war, drinking, and having a bath; narrating ancient myths, riddles, prophecies, and magical incantations; of praise and lament (elegy). Some, such as riddles, charms, elegies, and narrating heroic actions are found in Old English poetry.[21] Scholars have stated that the poetry is essential for the fiction to work aesthetically and thematically; it adds information not given in the prose; and it brings out characters and their backgrounds.[22][23] The poetry has been judged to be of high technical skill, which Tolkien carried across into his prose, for instance writing much of Tom Bombadil's speech in metre.[24]
56
+
57
+ The influence of the Welsh language, which Tolkien had learned, is summarized in his essay English and Welsh: "If I may once more refer to my work, The Lord of the Rings, in evidence: the names of persons and places in this story were mainly composed on patterns deliberately modelled on those of Welsh (closely similar but not identical). This element in the tale has given perhaps more pleasure to more readers than anything else in it."[25]
58
+
59
+ The Lord of the Rings developed as a personal exploration by Tolkien of his interests in philology, religion (particularly Catholicism[26]), fairy tales, Norse and general Germanic mythology,[27][28] and also Celtic,[29][better source needed] Slavic,[30][31][32] Persian,[33] Greek,[34] and Finnish mythology.[35] Tolkien acknowledged, and external critics have verified, the influences of George MacDonald and William Morris[36] and the Anglo-Saxon poem Beowulf.[37] The question of a direct influence of Wagner's The Ring of the Nibelung on Tolkien's work is debated by critics.
60
+
61
+ Tolkien included no explicit religion or cult in his work. Rather, the themes, moral philosophy, and cosmology of The Lord of the Rings reflect his Catholic worldview. In one of his letters Tolkien states, "The Lord of the Rings is of course a fundamentally religious and Catholic work; unconsciously so at first, but consciously in the revision. That is why I have not put in, or have cut out, practically all references to anything like 'religion', to cults or practices, in the imaginary world. For the religious element is absorbed into the story and the symbolism."[26]
62
+
63
+ Some locations and characters were inspired by Tolkien's childhood in Birmingham, where he first lived near Sarehole Mill, and later near Edgbaston Reservoir.[39] There are also hints of the Black Country, which is within easy reach of northwest Edgbaston. This shows in such names as "Underhill", and the description of Saruman's industrialization of Isengard and The Shire. It has been suggested that the Shire and its surroundings were based on the countryside around Stonyhurst College in Lancashire, where Tolkien frequently stayed during the 1940s, but this claim is disputed by reputable Tolkien scholars.[40][41] The work was also influenced by the effects of his military service during World War I, to the point that one critic diagnosed Frodo as suffering from post-traumatic stress disorder, then known as "shell-shock"; Tolkien himself served at the Battle of the Somme.[42]
64
+
65
+ A dispute with his publisher, George Allen & Unwin, led to the book being offered to Collins in 1950. Tolkien intended The Silmarillion (itself largely unrevised at this point) to be published along with The Lord of the Rings, but A&U were unwilling to do this. After Milton Waldman, his contact at Collins, expressed the belief that The Lord of the Rings itself "urgently wanted cutting", Tolkien eventually demanded that they publish the book in 1952.[43] Collins did not; and so Tolkien wrote to Allen and Unwin, saying, "I would gladly consider the publication of any part of the stuff", fearing his work would never see the light of day.[15]
66
+
67
+ For publication, the book was divided into three volumes to minimize any potential financial loss due to the high cost of type-setting and modest anticipated sales: The Fellowship of the Ring (Books I and II), The Two Towers (Books III and IV), and The Return of the King (Books V and VI plus six appendices).[44] Delays in producing appendices, maps and especially an index led to the volumes being published later than originally hoped – on 29 July 1954, on 11 November 1954 and on 20 October 1955 respectively in the United Kingdom. In the United States, Houghton Mifflin published The Fellowship of the Ring on 21 October 1954, The Two Towers on 21 April 1955, and The Return of the King on 5 January 1956.[45]
68
+
69
+ The Return of the King was especially delayed due to Tolkien revising the ending and preparing appendices (some of which had to be left out because of space constraints). Tolkien did not like the title The Return of the King, believing it gave away too much of the storyline, but deferred to his publisher's preference.[46] Tolkien wrote that the title The Two Towers "can be left ambiguous,"[47] but also considered naming the two as Orthanc and Barad-dûr, Minas Tirith and Barad-dûr, or Orthanc and the Tower of Cirith Ungol.[47][48] However, a month later he wrote a note published at the end of The Fellowship of the Ring and later drew a cover illustration, both of which identified the pair as Minas Morgul and Orthanc.[49][50]
70
+
71
+ Tolkien was initially opposed to titles being given to each two-book volume, preferring instead the use of book titles: e.g. The Lord of the Rings: Vol. 1, The Ring Sets Out and The Ring Goes South; Vol. 2, The Treason of Isengard and The Ring Goes East; Vol. 3, The War of the Ring and The End of the Third Age. However, these individual book titles were later scrapped, and after pressure from his publishers, Tolkien suggested the titles: Vol. 1, The Shadow Grows; Vol. 2, The Ring in the Shadow; Vol. 3, The War of the Ring or The Return of the King.[51][52]
72
+
73
+ Because the three-volume binding was so widely distributed, the work is often referred to as the Lord of the Rings "trilogy". In a letter to the poet W. H. Auden (who famously reviewed the final volume in 1956[53]), Tolkien himself made use of the term "trilogy" for the work,[54] though he did at other times consider this incorrect, as it was written and conceived as a single book.[55] It is also often called a novel; however, Tolkien objected to this term as he viewed it as a heroic romance.[56]
74
+
75
+ The books were published under a profit-sharing arrangement, whereby Tolkien would not receive an advance or royalties until the books had broken even, after which he would take a large share of the profits.[57] It has ultimately become one of the best-selling novels ever written, with 50 million copies sold by 2003[58] and over 150 million copies sold by 2007.[2]
76
+
77
+ The book was published in the UK by Allen & Unwin until 1990 when the publisher and its assets were acquired by HarperCollins.[59][60]
78
+
79
+ In the early 1960s Donald A. Wollheim, science fiction editor of the paperback publisher Ace Books, claimed that The Lord of the Rings was not protected in the United States under American copyright law because Houghton Mifflin, the US hardcover publisher, had neglected to copyright the work in the United States.[61][62] Then, in 1965, Ace Books proceeded to publish an edition, unauthorized by Tolkien and without paying royalties to him. Tolkien took issue with this and quickly notified his fans.[63] Grass-roots pressure from these fans became so great that Ace Books withdrew their edition and made a nominal payment to Tolkien.[64][65]
80
+
81
+ Authorized editions followed from Ballantine Books and Houghton Mifflin to tremendous commercial success. Tolkien undertook various textual revisions to produce a version of the book that would be published with his consent and establish an unquestioned US copyright. This text became the Second Edition of The Lord of the Rings, published in 1965.[64] The first Ballantine paperback edition was printed in October that year, and sold a quarter of a million copies within ten months. On 4 September 1966, the novel debuted on New York Times' Paperback Bestsellers list as number three, and was number one by 4 December, a position it held for eight weeks.[66] Houghton Mifflin editions after 1994 consolidate variant revisions by Tolkien, and corrections supervised by Christopher Tolkien, which resulted, after some initial glitches, in a computer-based unified text.[67]
82
+
83
+ In 2004, for the 50th Anniversary Edition, Wayne G. Hammond and Christina Scull, under supervision from Christopher Tolkien, studied and revised the text to eliminate as many errors and inconsistencies as possible, some of which had been introduced by well-meaning compositors of the first printing in 1954, and never been corrected.[68] The 2005 edition of the book contained further corrections noticed by the editors and submitted by readers. Further corrections were added to the 60th Anniversary Edition in 2014.[69]
84
+
85
+ Several editions, notably the 50th Anniversary Edition, combine all three books into one volume, with the result that pagination varies widely over the various editions.
86
+
87
+ From 1988 to 1992 Christopher Tolkien published the surviving drafts of The Lord of the Rings, chronicling and illuminating with commentary the stages of the text's development, in volumes 6–9 of his History of Middle-earth series. The four volumes carry the titles The Return of the Shadow, The Treason of Isengard, The War of the Ring, and Sauron Defeated.
88
+
89
+ The novel has been translated, with varying degrees of success, into at least 56 languages.[70] Tolkien, an expert in philology, examined many of these translations, and made comments on each that reflect both the translation process and his work. As he was unhappy with some choices made by early translators, such as the Swedish translation by Åke Ohlmarks,[71] Tolkien wrote a "Guide to the Names in The Lord of the Rings" (1967). Because The Lord of the Rings purports to be a translation of the fictitious Red Book of Westmarch, with the English language representing the Westron of the "original", Tolkien suggested that translators attempt to capture the interplay between English and the invented nomenclature of the English work, and gave several examples along with general guidance.
90
+
91
+ While early reviews for The Lord of the Rings were mixed, reviews in various media have been, on the whole, highly positive and acknowledge Tolkien's literary achievement as a significant one. The initial review in the Sunday Telegraph described it as "among the greatest works of imaginative fiction of the twentieth century".[72] The Sunday Times echoed this sentiment, stating that "the English-speaking world is divided into those who have read The Lord of the Rings and The Hobbit and those who are going to read them."[72] The New York Herald Tribune also seemed to have an idea of how popular the books would become, writing in its review that they were "destined to outlast our time".[73] W. H. Auden, an admirer of Tolkien's writings, regarded The Lord of the Rings as a "masterpiece", further stating that in some cases it outdid the achievement of John Milton's Paradise Lost.[74] Kenneth F. Slater[75] wrote in Nebula Science Fiction, April 1955: "... if you don’t read it, you have missed one of the finest books of its type ever to appear".[76]
92
+
93
+ New York Times reviewer Judith Shulevitz criticized the "pedantry" of Tolkien's literary style, saying that he "formulated a high-minded belief in the importance of his mission as a literary preservationist, which turns out to be death to literature itself".[77] Critic Richard Jenkyns, writing in The New Republic, criticized the work for a lack of psychological depth. Both the characters and the work itself are, according to Jenkyns, "anemic, and lacking in fibre".[78] Even within Tolkien's literary group, The Inklings, reviews were mixed. Hugo Dyson complained loudly at its readings.[79][80] However, another Inkling, C. S. Lewis, had very different feelings, writing, "here are beauties which pierce like swords or burn like cold iron. Here is a book which will break your heart." Despite these reviews and its lack of paperback printing until the 1960s, The Lord of the Rings initially sold well in hardback.[7]
94
+
95
+ In 1957, The Lord of the Rings was awarded the International Fantasy Award. Despite its numerous detractors, the publication of the Ace Books and Ballantine paperbacks helped The Lord of the Rings become immensely popular in the United States in the 1960s. The book has remained so ever since, ranking as one of the most popular works of fiction of the twentieth century, judged by both sales and reader surveys.[81] In the 2003 "Big Read" survey conducted in Britain by the BBC, The Lord of the Rings was found to be the "Nation's best-loved book". In similar 2004 polls both Germany[82] and Australia[83] also found The Lord of the Rings to be their favourite book. In a 1999 poll of Amazon.com customers, The Lord of the Rings was judged to be their favourite "book of the millennium".[84]
96
+
97
+ C. S. Lewis observed that the writing is rich in that some of the 'good' characters have darker sides, and likewise some of the villains have "good impulses".[85]
98
+
99
+ Although The Lord of the Rings was published in the 1950s, Tolkien insisted that the One Ring was not an allegory for the atomic bomb,[86] nor were his works a strict allegory of any kind, but were open to interpretation as the reader saw fit.[87][88]
100
+
101
+ A few critics have found what they consider racial elements in the story, which are generally based upon their views of how Tolkien's imagery depicts good and evil, characters' race (e.g. Elf, Dwarf, Hobbit, Southron, Númenórean, Orc), and how the characters' race is seen as determining their behaviour.[89][90][91] Counter-arguments note that race-focused critiques often omit relevant textual evidence,[92][93][94] cite imagery from adaptations rather than the work itself,[95] ignore the absence of evidence of racist attitudes or events in the author's personal life,[92][95][96] and claim that the perception of racism is itself a marginal view.[96]
102
+
103
+ The opinions that pit races against each other most likely reflect Tolkien's criticism of war rather than a racist perspective. In The Two Towers, the character Samwise sees a fallen foe, a man of colour, and reflects on the humanity of this dead Southron.[97] In the director's commentary on this scene, Peter Jackson argues that Tolkien was not projecting negativity onto the individual soldier because of his race, but onto the evil authority driving him.[98] These sentiments, Jackson argues, arose from Tolkien's experience in the Great War and found their way into his writings to show the evils of war itself, not of other races.
104
+
105
+ Critics have also seen social class rather than race as being the determining factor in the portrayal of good and evil.[92] Commentators such as science fiction author David Brin have interpreted the work to hold unquestioning devotion to a traditional elitist social structure.[99] In his essay "Epic Pooh", science fiction and fantasy author Michael Moorcock critiques the world-view displayed by the book as deeply conservative, in both the "paternalism" of the narrative voice and the power-structures in the narrative.[100] Tom Shippey cites the origin of this portrayal of evil as a reflection of the prejudices of European middle-classes during the inter-war years towards the industrial working class.[101]
106
+
107
+ Other observers have described Christian themes in the work, specifically Roman Catholicism.[102]
108
+
109
+ The book has been read as fitting the model of Joseph Campbell's "monomyth".[103]
110
+
111
+ The Lord of the Rings has been adapted for film, radio and stage.
112
+
113
+ The book has been adapted for radio four times. In 1955 and 1956, the BBC broadcast The Lord of the Rings, a 13-part radio adaptation of the story. In the 1960s radio station WBAI produced a short radio adaptation. A 1979 dramatization of The Lord of the Rings was broadcast in the United States and subsequently issued on tape and CD. In 1981, the BBC broadcast a new dramatization in 26 half-hour instalments, which has subsequently been made available on tape and CD by both the BBC and other publishers; for this purpose it is generally edited into 13 one-hour episodes.
114
+
115
+ Filmmakers who attempted to adapt Tolkien's works include William Snyder, Peter Shaffer, John Boorman, Ralph Bakshi, Peter Jackson and Guillermo del Toro. Other filmmakers and producers who were interested in an adaptation included Walt Disney, Al Brodax, Forrest J Ackerman, Samuel Gelfman, Denis O'Dell and Heinz Edelmann.
116
+
117
+ Following J. R. R. Tolkien's sale of the film rights for The Lord of the Rings to United Artists in 1969, rock band The Beatles considered a corresponding film project. David Lean was approached to direct, and while intrigued, was busy with Ryan's Daughter. The next choice, Stanley Kubrick, had to first familiarize himself with the books, only to then say they were unfilmable due to their immensity.[104][105] Michelangelo Antonioni was contacted, and Heinz Edelmann even offered to do it in animation, but the project fell apart.[106] British director John Boorman also tried to make an adaptation of The Lord of the Rings for United Artists in 1970. After the script was written, which included many changes to the story and the characters, the production company scrapped the project, thinking it too expensive and too risky.[107]
118
+
119
+ Two film adaptations of the book have been made. The first was J. R. R. Tolkien's The Lord of the Rings (1978), by animator Ralph Bakshi, the first part of what was originally intended to be a two-part adaptation of the story; it covers The Fellowship of the Ring and part of The Two Towers. A three-issue comic book version of the movie was also published in Europe (but not printed in English), with illustrations by Luis Bermejo.
120
+
121
+ The second and more commercially successful adaptation was Peter Jackson's live action The Lord of the Rings film trilogy, produced by New Line Cinema and released in three instalments as The Lord of the Rings: The Fellowship of the Ring (2001), The Lord of the Rings: The Two Towers (2002), and The Lord of the Rings: The Return of the King (2003). All three parts won multiple Academy Awards, and each received a Best Picture nomination. The final instalment of this trilogy was the second film to break the one-billion-dollar barrier and won a total of 11 Oscars (something only two other films in history, Ben-Hur and Titanic, have accomplished), including Best Picture, Best Director and Best Adapted Screenplay. Jackson later reprised his role as director, writer and producer to make a prequel trilogy based on The Hobbit.
122
+
123
+ The Hunt for Gollum, a fan film based on elements of the appendices to The Lord of the Rings, was released on the internet in May 2009 and has been covered in major media.[108] Born of Hope, written by Paula DiSante, directed by Kate Madison, and released in December 2009, is a fan film based upon the appendices of The Lord of the Rings.[109]
124
+
125
+ Rankin and Bass used a loophole in the publication of The Hobbit and The Lord of the Rings (which made them public domain in the US) to make animated TV specials based on The Hobbit, released in 1977, and a sequel based on the closing chapters of The Return of the King, which came out in 1980.
126
+
127
+ In 2017, Amazon acquired the global television rights to The Lord of the Rings for a multi-season television series of new stories set before The Hobbit and The Lord of the Rings,[110] based on J.R.R. Tolkien's writings about events of the Second Age of Middle-earth.[111] Amazon said the deal included potential for spin-off series as well.[112][113] It was later revealed that the show will apparently be set in the early Second Age, during the time of the forging of the Rings,[114] and will allegedly be a prequel to the live-action films.[115]
128
+
129
+ It was projected in 2018 to be the most expensive TV show ever produced.[116] Much of it will be produced in New Zealand.[117][118][119][120] The cast includes Robert Aramayo, Owain Arthur, Nazanin Boniadi, Tom Budge, Morfydd Clark (as Galadriel),[121] Ismael Cruz Córdova, Ema Horvath, Markella Kavenagh, Joseph Mawle, Tyroe Muhafidin, Sophia Nomvete, Megan Richards, Dylan Smith, Charlie Vickers, Daniel Weyman,[122] and Maxim Baldry.[123]
130
+
131
+ In 1990, Recorded Books published an audio version of The Lord of the Rings,[124] read by British actor Rob Inglis, who had previously starred in his own one-man stage productions of The Hobbit and The Lord of the Rings. A large-scale musical theatre adaptation, The Lord of the Rings, was first staged in Toronto, Ontario, Canada in 2006 and opened in London in June 2007.
132
+
133
+ The enormous popularity of Tolkien's work expanded the demand for fantasy fiction. Largely thanks to The Lord of the Rings, the genre flowered throughout the 1960s, and enjoys popularity to the present day. The opus has spawned many imitators, such as The Sword of Shannara, which Lin Carter called "the single most cold-blooded, complete rip-off of another book that I have ever read".[125]
134
+ Dungeons & Dragons, which popularized the role-playing game (RPG) genre in the 1970s, features many races found in The Lord of the Rings, most notably halflings (another term for hobbits), elves, dwarves, half-elves, orcs, and dragons. However, Gary Gygax, lead designer of the game, maintained that he was influenced very little by The Lord of the Rings, stating that he included these elements as a marketing move to draw on the popularity the work enjoyed at the time he was developing the game.[126]
135
+
136
+ Because D&D has gone on to influence many popular role-playing video games, the influence of The Lord of the Rings extends to many of them as well, with titles such as Dragon Quest,[127][128] the Ultima series, EverQuest, the Warcraft series, and the Elder Scrolls series of games[129] as well as video games set in Middle-earth itself.
137
+
138
+ Research also suggests that some consumers of fantasy games derive their motivation from trying to create an epic fantasy narrative which is influenced by The Lord of the Rings.[130]
139
+
140
+ In 1965, songwriter Donald Swann, who was best known for his collaboration with Michael Flanders as Flanders & Swann, set six poems from The Lord of the Rings and one from The Adventures of Tom Bombadil ("Errantry") to music. When Swann met with Tolkien to play the songs for his approval, Tolkien suggested for "Namárië" (Galadriel's lament) a setting reminiscent of plain chant, which Swann accepted.[131] The songs were published in 1967 as The Road Goes Ever On: A Song Cycle,[132] and a recording of the songs performed by singer William Elvin with Swann on piano was issued that same year by Caedmon Records as Poems and Songs of Middle Earth.[133]
141
+
142
+ Rock bands of the 1970s were musically and lyrically inspired by the fantasy-embracing counter-culture of the time; the British rock band Led Zeppelin recorded several songs that contain explicit references to The Lord of the Rings, such as mentioning Gollum in "Ramble On", the Misty Mountains in "Misty Mountain Hop", and Ringwraiths in "The Battle of Evermore". In 1970, the Swedish musician Bo Hansson released an instrumental concept album based on the book titled Sagan om ringen (translated as "The Saga of the Ring", which was the title of the Swedish translation of The Lord of the Rings at the time).[134] The album was subsequently released internationally as Music Inspired by Lord of the Rings in 1972.[134]
143
+
144
+ The songs "Rivendell" and "The Necromancer" by the progressive rock band Rush were inspired by Tolkien. Styx also paid homage to Tolkien on their album Pieces of Eight with the song "Lords of the Ring", while Black Sabbath's song, "The Wizard", which appeared on their debut album, was influenced by Tolkien's hero, Gandalf. Progressive rock group Camel paid homage to the text in their lengthy composition "Nimrodel/The Procession/The White Rider", and progressive rock band Barclay James Harvest was inspired by the character Galadriel to write a song by that name, and used "Bombadil", the name of another character, as a pseudonym under which their 1972 single "Breathless"/"When the City Sleeps" was released; there are other references scattered through the BJH oeuvre.
145
+
146
+ Later, from the 1980s to the present day, many heavy metal acts have been influenced by Tolkien. Blind Guardian has written many songs relating to Middle-earth, including the full concept album Nightfall in Middle Earth. Almost the entire discography of Battlelore is Tolkien-themed. Summoning's music is based upon Tolkien, and the band holds the distinction of being the only artist to have crafted a song entirely in the Black Speech of Mordor. Gorgoroth, Cirith Ungol and Amon Amarth take their names from places in Mordor, and Burzum take their name from the Black Speech of Mordor. The Finnish metal band Nightwish and the Norwegian metal band Tristania have also incorporated many Tolkien references into their music. American heavy metal band Megadeth released two songs titled "This Day We Fight!" and "How the Story Ends", which were both inspired by The Lord of the Rings.[135] German folk metal band Eichenschild is named for Thorin Oakenshield, a character in The Hobbit, and naturally has a number of Tolkien-themed songs. They are not to be confused with the '70s folk rock band Thorin Eichenschild.
147
+
148
+ In 1988, Dutch composer and trombonist Johan de Meij completed his Symphony No. 1 "The Lord of the Rings", which encompasses five movements, titled "Gandalf", "Lothlórien", "Gollum", "Journey in the Dark", and "Hobbits". In 1989 the symphony received the Sudler Composition Award, given biennially for the best wind band composition. The Danish Tolkien Ensemble have released a number of albums that feature the complete poems and songs of The Lord of the Rings set to music, with some featuring recitation by Christopher Lee. In 2018, de Meij completed his Symphony No. 5 "Return to Middle Earth", which has six movements, titled "Mîri na Fëanor (Fëanor’s Jewels)", "Tinúviel (Nightingale)", "Ancalagon i-môr (Ancalagon, The Black)", "Arwen Undómiel (Evenstar)", "Dagor Delothrin (The War of Wrath)", and "Thuringwethil (Woman of Secret Shadow)".
149
+
150
+ Enya wrote an instrumental piece called "Lothlórien" in 1991, and composed two songs for the film The Lord of the Rings: The Fellowship of the Ring—"May It Be" (sung in English and Quenya) and "Aníron" (sung in Sindarin).
151
+
152
+ The 2020 modern classical album "Music for Piano and Strings" by pianist and composer Holger Skepeneit contains two Lord of the Rings-inspired pieces, "Laced with Ithildin" and "Nimrodel's Voice".
153
+
154
+ The Lord of the Rings has had a profound and wide-ranging impact on popular culture, beginning with its publication in the 1950s, but especially throughout the 1960s and 1970s, during which time young people embraced it as a countercultural saga.[136] "Frodo Lives!" and "Gandalf for President" were two phrases popular amongst United States Tolkien fans during this time.[137]
155
+
156
+ Parodies like the Harvard Lampoon's Bored of the Rings, the VeggieTales episode "Lord of the Beans", the South Park episode "The Return of the Fellowship of the Ring to the Two Towers", the Futurama film Bender's Game, The Adventures of Jimmy Neutron: Boy Genius episode "Lights! Camera! Danger!", The Big Bang Theory episode "The Precious Fragmentation", and the American Dad! episode "The Return of the Bling" are testimony to the work's continual presence in popular culture.
157
+
158
+ In 1969, Tolkien sold the merchandising rights to The Lord of the Rings (and The Hobbit) to United Artists under an agreement stipulating a lump sum payment of £10,000[138] plus a 7.5% royalty after costs,[139] payable to Allen & Unwin and the author.[140] In 1976, three years after the author's death, United Artists sold the rights to Saul Zaentz Company, who now trade as Tolkien Enterprises. Since then all "authorized" merchandise has been signed off by Tolkien Enterprises, although the intellectual property rights to the specific likenesses of characters and other imagery from various adaptations are generally held by the adaptors.[141]
159
+
160
+ Outside any commercial exploitation from adaptations, from the late 1960s onwards there has been an increasing variety of original licensed merchandise, from posters and calendars created by illustrators such as Barbara Remington,[142] Pauline Baynes and the Brothers Hildebrandt, to figurines and miniatures, to computer, video, tabletop and role-playing games. Recent examples include the Spiel des Jahres award-winning (for "best use of literature in a game") board game The Lord of the Rings by Reiner Knizia and the Golden Joystick award-winning massively multiplayer online role-playing game The Lord of the Rings Online: Shadows of Angmar by Turbine, Inc.
161
+
162
+ The Lord of the Rings has been mentioned in numerous songs, including "The Ballad of Bilbo Baggins" by Leonard Nimoy and Led Zeppelin's "Misty Mountain Hop", "Over the Hills and Far Away", "Ramble On", and "The Battle of Evermore". Genesis' song "Stagnation" (from Trespass, 1970) was about Gollum, Rush included the song "Rivendell" on their second studio album Fly by Night, and Argent included the song "Lothlorien" on the 1971 album Ring of Hands.
163
+
164
+ Steve Peregrin Took (born Stephen Ross Porter) of British rock band T. Rex took his name from the hobbit Peregrin Took (better known as Pippin). Took later recorded under the pseudonym 'Shagrat the Vagrant', before forming a band called Shagrat in 1970.
165
+
166
+ On 5 November 2019, BBC News included The Lord of the Rings on its list of the 100 most influential novels.[143]
en/5333.html.txt ADDED
@@ -0,0 +1,166 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+ The Lord of the Rings is an epic[1] high-fantasy novel written by English author and scholar J. R. R. Tolkien. The story began as a sequel to Tolkien's 1937 fantasy novel The Hobbit but eventually developed into a much larger work. Written in stages between 1937 and 1949, The Lord of the Rings is one of the best-selling novels ever written, with over 150 million copies sold.[2]
4
+
5
+ The title of the novel refers to the story's main antagonist, the Dark Lord Sauron,[a] who had in an earlier age created the One Ring to rule the other Rings of Power as the ultimate weapon in his campaign to conquer and rule all of Middle-earth. From quiet beginnings in the Shire, a hobbit land not unlike the English countryside, the story ranges across Middle-earth, following the course of the War of the Ring through the eyes of its characters, most notably the hobbits Frodo, Sam, Merry and Pippin.
6
+
7
+ Although generally known to readers as a trilogy, the work was initially intended by Tolkien to be one volume of a two-volume set, the other to be The Silmarillion, but this idea was dismissed by his publisher.[4][5] For economic reasons, The Lord of the Rings was published in three volumes over the course of a year from 29 July 1954 to 20 October 1955.[4][6] The three volumes were titled The Fellowship of the Ring, The Two Towers and The Return of the King. Structurally, the novel is divided internally into six books, two per volume, with several appendices of background material included at the end. Some editions combine the entire work into a single volume, per the author's original intent. The Lord of the Rings has since been reprinted numerous times and translated into 38 languages.
8
+
9
+ Tolkien's work has been the subject of extensive analysis of its themes and origins. Although a major work in itself, the story was only the last movement of a larger epic Tolkien had worked on since 1917,[7] in a process he described as mythopoeia.[b] Influences on this earlier work, and on the story of The Lord of the Rings, include philology, mythology, religion, the architecture of Oxford, England, and the author's distaste for the effects of industrialization, as well as earlier fantasy works and Tolkien's experiences in World War I.[9] The Lord of the Rings in its turn is considered to have had a great effect on modern fantasy; the impact of Tolkien's works is such that the use of the words "Tolkienian" and "Tolkienesque" has been recorded in the Oxford English Dictionary.[10]
10
+
11
+ The enduring popularity of The Lord of the Rings has led to numerous references in popular culture, the founding of many societies by fans of Tolkien's works,[11] and the publication of many books about Tolkien and his works. The Lord of the Rings has inspired, and continues to inspire, artwork, music, films and television, video games, board games, and subsequent literature. Award-winning adaptations of The Lord of the Rings have been made for radio, theatre, and film.[12] In 2003, it was named Britain's best novel of all time in the BBC's The Big Read. In 2015, the BBC ranked The Lord of the Rings 26th on its list of the 100 greatest British novels.[13]
12
+
13
+ The narrative follows on from The Hobbit, in which the hobbit Bilbo Baggins finds the Ring, which had been in the possession of the creature Gollum. The story begins in the Shire, where Frodo Baggins inherits the Ring from Bilbo, his cousin[c] and guardian. Neither hobbit is aware of the Ring's nature, but Gandalf the Grey, a wizard and an old friend of Bilbo, suspects it to be the Ring lost by Sauron, the Dark Lord, long ago. Seventeen years later, after Gandalf confirms this is true, he tells Frodo the history of the Ring and counsels him to take it away from the Shire. Frodo sets out, accompanied by his gardener, servant and friend, Samwise "Sam" Gamgee, and two cousins, Meriadoc "Merry" Brandybuck and Peregrin "Pippin" Took. They are nearly caught by the Nazgûl, but shake off their pursuers by cutting through the Old Forest. There they are aided by Tom Bombadil, a strange and merry fellow who lives with his wife Goldberry in the forest.
14
+
15
+ The hobbits reach the town of Bree, where they encounter a Ranger named Strider, whom Gandalf had mentioned in a letter. Strider persuades the hobbits to take him on as their guide and protector. Together, they leave Bree after another close escape from the Nazgûl. On the hill of Weathertop, they are again attacked by the Nazgûl, who wound Frodo with a cursed blade. Strider fights them off and leads the hobbits towards the Elven refuge of Rivendell. Frodo falls deathly ill from the wound. The Nazgûl nearly capture him at the Ford of Bruinen, but flood waters summoned by Elrond, master of Rivendell, rise up and overwhelm them.
16
+
17
+ Frodo recovers in Rivendell under Elrond's care. The Council of Elrond discusses the history of Sauron and the Ring. Strider is revealed to be Aragorn, Isildur's heir. Gandalf reports that the chief wizard Saruman has betrayed them and is now working to become a power in his own right. The Council decides that the Ring must be destroyed, but that can only be done by sending it to the fire of Mount Doom in Mordor, where it was forged. Frodo takes this task upon himself. Elrond, with the advice of Gandalf, chooses companions for him. The Company of the Ring, also called the Fellowship of the Ring, are nine in number: Frodo, Sam, Merry, Pippin, Aragorn, Gandalf, Gimli the Dwarf, Legolas the Elf, and the Man Boromir, son of Denethor, the Ruling Steward of the land of Gondor.
18
+
19
+ After a failed attempt to cross the Misty Mountains over the Redhorn Pass, the Company take the perilous path through the Mines of Moria. They learn of the fate of Balin and his colony of Dwarves. After surviving an attack, they are pursued by Orcs and by a Balrog, an ancient fire demon. Gandalf faces the Balrog, and both of them fall into the abyss. The others escape and find refuge in the Elven forest of Lothlórien, where they are counselled by its rulers, Galadriel and Celeborn.
20
+
21
+ With boats and gifts from Galadriel, the Company travel down the River Anduin to the hill of Amon Hen. There, Boromir tries to take the Ring from Frodo, but Frodo puts it on and disappears. Frodo chooses to go alone to Mordor, but Sam guesses what he intends and goes with him.
22
+
23
+ Uruk-hai sent by Saruman and other Orcs sent by Sauron kill Boromir and capture Merry and Pippin. Aragorn, Gimli and Legolas debate which pair of hobbits to follow. They decide to pursue the Orcs taking Merry and Pippin to Saruman. In the kingdom of Rohan, the Orcs are slain by a company of Rohirrim. Merry and Pippin escape into Fangorn Forest, where they are befriended by Treebeard, the oldest of the tree-like Ents. Aragorn, Gimli and Legolas track the hobbits to Fangorn. There they unexpectedly meet Gandalf.
24
+
25
+ Gandalf explains that he slew the Balrog. Darkness took him, but he was sent back to Middle-earth to complete his mission. He is clothed in white and is now Gandalf the White, for he has taken Saruman's place as the chief of the wizards. Gandalf assures his friends that Merry and Pippin are safe. Together they ride to Edoras, capital of Rohan. Gandalf frees Théoden, King of Rohan, from the influence of Saruman's spy Gríma Wormtongue. Théoden musters his fighting strength and rides with his men to the ancient fortress of Helm's Deep, while Gandalf departs to seek help from Treebeard.
26
+
27
+ Meanwhile, the Ents, roused by Merry and Pippin from their peaceful ways, attack Isengard, Saruman's stronghold, and trap the wizard in the tower of Orthanc. Gandalf convinces Treebeard to send an army of Huorns to Théoden's aid. Gandalf brings an army of Rohirrim to Helm's Deep, and they defeat the Orcs, who flee into the forest of Huorns, never to be seen again. Gandalf offers Saruman a chance to turn away from evil. When Saruman refuses to listen, Gandalf strips him of his rank and most of his powers.
28
+
29
+ After Saruman crawls back to his prison, Wormtongue drops a sphere to try to kill Gandalf. Pippin picks it up. It is revealed to be a palantír, a seeing-stone that Saruman used to speak with Sauron and through which Saruman was ensnared. Pippin is seen by Sauron. Gandalf rides for Minas Tirith, chief city of Gondor, taking Pippin with him.
30
+
31
+ Frodo and Sam capture Gollum, who has followed them from Moria. They force him to guide them to Mordor. They find that the Black Gate of Mordor is too well guarded, so instead they travel to a secret way Gollum knows. On the way, they encounter Faramir, who, unlike his brother Boromir, resists the temptation to seize the Ring. Gollum – who is torn between his loyalty to Frodo and his desire for the Ring – betrays Frodo by leading him to the great spider Shelob in the tunnels of Cirith Ungol. Frodo falls to Shelob's sting. But with the help of Galadriel's gifts, Sam fights off the spider. Believing Frodo to be dead, Sam takes the Ring to continue the quest alone. Orcs find Frodo; Sam overhears them and learns that Frodo is still alive.
32
+
33
+ Sauron sends a great army against Gondor. Gandalf arrives at Minas Tirith to warn Denethor of the attack, while Théoden musters the Rohirrim to ride to Gondor's aid. Minas Tirith is besieged. Denethor is deceived by Sauron and falls into despair. He burns himself alive on a pyre, nearly taking his son Faramir with him. Aragorn, accompanied by Legolas, Gimli and the Rangers of the North, takes the Paths of the Dead to recruit the Dead Men of Dunharrow, who are bound by a curse which denies them rest until they fulfil their ancient forsworn oath to fight for the King of Gondor.
34
+
35
+ Following Aragorn, the Army of the Dead strikes terror into the Corsairs of Umbar invading southern Gondor. Aragorn defeats the Corsairs and uses their ships to transport the men of southern Gondor up the Anduin, reaching Minas Tirith just in time to turn the tide of battle. Théoden's niece Éowyn, who joined the army in disguise, slays the Lord of the Nazgûl with help from Merry. Together, Gondor and Rohan defeat Sauron's army in the Battle of the Pelennor Fields, though at great cost. Théoden is killed, and Éowyn and Merry are wounded.
36
+
37
+ Meanwhile, Sam rescues Frodo from the tower of Cirith Ungol. They set out across Mordor. Aragorn leads an army of men from Gondor and Rohan to march on the Black Gate to distract Sauron from his true danger. His army is vastly outnumbered by the great might of Sauron. Frodo and Sam reach the edge of the Cracks of Doom, but Frodo cannot resist the Ring any longer. He claims it for himself and puts it on his finger.
38
+
39
+ Gollum suddenly reappears. He struggles with Frodo and bites off Frodo's finger with the Ring still on it. Celebrating wildly, Gollum loses his footing and falls into the Fire, taking the Ring with him. When the Ring is destroyed, Sauron loses his power forever. All he created collapses, the Nazgûl perish, and his armies are thrown into such disarray that Aragorn's forces emerge victorious.
40
+
41
+ Aragorn is crowned King of Arnor and Gondor, and weds Arwen, daughter of Elrond. The four hobbits make their way back to the Shire, only to find that it has been taken over by men directed by one "Sharkey" (whom they later discover to be Saruman). The hobbits raise a rebellion and liberate the Shire, though 19 hobbits are killed and 30 wounded. Frodo stops the hobbits from killing the wizard after Saruman attempts to stab Frodo, but Gríma turns on Saruman and kills him in front of Bag End, Frodo's home. He is slain in turn by hobbit archers, and the War of the Ring comes to its true end on Frodo's very doorstep.
42
+
43
+ Merry and Pippin are celebrated as heroes. Sam marries Rosie Cotton and uses his gifts from Galadriel to help heal the Shire. But Frodo is still wounded in body and spirit, having borne the Ring for so long. A few years later, in the company of Bilbo and Gandalf, Frodo sails from the Grey Havens west over the Sea to the Undying Lands to find peace.
44
+
45
+ In the appendices, Sam gives his daughter Elanor the Red Book of Westmarch, which contains the story of Bilbo's adventures and the War of the Ring as witnessed by the hobbits. Sam is then said to have crossed west over the Sea himself, the last of the Ring-bearers.
46
+
47
+ Tolkien presents The Lord of the Rings within a fictional frame-story where he is not the original author, but merely the translator of part of an ancient document, the Red Book of Westmarch. Various details of the frame-story appear in the Prologue, its 'Note on Shire Records', and in the Appendices, notably Appendix F. In this frame-story, the Red Book is also the source of Tolkien's other works relating to Middle-earth: The Hobbit, The Silmarillion, and The Adventures of Tom Bombadil.[14]
48
+
49
+ The Lord of the Rings started as a sequel to J. R. R. Tolkien's work The Hobbit, published in 1937.[15] The popularity of The Hobbit had led George Allen & Unwin, the publishers, to request a sequel. Tolkien warned them that he wrote quite slowly, and responded with several stories he had already developed. Having rejected his contemporary drafts for The Silmarillion, putting on hold Roverandom, and accepting Farmer Giles of Ham, Allen & Unwin thought more stories about hobbits would be popular.[16] So at the age of 45, Tolkien began writing the story that would become The Lord of the Rings. The story would not be finished until 12 years later, in 1949, and would not be fully published until 1955, when Tolkien was 63 years old.
50
+
51
+ Persuaded by his publishers, he started "a new Hobbit" in December 1937.[15] After several false starts, the story of the One Ring emerged. The idea for the first chapter ("A Long-Expected Party") arrived fully formed, although the reasons behind Bilbo's disappearance, the significance of the Ring, and the title The Lord of the Rings did not arrive until the spring of 1938.[15] Originally, he planned to write a story in which Bilbo had used up all his treasure and was looking for another adventure to gain more; however, he remembered the Ring and its powers and thought that would be a better focus for the new work.[15] As the story progressed, he also brought in elements from The Silmarillion mythology.[17]
52
+
53
+ Writing was slow, because Tolkien had a full-time academic position teaching linguistics (with a focus on languages with linguistic elements he incorporated into his books, such as Old English).[18] "I have spent nearly all the vacation-times of seventeen years examining [...] Writing stories in prose or verse has been stolen, often guiltily, from time already mortgaged..."[19] Tolkien abandoned The Lord of the Rings during most of 1943 and only restarted it in April 1944,[15] as a serial for his son Christopher Tolkien, who was sent chapters as they were written while he was serving in South Africa with the Royal Air Force. Tolkien made another major effort in 1946, and showed the manuscript to his publishers in 1947.[15] The story was effectively finished the next year, but Tolkien did not complete the revision of earlier parts of the work until 1949.[15] The original manuscripts, which total 9,250 pages, now reside in the J. R. R. Tolkien Collection at Marquette University.[20]
54
+
55
+ Unusually for 20th century novels, the prose narrative is supplemented throughout by over 60 pieces of poetry. These include verse and songs of many genres: for wandering, marching to war, drinking, and having a bath; narrating ancient myths, riddles, prophecies, and magical incantations; of praise and lament (elegy). Some, such as riddles, charms, elegies, and narrating heroic actions are found in Old English poetry.[21] Scholars have stated that the poetry is essential for the fiction to work aesthetically and thematically; it adds information not given in the prose; and it brings out characters and their backgrounds.[22][23] The poetry has been judged to be of high technical skill, which Tolkien carried across into his prose, for instance writing much of Tom Bombadil's speech in metre.[24]
56
+
57
+ The influence of the Welsh language, which Tolkien had learned, is summarized in his essay English and Welsh: "If I may once more refer to my work, The Lord of the Rings, in evidence: the names of persons and places in this story were mainly composed on patterns deliberately modelled on those of Welsh (closely similar but not identical). This element in the tale has given perhaps more pleasure to more readers than anything else in it."[25]
58
+
59
+ The Lord of the Rings developed as a personal exploration by Tolkien of his interests in philology, religion (particularly Catholicism[26]), fairy tales, Norse and general Germanic mythology,[27][28] and also Celtic,[29][better source needed] Slavic,[30][31][32] Persian,[33] Greek,[34] and Finnish mythology.[35] Tolkien acknowledged, and external critics have verified, the influences of George MacDonald and William Morris[36] and the Anglo-Saxon poem Beowulf.[37] The question of a direct influence of Wagner's The Ring of the Nibelung on Tolkien's work is debated by critics.
60
+
61
+ Tolkien included no explicit religion or cult in his work. Rather, the themes, moral philosophy, and cosmology of The Lord of the Rings reflect his Catholic worldview. In one of his letters Tolkien states, "The Lord of the Rings is of course a fundamentally religious and Catholic work; unconsciously so at first, but consciously in the revision. That is why I have not put in, or have cut out, practically all references to anything like 'religion', to cults or practices, in the imaginary world. For the religious element is absorbed into the story and the symbolism."[26]
62
+
63
+ Some locations and characters were inspired by Tolkien's childhood in Birmingham, where he first lived near Sarehole Mill, and later near Edgbaston Reservoir.[39] There are also hints of the Black Country, which is within easy reach of northwest Edgbaston. This shows in such names as "Underhill", and in the description of Saruman's industrialization of Isengard and the Shire. It has been suggested that the Shire and its surroundings were based on the countryside around Stonyhurst College in Lancashire, where Tolkien frequently stayed during the 1940s, but this claim is disputed by reputable Tolkien scholars.[40][41] The work was influenced by the effects of his military service during World War I, to the point that one critic diagnosed Frodo as suffering from posttraumatic stress disorder, which was called "shell-shock" at the time of the Battle of the Somme, in which Tolkien served.[42]
64
+
65
+ A dispute with his publisher, George Allen & Unwin, led to the book being offered to Collins in 1950. Tolkien intended The Silmarillion (itself largely unrevised at this point) to be published along with The Lord of the Rings, but Allen & Unwin were unwilling to do this. After Milton Waldman, his contact at Collins, expressed the belief that The Lord of the Rings itself "urgently wanted cutting", Tolkien eventually demanded that they publish the book in 1952.[43] Collins did not, and so Tolkien wrote to Allen & Unwin, saying, "I would gladly consider the publication of any part of the stuff", fearing his work would never see the light of day.[15]
66
+
67
+ For publication, the book was divided into three volumes to minimize any potential financial loss due to the high cost of type-setting and modest anticipated sales: The Fellowship of the Ring (Books I and II), The Two Towers (Books III and IV), and The Return of the King (Books V and VI plus six appendices).[44] Delays in producing appendices, maps and especially an index led to the volumes being published later than originally hoped – on 29 July 1954, on 11 November 1954 and on 20 October 1955 respectively in the United Kingdom. In the United States, Houghton Mifflin published The Fellowship of the Ring on 21 October 1954, The Two Towers on 21 April 1955, and The Return of the King on 5 January 1956.[45]
68
+
69
+ The Return of the King was especially delayed due to Tolkien revising the ending and preparing appendices (some of which had to be left out because of space constraints). Tolkien did not like the title The Return of the King, believing it gave away too much of the storyline, but deferred to his publisher's preference.[46] Tolkien wrote that the title The Two Towers "can be left ambiguous,"[47] but also considered naming the two as Orthanc and Barad-dûr, Minas Tirith and Barad-dûr, or Orthanc and the Tower of Cirith Ungol.[47][48] However, a month later he wrote a note published at the end of The Fellowship of the Ring and later drew a cover illustration, both of which identified the pair as Minas Morgul and Orthanc.[49][50]
70
+
71
+ Tolkien was initially opposed to titles being given to each two-book volume, preferring instead the use of book titles: e.g. The Lord of the Rings: Vol. 1, The Ring Sets Out and The Ring Goes South; Vol. 2, The Treason of Isengard and The Ring Goes East; Vol. 3, The War of the Ring and The End of the Third Age. However, these individual book titles were later scrapped, and after pressure from his publishers, Tolkien instead suggested the titles: Vol. 1, The Shadow Grows; Vol. 2, The Ring in the Shadow; Vol. 3, The War of the Ring or The Return of the King.[51][52]
72
+
73
+ Because the three-volume binding was so widely distributed, the work is often referred to as the Lord of the Rings "trilogy". In a letter to the poet W. H. Auden (who famously reviewed the final volume in 1956[53]), Tolkien himself made use of the term "trilogy" for the work,[54] though he did at other times consider this incorrect, as it was written and conceived as a single book.[55] It is also often called a novel; however, Tolkien objected to this term as well, since he viewed it as a heroic romance.[56]
74
+
75
+ The books were published under a profit-sharing arrangement, whereby Tolkien would not receive an advance or royalties until the books had broken even, after which he would take a large share of the profits.[57] The work has ultimately become one of the best-selling novels ever written, with 50 million copies sold by 2003[58] and over 150 million copies sold by 2007.[2]
76
+
77
+ The book was published in the UK by Allen & Unwin until 1990 when the publisher and its assets were acquired by HarperCollins.[59][60]
78
+
79
+ In the early 1960s Donald A. Wollheim, science fiction editor of the paperback publisher Ace Books, claimed that The Lord of the Rings was not protected in the United States under American copyright law because Houghton Mifflin, the US hardcover publisher, had neglected to copyright the work there.[61][62] Then, in 1965, Ace Books proceeded to publish an edition, unauthorized by Tolkien and without paying royalties to him. Tolkien took issue with this and quickly notified his fans of his objection.[63] Grass-roots pressure from these fans became so great that Ace Books withdrew their edition and made a nominal payment to Tolkien.[64][65]
80
+
81
+ Authorized editions followed from Ballantine Books and Houghton Mifflin to tremendous commercial success. Tolkien undertook various textual revisions to produce a version of the book that would be published with his consent and establish an unquestioned US copyright. This text became the Second Edition of The Lord of the Rings, published in 1965.[64] The first Ballantine paperback edition was printed in October that year, and sold a quarter of a million copies within ten months. On 4 September 1966, the novel debuted on the New York Times' Paperback Bestsellers list as number three, and was number one by 4 December, a position it held for eight weeks.[66] Houghton Mifflin editions after 1994 consolidate variant revisions by Tolkien, and corrections supervised by Christopher Tolkien, which resulted, after some initial glitches, in a computer-based unified text.[67]
82
+
83
+ In 2004, for the 50th Anniversary Edition, Wayne G. Hammond and Christina Scull, under supervision from Christopher Tolkien, studied and revised the text to eliminate as many errors and inconsistencies as possible, some of which had been introduced by well-meaning compositors of the first printing in 1954 and had never been corrected.[68] The 2005 edition of the book contained further corrections noticed by the editors and submitted by readers. Further corrections were added to the 60th Anniversary Edition in 2014.[69]
84
+
85
+ Several editions, notably the 50th Anniversary Edition, combine all three books into one volume, with the result that pagination varies widely over the various editions.
86
+
87
+ From 1988 to 1992 Christopher Tolkien published the surviving drafts of The Lord of the Rings, chronicling and illuminating with commentary the stages of the text's development, in volumes 6–9 of his History of Middle-earth series. The four volumes carry the titles The Return of the Shadow, The Treason of Isengard, The War of the Ring, and Sauron Defeated.
88
+
89
+ The novel has been translated, with varying degrees of success, into at least 56 languages.[70] Tolkien, an expert in philology, examined many of these translations, and made comments on each that reflect both the translation process and his work. As he was unhappy with some choices made by early translators, such as the Swedish translation by Åke Ohlmarks,[71] Tolkien wrote a "Guide to the Names in The Lord of the Rings" (1967). Because The Lord of the Rings purports to be a translation of the fictitious Red Book of Westmarch, with the English language representing the Westron of the "original", Tolkien suggested that translators attempt to capture the interplay between English and the invented nomenclature of the work, and gave several examples along with general guidance.
90
+
91
+ While early reviews for The Lord of the Rings were mixed, reviews in various media have been, on the whole, highly positive and acknowledge Tolkien's literary achievement as a significant one. The initial review in the Sunday Telegraph described it as "among the greatest works of imaginative fiction of the twentieth century".[72] The Sunday Times echoed this sentiment, stating that "the English-speaking world is divided into those who have read The Lord of the Rings and The Hobbit and those who are going to read them."[72] The New York Herald Tribune also seemed to have an idea of how popular the books would become, writing in its review that they were "destined to outlast our time".[73] W. H. Auden, an admirer of Tolkien's writings, regarded The Lord of the Rings as a "masterpiece", further stating that in some cases it outdid the achievement of John Milton's Paradise Lost.[74] Kenneth F. Slater[75] wrote in Nebula Science Fiction, April 1955: "... if you don't read it, you have missed one of the finest books of its type ever to appear".[76]
92
+
93
+ New York Times reviewer Judith Shulevitz criticized the "pedantry" of Tolkien's literary style, saying that he "formulated a high-minded belief in the importance of his mission as a literary preservationist, which turns out to be death to literature itself".[77] Critic Richard Jenkyns, writing in The New Republic, criticized the work for a lack of psychological depth. Both the characters and the work itself are, according to Jenkyns, "anemic, and lacking in fibre".[78] Even within Tolkien's literary group, The Inklings, reviews were mixed. Hugo Dyson complained loudly at its readings.[79][80] However, another Inkling, C. S. Lewis, had very different feelings, writing, "here are beauties which pierce like swords or burn like cold iron. Here is a book which will break your heart." Despite these reviews and its lack of paperback printing until the 1960s, The Lord of the Rings initially sold well in hardback.[7]
94
+
95
+ In 1957, The Lord of the Rings was awarded the International Fantasy Award. Despite its numerous detractors, the publication of the Ace Books and Ballantine paperbacks helped The Lord of the Rings become immensely popular in the United States in the 1960s. The book has remained so ever since, ranking as one of the most popular works of fiction of the twentieth century, judged by both sales and reader surveys.[81] In the 2003 "Big Read" survey conducted in Britain by the BBC, The Lord of the Rings was found to be the "Nation's best-loved book". In similar 2004 polls both Germany[82] and Australia[83] also found The Lord of the Rings to be their favourite book. In a 1999 poll of Amazon.com customers, The Lord of the Rings was judged to be their favourite "book of the millennium".[84]
96
+
97
+ C. S. Lewis observed that the writing is rich in that some of the 'good' characters have darker sides, and likewise some of the villains have "good impulses".[85]
98
+
99
+ Although The Lord of the Rings was published in the 1950s, Tolkien insisted that the One Ring was not an allegory for the atomic bomb,[86] nor were his works a strict allegory of any kind, but were open to interpretation as the reader saw fit.[87][88]
100
+
101
+ A few critics have found what they consider racial elements in the story, generally based upon their views of how Tolkien's imagery depicts good and evil, characters' race (e.g. Elf, Dwarf, Hobbit, Southron, Númenórean, Orc), and how the characters' race is seen as determining their behaviour.[89][90][91] Counter-arguments note that race-focused critiques often omit relevant textual evidence,[92][93][94] cite imagery from adaptations rather than the work itself,[95] ignore the absence of evidence of racist attitudes or events in the author's personal life,[92][95][96] and claim that the perception of racism is itself a marginal view.[96]
102
+
103
+ Such opinions, which pit races against each other, most likely reflect Tolkien's criticism of war rather than a racist perspective. In The Two Towers, the character Samwise sees a fallen foe, a man of colour, and considers the humanity of this fallen Southron.[97] Director Peter Jackson, in the director's commentary on this scene, argues that Tolkien is not projecting negativity onto the individual soldier because of his race, but onto the evil authority driving him.[98] These sentiments, Jackson argues, arose from Tolkien's experience in the Great War and found their way into his writings to show the evils of war itself, not of other races.
104
+
105
+ Critics have also seen social class rather than race as being the determining factor in the portrayal of good and evil.[92] Commentators such as science fiction author David Brin have interpreted the work to hold unquestioning devotion to a traditional elitist social structure.[99] In his essay "Epic Pooh", science fiction and fantasy author Michael Moorcock critiques the world-view displayed by the book as deeply conservative, in both the "paternalism" of the narrative voice and the power-structures in the narrative.[100] Tom Shippey cites the origin of this portrayal of evil as a reflection of the prejudices of European middle-classes during the inter-war years towards the industrial working class.[101]
106
+
107
+ Other observers have described Christian themes in the work, specifically Roman Catholicism.[102]
108
+
109
+ The book has been read as fitting the model of Joseph Campbell's "monomyth".[103]
110
+
111
+ The Lord of the Rings has been adapted for film, radio and stage.
112
+
113
+ The book has been adapted for radio four times. In 1955 and 1956, the BBC broadcast The Lord of the Rings, a 13-part radio adaptation of the story. In the 1960s radio station WBAI produced a short radio adaptation. A 1979 dramatization of The Lord of the Rings was broadcast in the United States and subsequently issued on tape and CD. In 1981, the BBC broadcast The Lord of the Rings, a new dramatization in 26 half-hour instalments. This dramatization has subsequently been made available on both tape and CD by the BBC and other publishers. For this purpose it is generally edited into 13 one-hour episodes.
114
+
115
+ Filmmakers who attempted to adapt Tolkien's works include William Snyder, Peter Shaffer, John Boorman, Ralph Bakshi, Peter Jackson and Guillermo del Toro. Other filmmakers and producers who were interested in an adaptation included Walt Disney, Al Brodax, Forrest J Ackerman, Samuel Gelfman, Denis O'Dell and Heinz Edelmann.
116
+
117
+ Following J. R. R. Tolkien's sale of the film rights for The Lord of the Rings to United Artists in 1969, rock band The Beatles considered a corresponding film project. David Lean was approached to direct, and while intrigued, was busy with Ryan's Daughter. The next choice, Stanley Kubrick, had to first familiarize himself with the books, only to then say they were unfilmable due to their immensity.[104][105] Michelangelo Antonioni was contacted, and Heinz Edelmann even offered to do it in animation, but the project fell apart.[106] British director John Boorman also tried to make an adaptation of The Lord of the Rings for United Artists in 1970. After the script was written, which included many changes to the story and the characters, the production company scrapped the project, thinking it too expensive and too risky.[107]
118
+
119
+ Two film adaptations of the book have been made. The first was J. R. R. Tolkien's The Lord of the Rings (1978), by animator Ralph Bakshi, the first part of what was originally intended to be a two-part adaptation of the story; it covers The Fellowship of the Ring and part of The Two Towers. A three-issue comic book version of the movie was also published in Europe (but not printed in English), with illustrations by Luis Bermejo.
120
+
121
+ The second and more commercially successful adaptation was Peter Jackson's live action The Lord of the Rings film trilogy, produced by New Line Cinema and released in three instalments as The Lord of the Rings: The Fellowship of the Ring (2001), The Lord of the Rings: The Two Towers (2002), and The Lord of the Rings: The Return of the King (2003). All three parts won multiple Academy Awards and received consecutive Best Picture nominations. The final instalment of this trilogy was the second film to break the one-billion-dollar barrier and won a total of 11 Oscars (something only two other films in history, Ben-Hur and Titanic, have accomplished), including Best Picture, Best Director and Best Adapted Screenplay. Jackson later reprised his role as director, writer and producer to make a prequel trilogy based on The Hobbit.
122
+
123
+ The Hunt for Gollum, a fan film based on elements of the appendices to The Lord of the Rings, was released on the internet in May 2009 and has been covered in major media.[108] Born of Hope, written by Paula DiSante, directed by Kate Madison, and released in December 2009, is a fan film based upon the appendices of The Lord of the Rings.[109]
124
+
125
+ Rankin/Bass used a loophole in the publication of The Hobbit and The Lord of the Rings (which made them public domain in the US) to make animated TV specials based on The Hobbit, released in 1977, and a sequel based on the closing chapters of The Return of the King, released in 1980.
126
+
127
+ In 2017, Amazon acquired the global television rights to The Lord of the Rings for a multi-season television series of new stories set before The Hobbit and The Lord of the Rings,[110] based on J. R. R. Tolkien's writings about events of the Second Age of Middle-earth.[111] Amazon said the deal included potential for spin-off series as well.[112][113] It was later revealed that the show will apparently be set in the early Second Age, during the time of the forging of the Rings,[114] and will allegedly be a prequel to the live-action films.[115]
128
+
129
+ It was projected in 2018 to be the most expensive TV show ever produced.[116] Much of it will be produced in New Zealand.[117][118][119][120] The cast includes Robert Aramayo, Owain Arthur, Nazanin Boniadi, Tom Budge, Morfydd Clark (as Galadriel),[121] Ismael Cruz Córdova, Ema Horvath, Markella Kavenagh, Joseph Mawle, Tyroe Muhafidin, Sophia Nomvete, Megan Richards, Dylan Smith, Charlie Vickers, Daniel Weyman,[122] and Maxim Baldry.[123]
130
+
131
+ In 1990, Recorded Books published an audio version of The Lord of the Rings,[124] read by the British actor Rob Inglis, who had previously starred in his own one-man stage productions of The Hobbit and The Lord of the Rings. A large-scale musical theatre adaptation, The Lord of the Rings, was first staged in Toronto, Ontario, Canada, in 2006 and opened in London in June 2007.
132
+
133
+ The enormous popularity of Tolkien's work expanded the demand for fantasy fiction. Largely thanks to The Lord of the Rings, the genre flowered throughout the 1960s, and enjoys popularity to the present day. The opus has spawned many imitators, such as The Sword of Shannara, which Lin Carter called "the single most cold-blooded, complete rip-off of another book that I have ever read".[125]
134
+ Dungeons & Dragons, which popularized the role-playing game (RPG) genre in the 1970s, features many races found in The Lord of the Rings, most notably halflings (another term for hobbits), elves, dwarves, half-elves, orcs, and dragons. However, Gary Gygax, lead designer of the game, maintained that he was influenced very little by The Lord of the Rings, stating that he included these elements as a marketing move to draw on the popularity the work enjoyed at the time he was developing the game.[126]
135
+
136
+ Because D&D has gone on to influence many popular role-playing video games, the influence of The Lord of the Rings extends to many of them as well, with titles such as Dragon Quest,[127][128] the Ultima series, EverQuest, the Warcraft series, and the Elder Scrolls series of games[129] as well as video games set in Middle-earth itself.
137
+
138
+ Research also suggests that some consumers of fantasy games derive their motivation from trying to create an epic fantasy narrative which is influenced by The Lord of the Rings.[130]
139
+
140
+ In 1965, songwriter Donald Swann, who was best known for his collaboration with Michael Flanders as Flanders & Swann, set six poems from The Lord of the Rings and one from The Adventures of Tom Bombadil ("Errantry") to music. When Swann met with Tolkien to play the songs for his approval, Tolkien suggested for "Namárië" (Galadriel's lament) a setting reminiscent of plain chant, which Swann accepted.[131] The songs were published in 1967 as The Road Goes Ever On: A Song Cycle,[132] and a recording of the songs performed by singer William Elvin with Swann on piano was issued that same year by Caedmon Records as Poems and Songs of Middle Earth.[133]
141
+
142
+ Rock bands of the 1970s were musically and lyrically inspired by the fantasy-embracing counter-culture of the time. The British rock band Led Zeppelin recorded several songs that contain explicit references to The Lord of the Rings, such as mentioning Gollum in "Ramble On", the Misty Mountains in "Misty Mountain Hop", and Ringwraiths in "The Battle of Evermore". In 1970, the Swedish musician Bo Hansson released an instrumental concept album based on the book titled Sagan om ringen (translated as "The Saga of the Ring", which was the title of the Swedish translation of The Lord of the Rings at the time).[134] The album was subsequently released internationally as Music Inspired by Lord of the Rings in 1972.[134]
143
+
144
+ The songs "Rivendell" and "The Necromancer" by the progressive rock band Rush were inspired by Tolkien. Styx also paid homage to Tolkien on their album Pieces of Eight with the song "Lords of the Ring", while Black Sabbath's song, "The Wizard", which appeared on their debut album, was influenced by Tolkien's hero, Gandalf. Progressive rock group Camel paid homage to the text in their lengthy composition "Nimrodel/The Procession/The White Rider", and progressive rock band Barclay James Harvest was inspired by the character Galadriel to write a song by that name, and used "Bombadil", the name of another character, as a pseudonym under which their 1972 single "Breathless"/"When the City Sleeps" was released; there are other references scattered through the BJH oeuvre.
145
+
146
+ Later, from the 1980s to the present day, many heavy metal acts have been influenced by Tolkien. Blind Guardian has written many songs relating to Middle-earth, including the full concept album Nightfall in Middle Earth. Almost the entire discography of Battlelore is Tolkien-themed. Summoning's music is based upon Tolkien, and the band holds the distinction of being the only artist to have crafted a song entirely in the Black Speech of Mordor. Gorgoroth, Cirith Ungol and Amon Amarth take their names from places in Mordor, and Burzum takes its name from the Black Speech of Mordor. The Finnish metal band Nightwish and the Norwegian metal band Tristania have also incorporated many Tolkien references into their music. American heavy metal band Megadeth released two songs titled "This Day We Fight!" and "How the Story Ends", which were both inspired by The Lord of the Rings.[135] German folk metal band Eichenschild is named for Thorin Oakenshield, a character in The Hobbit, and naturally has a number of Tolkien-themed songs. They are not to be confused with the '70s folk rock band Thorin Eichenschild.
147
+
148
+ In 1988, Dutch composer and trombonist Johan de Meij completed his Symphony No. 1 "The Lord of the Rings", which encompasses five movements, titled "Gandalf", "Lothlórien", "Gollum", "Journey in the Dark", and "Hobbits". In 1989 the symphony received the Sudler Composition Award, given biennially for the best wind band composition. The Danish Tolkien Ensemble have released a number of albums that feature the complete poems and songs of The Lord of the Rings set to music, with some featuring recitation by Christopher Lee. In 2018, de Meij completed his Symphony No. 5 "Return to Middle Earth", which has six movements, titled "Mîri na Fëanor (Fëanor's Jewels)", "Tinúviel (Nightingale)", "Ancalagon i-môr (Ancalagon, The Black)", "Arwen Undómiel (Evenstar)", "Dagor Delothrin (The War of Wrath)", and "Thuringwethil (Woman of Secret Shadow)".
149
+
150
+ Enya wrote an instrumental piece called "Lothlórien" in 1991, and composed two songs for the film The Lord of the Rings: The Fellowship of the Ring—"May It Be" (sung in English and Quenya) and "Aníron" (sung in Sindarin).
151
+
152
+ The 2020 modern classical album "Music for Piano and Strings" by pianist and composer Holger Skepeneit contains two Lord of the Rings-inspired pieces, "Laced with Ithildin" and "Nimrodel's Voice".
153
+
154
+ The Lord of the Rings has had a profound and wide-ranging impact on popular culture, beginning with its publication in the 1950s, but especially throughout the 1960s and 1970s, during which time young people embraced it as a countercultural saga.[136] "Frodo Lives!" and "Gandalf for President" were two phrases popular amongst United States Tolkien fans during this time.[137]
155
+
156
+ Parodies like the Harvard Lampoon's Bored of the Rings, the VeggieTales episode "Lord of the Beans", the South Park episode "The Return of the Fellowship of the Ring to the Two Towers", the Futurama film Bender's Game, The Adventures of Jimmy Neutron: Boy Genius episode "Lights! Camera! Danger!", The Big Bang Theory episode "The Precious Fragmentation", and the American Dad! episode "The Return of the Bling" are testimony to the work's continual presence in popular culture.
157
+
158
+ In 1969, Tolkien sold the merchandising rights to The Lord of the Rings (and The Hobbit) to United Artists under an agreement stipulating a lump sum payment of £10,000[138] plus a 7.5% royalty after costs,[139] payable to Allen & Unwin and the author.[140] In 1976, three years after the author's death, United Artists sold the rights to the Saul Zaentz Company, which now trades as Tolkien Enterprises. Since then all "authorized" merchandise has been signed off by Tolkien Enterprises, although the intellectual property rights to the specific likenesses of characters and other imagery from various adaptations are generally held by the adaptors.[141]
159
+
160
+ Outside any commercial exploitation from adaptations, from the late 1960s onwards there has been an increasing variety of original licensed merchandise, from posters and calendars created by illustrators such as Barbara Remington,[142] Pauline Baynes and the Brothers Hildebrandt, to figurines and miniatures, to computer, video, tabletop and role-playing games. Recent examples include the Spiel des Jahres award-winning (for "best use of literature in a game") board game The Lord of the Rings by Reiner Knizia and the Golden Joystick award-winning massively multiplayer online role-playing game The Lord of the Rings Online: Shadows of Angmar by Turbine, Inc.
161
+
162
+ The Lord of the Rings has been mentioned in numerous songs, including "The Ballad of Bilbo Baggins" by Leonard Nimoy and Led Zeppelin's "Misty Mountain Hop", "Over the Hills and Far Away", "Ramble On", and "The Battle of Evermore". Genesis' song "Stagnation" (from Trespass, 1970) was about Gollum; Rush included the song "Rivendell" on their second studio album Fly by Night; and Argent included the song "Lothlorien" on the 1971 album Ring of Hands.
163
+
164
+ Steve Peregrin Took (born Stephen Ross Porter) of British rock band T. Rex took his name from the hobbit Peregrin Took (better known as Pippin). Took later recorded under the pseudonym 'Shagrat the Vagrant', before forming a band called Shagrat in 1970.
165
+
166
+ On 5 November 2019, BBC News included The Lord of the Rings on its list of the 100 most influential novels.[143]
en/5334.html.txt ADDED
@@ -0,0 +1,82 @@
1
+
2
+
3
+ The Seine (/seɪn/ SAYN, /sɛn/ SEN;[1] French: [sɛn] (listen)) is a 777-kilometre-long (483 mi) river and an important commercial waterway within the Paris Basin in the north of France. It rises at Source-Seine, on the Langres plateau 30 kilometres (19 mi) northwest of Dijon in northeastern France, flowing through Paris and into the English Channel at Le Havre (and Honfleur on the left bank).[2] It is navigable by ocean-going vessels as far as Rouen, 120 kilometres (75 mi) from the sea. Over 60 percent of its length, as far as Burgundy, is negotiable by commercial riverboats, and nearly its whole length is available for recreational boating; excursion boats offer sightseeing tours of the river banks in Paris, lined with top monuments including the cathedral of Notre-Dame, the Eiffel Tower, the Louvre Museum and the Musée d'Orsay.[3]
4
+
5
+ There are 37 bridges within Paris and dozens more spanning the river outside the city. Examples in Paris include the Pont Alexandre III and Pont Neuf, the latter of which dates back to 1607. Outside the city, examples include the Pont de Normandie, one of the longest cable-stayed bridges in the world, which links Le Havre to Honfleur.
6
+
7
+ The Seine rises in the commune of Source-Seine, about 30 kilometres (19 mi) northwest of Dijon.
8
+ The source has been owned by the city of Paris since 1864. A number of closely associated small ditches or depressions provide the source waters, with an artificial grotto laid out to highlight and contain a deemed main source. The grotto includes a statue of a nymph, a dog, and a dragon. On the same site are the buried remains of a Gallo-Roman temple. Small statues of the dea Sequana "Seine goddess" and other ex-votos found at the same place are now exhibited in the Dijon archaeological museum.
9
+
10
+ The Seine can be artificially divided into five parts:
11
+
12
+ The Seine is dredged and ocean-going vessels can dock at Rouen, 120 kilometres (75 mi) from the sea. Commercial craft (barges and push-tows) can use the river from Marcilly-sur-Seine, 516 kilometres (321 mi) to its mouth.[4]
13
+
14
+ At Paris, there are 37 bridges. The river is only 24 metres (79 ft) above sea level 446 kilometres (277 mi) from its mouth, making it slow flowing and thus easily navigable.
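+
+ A quick back-of-the-envelope calculation (a minimal sketch, using only the 24 m and 446 km figures quoted above as rough averages) makes the gentleness of that slope concrete:
+
+     # Average gradient of the Seine over its lower course,
+     # from the figures in the paragraph above.
+     drop_m = 24.0        # height above sea level, in metres
+     distance_km = 446.0  # distance from the mouth, in kilometres
+
+     gradient_cm_per_km = drop_m * 100 / distance_km
+     print(f"about {gradient_cm_per_km:.1f} cm of fall per km")  # ~5.4 cm/km, roughly 0.005%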
15
+
16
+ The Seine Maritime, 123 kilometres (76 mi) from the English Channel at Le Havre to Rouen, is the only portion of the Seine used by ocean-going craft.[5] The tidal section of the Seine Maritime is followed by a canalized section (Basse Seine) with four large multiple locks until the mouth of the Oise at Conflans-Sainte-Honorine (170 km [110 mi]). Smaller locks at Bougival and at Suresnes lift the vessels to the level of the river in Paris, where the junction with the Canal Saint-Martin is located. The distance from the mouth of the Oise is 72 km (45 mi).[6]
17
+
18
+ The Haute Seine, from Paris to Montereau-Fault-Yonne, is 98 km (61 mi) long and has 8 locks.[7] At Charenton-le-Pont is the mouth of the Marne. Upstream from Paris seven locks ensure navigation to Saint Mammès, where the Loing mouth is situated. Through an eighth lock the river Yonne is reached at Montereau-Fault-Yonne. From the mouth of the Yonne, larger ships can continue upstream to Nogent-sur-Seine (48 km [30 mi], 7 locks).[8] From there on, the river is navigable only by small craft to Marcilly-sur-Seine (19 km [12 mi], 4 locks).[9] At Marcilly-sur-Seine the ancient Canal de la Haute-Seine used to allow vessels to continue all the way to Troyes. This canal has been abandoned since 1957.[10]
19
+
20
+ The average depth of the Seine today at Paris is about 9.5 metres (31 ft). Until locks were installed to raise the level in the 1800s, the river was much shallower within the city most of the time, and consisted of a small channel of continuous flow bordered by sandy banks (depicted in many illustrations of the period). Today the depth is tightly controlled and the entire width of the river between the built-up banks on either side is normally filled with water. The average flow of the river is very low, only a few cubic metres per second, but much higher flows are possible during periods of heavy runoff.
21
+
22
+ Four large storage reservoirs have been built since 1950 on the Seine as well as its tributaries Yonne, Marne, and Aube. These help in maintaining a constant level for the river through the city, but cannot prevent significant increases in river level during periods of extreme runoff. The dams are Lac d’Orient, Lac des Settons, Lake Der-Chantecoq, and Auzon-Temple and Amance, respectively.[11]
23
+
24
+ A very severe period of high water in January 1910 resulted in extensive flooding throughout the city. The Seine again rose to threatening levels in 1924, 1955, 1982, 1999–2000, June 2016, and January 2018.[12][13] After a first-level flood alert in 2003, about 100,000 works of art were moved out of Paris, the largest relocation of art since World War II. Much of the art in Paris is kept in underground storage rooms that would have been flooded.[14] A 2002 report by the French government stated the worst-case Seine flood scenario would cost 10 billion euros and cut telephone service for a million Parisians, leaving 200,000 without electricity and 100,000 without gas.[15]
25
+
26
+ In January 2018 the Seine again flooded, reaching a flood level of 5.84 metres (19 ft 2 in) on 29 January.[16] An official warning was issued on 24 January that heavy rainfall was likely to cause the river to flood.[17] By 27 January, the river was rising.[18] The Deputy Mayor of Paris, Colombe Brossel, warned that the heavy rain was caused by climate change, and that "We have to understand that climatic change is not a word, it's a reality."[19]
27
+
28
+ The basin area, including a part of Belgium, is 78,910 square kilometres (30,470 sq mi),[20] 2 percent of which is forest and 78 percent cultivated land. In addition to Paris, three other cities with a population over 100,000 are in the Seine watershed: Le Havre at the estuary, Rouen in the Seine valley and Reims at the northern limit—with an annual urban growth rate of 0.2 percent.[20] The population density is 201 per square kilometre.
29
+
30
+ Periodically the sewerage systems of Paris experience a failure known as sanitary sewer overflow, often in periods of high rainfall. Under these conditions untreated sewage is discharged into the Seine.[21] The resulting oxygen deficit is principally caused by allochthonous bacteria larger than one micrometre in size. The specific activity of these sewage bacteria is typically three to four times greater than that of the autochthonous (background) bacterial population. Heavy metal concentrations in the Seine are relatively high.[22] The pH level of the Seine at Pont Neuf has been measured to be 8.46. Despite this, the water quality has improved significantly over what several historians at various times in the past called an "open sewer".[23]
31
+
32
+ In 2009, it was announced that Atlantic salmon had returned to the Seine.[24]
33
+
34
+ The name Seine comes from the Latin Sēquana, sometimes associated with the Gallo-Roman goddess of the river. The word seems to derive from the same root as Latin sequor (I follow) and English sequence, namely Proto-Indo-European *seikw-, signifying 'to flow' or 'to pour forth'.[25]
35
+
36
+ On 28 or 29 March 845, an army of Vikings led by a chieftain named Reginherus, which is possibly another name for Ragnar Lothbrok, sailed up the River Seine with siege towers and sacked Paris.
37
+
38
+ On 25 November 885, another Viking expedition led by Rollo was sent up the River Seine to attack Paris again.
39
+
40
+ In March 1314, King Philip IV of France had Jacques de Molay, last Grand Master of the Knights Templar, burned on a scaffold on an island in the River Seine in front of Notre Dame de Paris.[26]
41
+
42
+ After the burning at the stake of Joan of Arc in 1431, her ashes were thrown into the Seine from the medieval stone Mathilde Bridge at Rouen, though dubious counter-claims persist.[27]
43
+
44
+ According to his will, Napoleon, who died in 1821, wished to be buried on the banks of the Seine. His request was not granted.
45
+
46
+ At the 1900 Summer Olympics, the river hosted the rowing, swimming, and water polo events.[28] Twenty-four years later, it hosted the rowing events again at Bassin d'Argenteuil, along the Seine north of Paris.[29]
47
+
48
+ Until the 1930s, a towing system using a chain on the bed of the river existed to facilitate the movement of barges upriver, as described in World Canals by Charles Hadfield (David and Charles, 1986).
49
+
50
+ The Seine was one of the original objectives of Operation Overlord in 1944. The Allies' intention was to reach the Seine by 90 days after D-Day. That objective was met. An anticipated assault crossing of the river never materialized as German resistance in France crumbled by early September 1944. However, the First Canadian Army did encounter resistance immediately west of the Seine and fighting occurred in the Forêt de la Londe as Allied troops attempted to cut off the escape across the river of parts of the German 7th Army in the closing phases of the Battle of Normandy.
51
+
52
+ Some of the Algerian victims of the Paris massacre of 1961 drowned in the Seine after being thrown by French policemen from the Pont Saint-Michel and other locations in Paris.
53
+
54
+ Dredging in the 1960s mostly eliminated tidal bores on the lower river, known in French as "le mascaret."
55
+
56
+ In 1991 UNESCO added the banks of the Seine in Paris—the Rive Gauche and Rive Droite—to its list of World Heritage Sites in Europe.[30]
57
+
58
+ Since 2002 Paris-Plages has been held every summer on the Paris banks of the Seine: a transformation of the paved banks into a beach with sand and facilities for sunbathing and entertainment.
59
+
60
+ In 2007, 55 bodies were retrieved from its waters; in February 2008, the body of supermodel-turned-activist Katoucha Niane was found there.[31]
61
+
62
+ The Seine is the river in which Javert, the primary antagonist of Victor Hugo's 1862 novel Les Misérables, drowns himself.
63
+
64
+ In Ludwig Bemelmans' 1953 children's book "Madeline's Rescue" and the 1998 live-action adaptation of Madeline, Madeline accidentally falls into the Seine after standing on the ledge of a bridge. The notable difference between the two is that in the book Madeline fell in after playing on the ledge, whereas in the film she fell in while trying to justify her actions towards Pepito, which had got all the girls in trouble.
65
+
66
+ In the 2016 film La La Land, Mia, the female protagonist, sings in her final audition, "Audition (The Fools Who Dream)", about her aunt, who jumped into the Seine without looking, likening her to all the dreamers of the world who keep on dreaming. The song was nominated for Best Original Song at the 89th Academy Awards.
67
+
68
+ During the 19th and the 20th centuries in particular the Seine inspired many artists, including:
69
+
70
+ A song 'La Seine' by Flavien Monod and Guy Lafarge was written in 1948.
71
+
72
+ Josephine Baker recorded a song, 'La Seine'.[32]
73
+
74
+ The song 'La Seine' by Vanessa Paradis featuring Matthieu Chedid was originally written as a soundtrack for the movie 'A Monster in Paris'.
75
+
76
+ Georges Seurat's Sunday Afternoon on the Island of La Grande Jatte (1884–1886) is set on an island in the Seine.
77
+
78
+ Carl Fredrik Hill, Seine-Landschaft bei Bois-Le-Roi (Seine Landscape in Bois-Le-Roi) (1877).
79
+
80
+ Alfred Sisley, The Terrace at Saint-Germain, Spring (1875) in the Walters Art Museum gives a panoramic view of the Seine river valley.
81
+
82
+ "Washhouses on Seine" (1937) by Andrus Johani
en/5336.html.txt ADDED
@@ -0,0 +1,137 @@
1
+
2
+
3
+
4
+
5
+ An earthquake (also known as a quake, tremor or temblor) is the shaking of the surface of the Earth resulting from a sudden release of energy in the Earth's lithosphere that creates seismic waves. Earthquakes can range in size from those that are so weak that they cannot be felt to those violent enough to propel objects and people into the air, and wreak destruction across entire cities. The seismicity, or seismic activity, of an area is the frequency, type, and size of earthquakes experienced over a period of time. The word tremor is also used for non-earthquake seismic rumbling.
6
+
7
+ At the Earth's surface, earthquakes manifest themselves by shaking and displacing or disrupting the ground. When the epicenter of a large earthquake is located offshore, the seabed may be displaced sufficiently to cause a tsunami. Earthquakes can also trigger landslides and occasionally, volcanic activity.
8
+
9
+ In its most general sense, the word earthquake is used to describe any seismic event—whether natural or caused by humans—that generates seismic waves. Earthquakes are caused mostly by rupture of geological faults but also by other events such as volcanic activity, landslides, mine blasts, and nuclear tests. An earthquake's point of initial rupture is called its hypocenter or focus. The epicenter is the point at ground level directly above the hypocenter.
10
+
11
+ Tectonic earthquakes occur anywhere in the earth where there is sufficient stored elastic strain energy to drive fracture propagation along a fault plane. The sides of a fault move past each other smoothly and aseismically only if there are no irregularities or asperities along the fault surface that increase the frictional resistance. Most fault surfaces do have such asperities, which leads to a form of stick-slip behavior. Once the fault has locked, continued relative motion between the plates leads to increasing stress and therefore, stored strain energy in the volume around the fault surface. This continues until the stress has risen sufficiently to break through the asperity, suddenly allowing sliding over the locked portion of the fault, releasing the stored energy.[1] This energy is released as a combination of radiated elastic strain seismic waves,[2] frictional heating of the fault surface, and cracking of the rock, thus causing an earthquake. This process of gradual build-up of strain and stress punctuated by occasional sudden earthquake failure is referred to as the elastic-rebound theory. It is estimated that only 10 percent or less of an earthquake's total energy is radiated as seismic energy. Most of the earthquake's energy is used to power the earthquake fracture growth or is converted into heat generated by friction. Therefore, earthquakes lower the Earth's available elastic potential energy and raise its temperature, though these changes are negligible compared to the conductive and convective flow of heat out from the Earth's deep interior.[3]
12
+
13
+ There are three main types of fault, all of which may cause an interplate earthquake: normal, reverse (thrust), and strike-slip. Normal and reverse faulting are examples of dip-slip, where the displacement along the fault is in the direction of dip and thus involves a vertical component. Normal faults occur mainly in areas where the crust is being extended, such as at a divergent boundary. Reverse faults occur in areas where the crust is being shortened, such as at a convergent boundary. Strike-slip faults are steep structures where the two sides of the fault slip horizontally past each other; transform boundaries are a particular type of strike-slip fault. Many earthquakes are caused by movement on faults that have components of both dip-slip and strike-slip; this is known as oblique slip.
14
+
15
+ Reverse faults, particularly those along convergent plate boundaries, are associated with the most powerful earthquakes, megathrust earthquakes, including almost all of those of magnitude 8 or more. Strike-slip faults, particularly continental transforms, can produce major earthquakes up to about magnitude 8. Earthquakes associated with normal faults are generally less than magnitude 7. For every unit increase in magnitude, there is a roughly 32-fold increase in the energy released. For instance, an earthquake of magnitude 6.0 releases approximately 32 times more energy than a magnitude 5.0 earthquake, and a magnitude 7.0 earthquake releases 1,000 times more energy than a magnitude 5.0 earthquake. A magnitude 8.6 earthquake releases the same amount of energy as 10,000 atomic bombs like those used in World War II.[4]
16
+
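The 32-fold step per magnitude unit follows from the standard Gutenberg–Richter energy–magnitude relation, log10 E ≈ 1.5 M + 4.8 with E in joules. A minimal sketch of the ratios, assuming that relation:

```python
def energy_joules(magnitude: float) -> float:
    # Gutenberg-Richter energy-magnitude relation: log10(E) = 1.5*M + 4.8 (E in joules)
    return 10 ** (1.5 * magnitude + 4.8)

# Each whole-magnitude step multiplies radiated energy by 10**1.5, about 31.6
print(energy_joules(6.0) / energy_joules(5.0))  # ~31.6 ("about 32 times")
print(energy_joules(7.0) / energy_joules(5.0))  # ~1000
```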
17
+ This is so because the energy released in an earthquake, and thus its magnitude, is proportional to the area of the fault that ruptures[5] and the stress drop. Therefore, the longer the length and the wider the width of the faulted area, the larger the resulting magnitude. The topmost, brittle part of the Earth's crust, and the cool slabs of the tectonic plates that are descending into the hot mantle, are the only parts of our planet that can store elastic energy and release it in fault ruptures. Rocks hotter than about 300 °C (572 °F) flow in response to stress; they do not rupture in earthquakes.[6][7] The maximum observed lengths of ruptures and mapped faults (which may break in a single rupture) are approximately 1,000 km (620 mi). Examples are the earthquakes in Alaska (1957), Chile (1960), and Sumatra (2004), all in subduction zones. The longest earthquake ruptures on strike-slip faults, like the San Andreas Fault (1857, 1906), the North Anatolian Fault in Turkey (1939), and the Denali Fault in Alaska (2002), are about half to one third as long as the lengths along subducting plate margins, and those along normal faults are even shorter.
18
+
19
+ The most important parameter controlling the maximum earthquake magnitude on a fault, however, is not the maximum available length, but the available width because the latter varies by a factor of 20. Along converging plate margins, the dip angle of the rupture plane is very shallow, typically about 10 degrees.[8] Thus, the width of the plane within the top brittle crust of the Earth can become 50–100 km (31–62 mi) (Japan, 2011; Alaska, 1964), making the most powerful earthquakes possible.
20
+
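A rough illustration of why a shallow dip widens the rupture plane: the down-dip width available within a brittle layer of thickness t is t / sin(dip). The 10 km thickness used below is an assumed round number for illustration, not a measured value:

```python
import math

def rupture_width_km(brittle_thickness_km: float, dip_degrees: float) -> float:
    # Down-dip width of a rupture plane confined to the brittle layer:
    # width = thickness / sin(dip)
    return brittle_thickness_km / math.sin(math.radians(dip_degrees))

print(rupture_width_km(10.0, 10.0))  # ~58 km for a shallow 10-degree megathrust
print(rupture_width_km(10.0, 90.0))  # 10 km for a vertical strike-slip fault
```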
21
+ Strike-slip faults tend to be oriented nearly vertically, resulting in an approximate width of 10 km (6.2 mi) within the brittle crust.[9] Thus, earthquakes with magnitudes much larger than 8 are not possible. Maximum magnitudes along many normal faults are even more limited because many of them are located along spreading centers, as in Iceland, where the thickness of the brittle layer is only about six kilometres (3.7 mi).[10][11]
22
+
23
+ In addition, there exists a hierarchy of stress levels in the three fault types. Thrust faults are generated by the highest, strike-slip by intermediate, and normal faults by the lowest stress levels.[12] This can easily be understood by considering the direction of the greatest principal stress, the direction of the force that "pushes" the rock mass during the faulting. In the case of normal faults, the rock mass is pushed down in a vertical direction, thus the pushing force (greatest principal stress) equals the weight of the rock mass itself. In the case of thrusting, the rock mass "escapes" in the direction of the least principal stress, namely upward, and thus the overburden equals the least principal stress. Strike-slip faulting is intermediate between the other two types described above. This difference in stress regime in the three faulting environments can contribute to differences in stress drop during faulting, which contributes to differences in the radiated energy, regardless of fault dimensions.
24
+
25
+ Where plate boundaries occur within the continental lithosphere, deformation is spread out over a much larger area than the plate boundary itself. In the case of the San Andreas fault continental transform, many earthquakes occur away from the plate boundary and are related to strains developed within the broader zone of deformation caused by major irregularities in the fault trace (e.g., the "Big bend" region). The Northridge earthquake was associated with movement on a blind thrust within such a zone. Another example is the strongly oblique convergent plate boundary between the Arabian and Eurasian plates where it runs through the northwestern part of the Zagros Mountains. The deformation associated with this plate boundary is partitioned into nearly pure thrust sense movements perpendicular to the boundary over a wide zone to the southwest and nearly pure strike-slip motion along the Main Recent Fault close to the actual plate boundary itself. This is demonstrated by earthquake focal mechanisms.[13]
26
+
27
+ All tectonic plates have internal stress fields caused by their interactions with neighboring plates and sedimentary loading or unloading (e.g., deglaciation).[14] These stresses may be sufficient to cause failure along existing fault planes, giving rise to intraplate earthquakes.[15]
28
+
29
+ The majority of tectonic earthquakes originate in the Ring of Fire, at depths not exceeding tens of kilometers. Earthquakes occurring at a depth of less than 70 km (43 mi) are classified as "shallow-focus" earthquakes, while those with a focal-depth between 70 and 300 km (43 and 186 mi) are commonly termed "mid-focus" or "intermediate-depth" earthquakes. In subduction zones, where older and colder oceanic crust descends beneath another tectonic plate, deep-focus earthquakes may occur at much greater depths (ranging from 300 to 700 km (190 to 430 mi)).[16] These seismically active areas of subduction are known as Wadati–Benioff zones. Deep-focus earthquakes occur at a depth where the subducted lithosphere should no longer be brittle, due to the high temperature and pressure. A possible mechanism for the generation of deep-focus earthquakes is faulting caused by olivine undergoing a phase transition into a spinel structure.[17]
30
+
31
+ Earthquakes often occur in volcanic regions, caused there both by tectonic faults and by the movement of magma in volcanoes. Such earthquakes can serve as an early warning of volcanic eruptions, as during the 1980 eruption of Mount St. Helens.[18] Earthquake swarms can serve as markers for the location of the flowing magma throughout the volcanoes. These swarms can be recorded by seismometers and tiltmeters (devices that measure ground slope) and used as sensors to predict imminent or upcoming eruptions.[19]
32
+
33
+ A tectonic earthquake begins with an initial rupture at a point on the fault surface, a process known as nucleation. The scale of the nucleation zone is uncertain, with some evidence, such as the rupture dimensions of the smallest earthquakes, suggesting that it is smaller than 100 m (330 ft), while other evidence, such as a slow component revealed by low-frequency spectra of some earthquakes, suggests that it is larger. The possibility that the nucleation involves some sort of preparation process is supported by the observation that about 40% of earthquakes are preceded by foreshocks. Once the rupture has initiated, it begins to propagate along the fault surface. The mechanics of this process are poorly understood, partly because it is difficult to recreate the high sliding velocities in a laboratory. Also, the effects of strong ground motion make it very difficult to record information close to a nucleation zone.[20]
34
+
35
+ Rupture propagation is generally modeled using a fracture mechanics approach, likening the rupture to a propagating mixed mode shear crack. The rupture velocity is a function of the fracture energy in the volume around the crack tip, increasing with decreasing fracture energy. The velocity of rupture propagation is orders of magnitude faster than the displacement velocity across the fault. Earthquake ruptures typically propagate at velocities that are in the range 70–90% of the S-wave velocity, which is independent of earthquake size. A small subset of earthquake ruptures appear to have propagated at speeds greater than the S-wave velocity. These supershear earthquakes have all been observed during large strike-slip events. The unusually wide zone of coseismic damage caused by the 2001 Kunlun earthquake has been attributed to the effects of the sonic boom developed in such earthquakes. Some earthquake ruptures travel at unusually low velocities and are referred to as slow earthquakes. A particularly dangerous form of slow earthquake is the tsunami earthquake, observed where the relatively low felt intensities, caused by the slow propagation speed of some great earthquakes, fail to alert the population of the neighboring coast, as in the 1896 Sanriku earthquake.[20]
36
+
37
+ Tides may induce some seismicity. See tidal triggering of earthquakes for details.
38
+
39
+ Most earthquakes form part of a sequence, related to each other in terms of location and time.[21] Most earthquake clusters consist of small tremors that cause little to no damage, but there is a theory that earthquakes can recur in a regular pattern.[22]
40
+
41
+ An aftershock is an earthquake that occurs after a previous earthquake, the mainshock. An aftershock is in the same region as the mainshock but always of a smaller magnitude. If an aftershock is larger than the mainshock, the aftershock is redesignated as the mainshock and the original mainshock is redesignated as a foreshock. Aftershocks are formed as the crust around the displaced fault plane adjusts to the effects of the mainshock.[21]
42
+
43
+ Earthquake swarms are sequences of earthquakes striking in a specific area within a short period of time. They are different from earthquakes followed by a series of aftershocks by the fact that no single earthquake in the sequence is obviously the main shock, so none has a notably higher magnitude than another. An example of an earthquake swarm is the 2004 activity at Yellowstone National Park.[23] In August 2012, a swarm of earthquakes shook Southern California's Imperial Valley, showing the most recorded activity in the area since the 1970s.[24]
44
+
45
+ Sometimes a series of earthquakes occur in what has been called an earthquake storm, where the earthquakes strike a fault in clusters, each triggered by the shaking or stress redistribution of the previous earthquakes. Similar to aftershocks but on adjacent segments of fault, these storms occur over the course of years, with some of the later earthquakes as damaging as the early ones. Such a pattern was observed in the sequence of about a dozen earthquakes that struck the North Anatolian Fault in Turkey in the 20th century and has been inferred for older anomalous clusters of large earthquakes in the Middle East.[25][26]
46
+
47
+ Quaking or shaking of the earth is a common phenomenon undoubtedly known to humans from earliest times. Prior to the development of strong-motion accelerometers that can measure peak ground speed and acceleration directly, the intensity of the earth-shaking was estimated on the basis of the observed effects, as categorized on various seismic intensity scales. Only in the last century has the source of such shaking been identified as ruptures in the Earth's crust, with the intensity of shaking at any locality dependent not only on the local ground conditions but also on the strength or magnitude of the rupture, and on its distance.[27]
48
+
49
+ The first scale for measuring earthquake magnitudes was developed by Charles F. Richter in 1935. Subsequent scales (see seismic magnitude scales) have retained a key feature, where each unit represents a ten-fold difference in the amplitude of the ground shaking and a 32-fold difference in energy. Subsequent scales are also adjusted to have approximately the same numeric value within the limits of the scale.[28]
50
+
51
+ Although the mass media commonly reports earthquake magnitudes as "Richter magnitude" or "Richter scale", standard practice by most seismological authorities is to express an earthquake's strength on the moment magnitude scale, which is based on the actual energy released by an earthquake.[29]
52
+
53
+ It is estimated that around 500,000 earthquakes occur each year, detectable with current instrumentation. About 100,000 of these can be felt.[30][31] Minor earthquakes occur nearly constantly around the world in places like California and Alaska in the U.S., as well as in El Salvador, Mexico, Guatemala, Chile, Peru, Indonesia, Philippines, Iran, Pakistan, the Azores in Portugal, Turkey, New Zealand, Greece, Italy, India, Nepal and Japan.[32] Larger earthquakes occur less frequently, the relationship being exponential; for example, roughly ten times as many earthquakes larger than magnitude 4 occur in a particular time period than earthquakes larger than magnitude 5.[33] In the (low seismicity) United Kingdom, for example, it has been calculated that the average recurrences are:
54
+ an earthquake of 3.7–4.6 every year, an earthquake of 4.7–5.5 every 10 years, and an earthquake of 5.6 or larger every 100 years.[34] This is an example of the Gutenberg–Richter law.
55
+
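This tenfold scaling is the Gutenberg–Richter law, N = 10^(a − bM), where N counts events of magnitude at least M and b is typically near 1. A sketch with illustrative a and b values, not fitted to any real catalog:

```python
def expected_count(magnitude: float, a: float = 5.0, b: float = 1.0) -> float:
    # Gutenberg-Richter law: N = 10**(a - b*M) events of magnitude >= M
    return 10 ** (a - b * magnitude)

# With b = 1, each unit drop in magnitude gives roughly ten times as many events
print(expected_count(4.0) / expected_count(5.0))  # 10.0
```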
56
+ The number of seismic stations has increased from about 350 in 1931 to many thousands today. As a result, many more earthquakes are reported than in the past, but this is because of the vast improvement in instrumentation, rather than an increase in the number of earthquakes. The United States Geological Survey estimates that, since 1900, there have been an average of 18 major earthquakes (magnitude 7.0–7.9) and one great earthquake (magnitude 8.0 or greater) per year, and that this average has been relatively stable.[36] In recent years, the number of major earthquakes per year has decreased, though this is probably a statistical fluctuation rather than a systematic trend.[37] More detailed statistics on the size and frequency of earthquakes are available from the United States Geological Survey (USGS).[38]
57
+ A recent increase in the number of major earthquakes has been noted, which could be explained by a cyclical pattern of periods of intense tectonic activity, interspersed with longer periods of low intensity. However, accurate recordings of earthquakes only began in the early 1900s, so it is too early to categorically state that this is the case.[39]
58
+
59
+ Most of the world's earthquakes (90%, and 81% of the largest) take place in the 40,000-kilometre-long (25,000 mi), horseshoe-shaped zone called the circum-Pacific seismic belt, known as the Pacific Ring of Fire, which for the most part bounds the Pacific Plate.[40][41] Massive earthquakes tend to occur along other plate boundaries too, such as along the Himalayan Mountains.[42]
60
+
61
+ With the rapid growth of mega-cities such as Mexico City, Tokyo and Tehran in areas of high seismic risk, some seismologists are warning that a single quake may claim the lives of up to three million people.[43]
62
+
63
+ While most earthquakes are caused by movement of the Earth's tectonic plates, human activity can also produce earthquakes. Activities both above ground and below may change the stresses and strains on the crust, including building reservoirs, extracting resources such as coal or oil, and injecting fluids underground for waste disposal or fracking.[44] Most of these earthquakes have small magnitudes. The 5.7 magnitude 2011 Oklahoma earthquake is thought to have been caused by disposing wastewater from oil production into injection wells,[45] and studies point to the state's oil industry as the cause of other earthquakes in the past century.[46] A Columbia University paper suggested that the 8.0 magnitude 2008 Sichuan earthquake was induced by loading from the Zipingpu Dam, though the link has not been conclusively proved.[47]
64
+
65
+ The instrumental scales used to describe the size of an earthquake began with the Richter magnitude scale in the 1930s. It is a relatively simple measurement of an event's amplitude, and its use has become minimal in the 21st century. Seismic waves travel through the Earth's interior and can be recorded by seismometers at great distances. The surface wave magnitude was developed in the 1950s as a means to measure remote earthquakes and to improve the accuracy for larger events. The moment magnitude scale not only measures the amplitude of the shock but also takes into account the seismic moment (total rupture area, average slip of the fault, and rigidity of the rock). The Japan Meteorological Agency seismic intensity scale, the Medvedev–Sponheuer–Karnik scale, and the Mercalli intensity scale are based on the observed effects and are related to the intensity of shaking.
66
+
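As a sketch of how those three quantities combine: the seismic moment is M0 = rigidity × rupture area × average slip, and the Hanks–Kanamori relation Mw = (2/3)(log10 M0 − 9.1), with M0 in newton-meters, converts it to a magnitude. The rupture parameters below are hypothetical, chosen only to illustrate the arithmetic:

```python
import math

def moment_magnitude(rigidity_pa: float, area_m2: float, slip_m: float) -> float:
    # Seismic moment M0 = rigidity * rupture area * average slip (N*m),
    # converted with the Hanks-Kanamori relation Mw = (2/3) * (log10(M0) - 9.1)
    m0 = rigidity_pa * area_m2 * slip_m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# Hypothetical rupture: 30 GPa rigidity, 50 km x 20 km plane, 2 m average slip
print(moment_magnitude(3.0e10, 50e3 * 20e3, 2.0))  # ~Mw 7.1
```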
67
+ Every tremor produces different types of seismic waves, which travel through rock with different velocities:
68
+
69
+ Propagation velocity of the seismic waves through solid rock ranges from approx. 3 km/s (1.9 mi/s) up to 13 km/s (8.1 mi/s), depending on the density and elasticity of the medium. In the Earth's interior, the compressional or P-waves travel much faster than the S-waves (a ratio of approx. 1.7:1). The differences in travel time from the epicenter to the observatory are a measure of the distance and can be used to image both sources of quakes and structures within the Earth. Also, the depth of the hypocenter can be computed roughly.
70
+
71
+ In the upper crust, P-waves travel in the range 2–3 km (1.2–1.9 mi) per second (or lower) in soils and unconsolidated sediments, increasing to 3–6 km (1.9–3.7 mi) per second in solid rock. In the lower crust, they travel at about 6–7 km (3.7–4.3 mi) per second; the velocity increases within the deep mantle to about 13 km (8.1 mi) per second. The velocity of S-waves ranges from 2–3 km (1.2–1.9 mi) per second in light sediments and 4–5 km (2.5–3.1 mi) per second in the Earth's crust up to 7 km (4.3 mi) per second in the deep mantle. As a consequence, the first waves of a distant earthquake arrive at an observatory via the Earth's mantle.
72
+
73
+ On average, the distance to the earthquake in kilometers is approximately the number of seconds between the P- and S-wave arrivals multiplied by 8.[48] Slight deviations are caused by inhomogeneities of subsurface structure. By such analyses of seismograms, the Earth's core was located in 1913 by Beno Gutenberg.
74
+
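The factor of 8 can be derived from the two wave speeds: if d/vS − d/vP equals the observed delay, then d = delay × vP·vS / (vP − vS), which comes to roughly 8 km per second of delay for typical crustal values. A sketch, assuming vP ≈ 6.0 km/s and vS ≈ 3.5 km/s:

```python
def distance_km_from_sp_delay(delay_s: float, vp: float = 6.0, vs: float = 3.5) -> float:
    # From d/vs - d/vp = delay:  d = delay * vp * vs / (vp - vs)
    return delay_s * vp * vs / (vp - vs)

print(distance_km_from_sp_delay(1.0))   # ~8.4 km per second of S-P delay
print(distance_km_from_sp_delay(10.0))  # ~84 km for a 10-second delay
```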
75
+ Compared to P-waves, S-waves and the later-arriving surface waves do most of the damage. P-waves squeeze and expand material in the same direction they are traveling, whereas S-waves shake the ground up and down and back and forth.[49]
76
+
77
+ Earthquakes are not only categorized by their magnitude but also by the place where they occur. The world is divided into 754 Flinn–Engdahl regions (F-E regions), which are based on political and geographical boundaries as well as seismic activity. More active zones are divided into smaller F-E regions whereas less active zones belong to larger F-E regions.
78
+
79
+ Standard reporting of earthquakes includes the magnitude, date and time of occurrence, geographic coordinates of the epicenter, depth of the epicenter, geographical region, distances to population centers, location uncertainty, a number of parameters that are included in USGS earthquake reports (number of stations reporting, number of observations, etc.), and a unique event ID.[50]
80
+
81
+ Although relatively slow seismic waves have traditionally been used to detect earthquakes, scientists realized in 2016 that gravitational measurements could provide instantaneous detection of earthquakes, and confirmed this by analyzing gravitational records associated with the 2011 Tohoku-Oki ("Fukushima") earthquake.[51][52]
82
+
83
+ The effects of earthquakes include, but are not limited to, the following:
84
+
85
+ Shaking and ground rupture are the main effects created by earthquakes, principally resulting in more or less severe damage to buildings and other rigid structures. The severity of the local effects depends on the complex combination of the earthquake magnitude, the distance from the epicenter, and the local geological and geomorphological conditions, which may amplify or reduce wave propagation.[53] The ground-shaking is measured by ground acceleration.
86
+
87
+ Specific local geological, geomorphological, and geostructural features can induce high levels of shaking on the ground surface even from low-intensity earthquakes. This effect is called site or local amplification. It is principally due to the transfer of the seismic motion from hard deep soils to soft superficial soils and to effects of seismic energy focalization owing to the typical geometrical setting of the deposits.
88
+
89
+ Ground rupture is a visible breaking and displacement of the Earth's surface along the trace of the fault, which may be of the order of several meters in the case of major earthquakes. Ground rupture is a major risk for large engineering structures such as dams, bridges, and nuclear power stations and requires careful mapping of existing faults to identify any that are likely to break the ground surface within the life of the structure.[54]
90
+
91
+ Soil liquefaction occurs when, because of the shaking, water-saturated granular material (such as sand) temporarily loses its strength and transforms from a solid to a liquid. Soil liquefaction may cause rigid structures, like buildings and bridges, to tilt or sink into the liquefied deposits. For example, in the 1964 Alaska earthquake, soil liquefaction caused many buildings to sink into the ground, eventually collapsing upon themselves.[55]
92
+
93
+ An earthquake may cause injury and loss of life, road and bridge damage, general property damage, and collapse or destabilization (potentially leading to future collapse) of buildings. The aftermath may bring disease, lack of basic necessities, mental consequences for survivors such as panic attacks and depression,[56] and higher insurance premiums.
94
+
95
+ Earthquakes can produce slope instability leading to landslides, a major geological hazard. Landslide danger may persist while emergency personnel are attempting rescue.[57]
96
+
97
+ Earthquakes can cause fires by damaging electrical power or gas lines. In the event of water mains rupturing and a loss of pressure, it may also become difficult to stop the spread of a fire once it has started. For example, more deaths in the 1906 San Francisco earthquake were caused by fire than by the earthquake itself.[58]
98
+
99
+ Tsunamis are long-wavelength, long-period sea waves produced by the sudden or abrupt movement of large volumes of water—including when an earthquake occurs at sea. In the open ocean the distance between wave crests can surpass 100 kilometers (62 mi), and the wave periods can vary from five minutes to one hour. Such tsunamis travel 600–800 kilometers per hour (373–497 miles per hour), depending on water depth. Large waves produced by an earthquake or a submarine landslide can overrun nearby coastal areas in a matter of minutes. Tsunamis can also travel thousands of kilometers across open ocean and wreak destruction on far shores hours after the earthquake that generated them.[59]
100
+
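Those speeds are consistent with the shallow-water wave approximation c = √(g·d), which applies because tsunami wavelengths greatly exceed the ocean depth. A quick check, assuming an ocean depth of 4,000 m:

```python
import math

def tsunami_speed_kmh(depth_m: float, g: float = 9.81) -> float:
    # Shallow-water wave speed c = sqrt(g * depth), converted from m/s to km/h
    return math.sqrt(g * depth_m) * 3.6

print(tsunami_speed_kmh(4000.0))  # ~713 km/h, within the cited 600-800 km/h range
```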
101
+ Ordinarily, subduction earthquakes under magnitude 7.5 do not cause tsunamis, although some exceptions have been recorded. Most destructive tsunamis are caused by earthquakes of magnitude 7.5 or more.[59]
102
+
103
+ Floods may be secondary effects of earthquakes if dams are damaged. Earthquakes may also cause landslides that dam rivers; if such a landslide dam later collapses, it can cause floods.[60]
104
+
105
+ The terrain below the Sarez Lake in Tajikistan is in danger of catastrophic flooding if the landslide dam formed by the earthquake, known as the Usoi Dam, were to fail during a future earthquake. Impact projections suggest the flood could affect roughly 5 million people.[61]
106
+
107
+ One of the most devastating earthquakes in recorded history was the 1556 Shaanxi earthquake, which occurred on 23 January 1556 in Shaanxi province, China. More than 830,000 people died.[63] Most houses in the area were yaodongs—dwellings carved out of loess hillsides—and many victims were killed when these structures collapsed. The 1976 Tangshan earthquake, which killed between 240,000 and 655,000 people, was the deadliest of the 20th century.[64]
108
+
109
+ The 1960 Chilean earthquake is the largest earthquake that has been measured on a seismograph, reaching magnitude 9.5 on 22 May 1960.[30][31] Its epicenter was near Cañete, Chile. The energy released was approximately twice that of the next most powerful earthquake, the Good Friday earthquake (March 27, 1964), which was centered in Prince William Sound, Alaska.[65][66] The ten largest recorded earthquakes have all been megathrust earthquakes; however, of these ten, only the 2004 Indian Ocean earthquake is simultaneously one of the deadliest earthquakes in history.
110
+
111
+ Earthquakes that caused the greatest loss of life, while powerful, were deadly because of their proximity to either heavily populated areas or the ocean, where earthquakes often create tsunamis that can devastate communities thousands of kilometers away. Regions most at risk for great loss of life include those where earthquakes are relatively rare but powerful, and poor regions with lax, unenforced, or nonexistent seismic building codes.
112
+
113
+ Earthquake prediction is a branch of the science of seismology concerned with the specification of the time, location, and magnitude of future earthquakes within stated limits.[67] Many methods have been developed for predicting the time and place in which earthquakes will occur. Despite considerable research efforts by seismologists, scientifically reproducible predictions cannot yet be made to a specific day or month.[68]
114
+
115
+ While forecasting is usually considered to be a type of prediction, earthquake forecasting is often differentiated from earthquake prediction. Earthquake forecasting is concerned with the probabilistic assessment of general earthquake hazard, including the frequency and magnitude of damaging earthquakes in a given area over years or decades.[69] For well-understood faults the probability that a segment may rupture during the next few decades can be estimated.[70][71]
116
+
117
+ Earthquake warning systems have been developed that can provide regional notification of an earthquake in progress, but before the ground surface has begun to move, potentially allowing people within the system's range to seek shelter before the earthquake's impact is felt.
118
+
119
+ The objective of earthquake engineering is to foresee the impact of earthquakes on buildings and other structures and to design such structures to minimize the risk of damage. Existing structures can be modified by seismic retrofitting to improve their resistance to earthquakes. Earthquake insurance can provide building owners with financial protection against losses resulting from earthquakes. Emergency management strategies can be employed by a government or organization to mitigate risks and prepare for consequences.
120
+
121
+ Individuals can also take preparedness steps like securing water heaters and heavy items that could injure someone, locating shutoffs for utilities, and being educated about what to do when shaking starts. For areas near large bodies of water, earthquake preparedness encompasses the possibility of a tsunami caused by a large quake.
122
+
123
+ From the lifetime of the Greek philosopher Anaxagoras in the 5th century BCE to the 14th century CE, earthquakes were usually attributed to "air (vapors) in the cavities of the Earth."[72] Thales of Miletus (625–547 BCE) was the only documented person who believed that earthquakes were caused by tension between the earth and water.[72] Other theories existed, including the Greek philosopher Anaximenes' (585–526 BCE) belief that short episodes of dryness and wetness caused seismic activity. The Greek philosopher Democritus (460–371 BCE) blamed water in general for earthquakes.[72] Pliny the Elder called earthquakes "underground thunderstorms".[72]
124
+
125
+ In recent studies, geologists claim that global warming is one of the reasons for increased seismic activity. According to these studies, melting glaciers and rising sea levels disturb the balance of pressure on Earth's tectonic plates, thus causing an increase in the frequency and intensity of earthquakes.[73]
126
+
127
+ In Norse mythology, earthquakes were explained as the violent struggling of the god Loki. When Loki, god of mischief and strife, murdered Baldr, god of beauty and light, he was punished by being bound in a cave with a poisonous serpent placed above his head dripping venom. Loki's wife Sigyn stood by him with a bowl to catch the poison, but whenever she had to empty the bowl the poison dripped on Loki's face, forcing him to jerk his head away and thrash against his bonds, which caused the earth to tremble.[74]
128
+
129
+ In Greek mythology, Poseidon was the cause and god of earthquakes. When he was in a bad mood, he struck the ground with a trident, causing earthquakes and other calamities. He also used earthquakes to punish and inflict fear upon people as revenge.[75]
130
+
131
+ In Japanese mythology, Namazu (鯰) is a giant catfish who causes earthquakes. Namazu lives in the mud beneath the earth, and is guarded by the god Kashima who restrains the fish with a stone. When Kashima lets his guard fall, Namazu thrashes about, causing violent earthquakes.[76]
132
+
133
+ In modern popular culture, the portrayal of earthquakes is shaped by the memory of great cities laid waste, such as Kobe in 1995 or San Francisco in 1906.[77] Fictional earthquakes tend to strike suddenly and without warning.[77] For this reason, stories about earthquakes generally begin with the disaster and focus on its immediate aftermath, as in Short Walk to Daylight (1972), The Ragged Edge (1968) or Aftershock: Earthquake in New York (1999).[77] A notable example is Heinrich von Kleist's classic novella, The Earthquake in Chile, which describes the destruction of Santiago in 1647. Haruki Murakami's short fiction collection After the Quake depicts the consequences of the Kobe earthquake of 1995.
134
+
135
+ The most popular single earthquake in fiction is the hypothetical "Big One" expected of California's San Andreas Fault someday, as depicted in the novels Richter 10 (1996), Goodbye California (1977), 2012 (2009) and San Andreas (2015) among other works.[77] Jacob M. Appel's widely anthologized short story, A Comparative Seismology, features a con artist who convinces an elderly woman that an apocalyptic earthquake is imminent.[78]
136
+
137
+ Contemporary depictions of earthquakes in film are variable in the manner in which they reflect human psychological reactions to the actual trauma that can be caused to directly afflicted families and their loved ones.[79] Disaster mental health response research emphasizes the need to be aware of the different roles of loss of family and key community members, loss of home and familiar surroundings, and loss of essential supplies and services to maintain survival.[80][81] Particularly for children, the clear availability of caregiving adults who are able to protect, nourish, and clothe them in the aftermath of the earthquake, and to help them make sense of what has befallen them, has been shown to be even more important to their emotional and physical health than the simple giving of provisions.[82] As was observed after other disasters involving destruction and loss of life and their media depictions, most recently the 2010 Haiti earthquake, it is also important not to pathologize the reactions to loss and displacement or disruption of governmental administration and services, but rather to validate these reactions, to support constructive problem-solving and reflection as to how one might improve the conditions of those affected.[83]
en/5337.html.txt ADDED
@@ -0,0 +1,137 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+
4
+
5
+ An earthquake (also known as a quake, tremor or temblor) is the shaking of the surface of the Earth resulting from a sudden release of energy in the Earth's lithosphere that creates seismic waves. Earthquakes can range in size from those that are so weak that they cannot be felt to those violent enough to propel objects and people into the air, and wreak destruction across entire cities. The seismicity, or seismic activity, of an area is the frequency, type, and size of earthquakes experienced over a period of time. The word tremor is also used for non-earthquake seismic rumbling.
6
+
7
+ At the Earth's surface, earthquakes manifest themselves by shaking and displacing or disrupting the ground. When the epicenter of a large earthquake is located offshore, the seabed may be displaced sufficiently to cause a tsunami. Earthquakes can also trigger landslides and occasionally, volcanic activity.
8
+
9
+ In its most general sense, the word earthquake is used to describe any seismic event—whether natural or caused by humans—that generates seismic waves. Earthquakes are caused mostly by rupture of geological faults but also by other events such as volcanic activity, landslides, mine blasts, and nuclear tests. An earthquake's point of initial rupture is called its hypocenter or focus. The epicenter is the point at ground level directly above the hypocenter.
10
+
11
+ Tectonic earthquakes occur anywhere in the earth where there is sufficient stored elastic strain energy to drive fracture propagation along a fault plane. The sides of a fault move past each other smoothly and aseismically only if there are no irregularities or asperities along the fault surface that increase the frictional resistance. Most fault surfaces do have such asperities, which leads to a form of stick-slip behavior. Once the fault has locked, continued relative motion between the plates leads to increasing stress and therefore, stored strain energy in the volume around the fault surface. This continues until the stress has risen sufficiently to break through the asperity, suddenly allowing sliding over the locked portion of the fault, releasing the stored energy.[1] This energy is released as a combination of radiated elastic strain seismic waves,[2] frictional heating of the fault surface, and cracking of the rock, thus causing an earthquake. This process of gradual build-up of strain and stress punctuated by occasional sudden earthquake failure is referred to as the elastic-rebound theory. It is estimated that only 10 percent or less of an earthquake's total energy is radiated as seismic energy. Most of the earthquake's energy is used to power the earthquake fracture growth or is converted into heat generated by friction. Therefore, earthquakes lower the Earth's available elastic potential energy and raise its temperature, though these changes are negligible compared to the conductive and convective flow of heat out from the Earth's deep interior.[3]
12
+
13
+ There are three main types of fault, all of which may cause an interplate earthquake: normal, reverse (thrust), and strike-slip. Normal and reverse faulting are examples of dip-slip, where the displacement along the fault is in the direction of dip and where movement on them involves a vertical component. Normal faults occur mainly in areas where the crust is being extended such as a divergent boundary. Reverse faults occur in areas where the crust is being shortened such as at a convergent boundary. Strike-slip faults are steep structures where the two sides of the fault slip horizontally past each other; transform boundaries are a particular type of strike-slip fault. Many earthquakes are caused by movement on faults that have components of both dip-slip and strike-slip; this is known as oblique slip.
14
+
15
+ Reverse faults, particularly those along convergent plate boundaries, are associated with the most powerful earthquakes, megathrust earthquakes, including almost all of those of magnitude 8 or more. Strike-slip faults, particularly continental transforms, can produce major earthquakes up to about magnitude 8. Earthquakes associated with normal faults are generally less than magnitude 7. For every unit increase in magnitude, there is a roughly thirtyfold increase in the energy released. For instance, an earthquake of magnitude 6.0 releases approximately 32 times more energy than a 5.0 magnitude earthquake and a 7.0 magnitude earthquake releases 1,000 times more energy than a 5.0 magnitude of earthquake. An 8.6 magnitude earthquake releases the same amount of energy as 10,000 atomic bombs like those used in World War II.[4]
16
+
17
+ This is so because the energy released in an earthquake, and thus its magnitude, is proportional to the area of the fault that ruptures[5] and the stress drop. Therefore, the longer the length and the wider the width of the faulted area, the larger the resulting magnitude. The topmost, brittle part of the Earth's crust, and the cool slabs of the tectonic plates that are descending down into the hot mantle, are the only parts of our planet that can store elastic energy and release it in fault ruptures. Rocks hotter than about 300 °C (572 °F) flow in response to stress; they do not rupture in earthquakes.[6][7] The maximum observed lengths of ruptures and mapped faults (which may break in a single rupture) are approximately 1,000 km (620 mi). Examples are the earthquakes in Alaska (1957), Chile (1960), and Sumatra (2004), all in subduction zones. The longest earthquake ruptures on strike-slip faults, like the San Andreas Fault (1857, 1906), the North Anatolian Fault in Turkey (1939), and the Denali Fault in Alaska (2002), are about half to one third as long as the lengths along subducting plate margins, and those along normal faults are even shorter.
18
+
19
+ The most important parameter controlling the maximum earthquake magnitude on a fault, however, is not the maximum available length, but the available width because the latter varies by a factor of 20. Along converging plate margins, the dip angle of the rupture plane is very shallow, typically about 10 degrees.[8] Thus, the width of the plane within the top brittle crust of the Earth can become 50–100 km (31–62 mi) (Japan, 2011; Alaska, 1964), making the most powerful earthquakes possible.
20
+
21
+ Strike-slip faults tend to be oriented near vertically, resulting in an approximate width of 10 km (6.2 mi) within the brittle crust.[9] Thus, earthquakes with magnitudes much larger than 8 are not possible. Maximum magnitudes along many normal faults are even more limited because many of them are located along spreading centers, as in Iceland, where the thickness of the brittle layer is only about six kilometres (3.7 mi).[10][11]
22
+
23
+ In addition, there exists a hierarchy of stress level in the three fault types. Thrust faults are generated by the highest, strike-slip by intermediate, and normal faults by the lowest stress levels.[12] This can easily be understood by considering the direction of the greatest principal stress, the direction of the force that "pushes" the rock mass during the faulting. In the case of normal faults, the rock mass is pushed down in a vertical direction, thus the pushing force (greatest principal stress) equals the weight of the rock mass itself. In the case of thrusting, the rock mass "escapes" in the direction of the least principal stress, namely upward, lifting the rock mass up, and thus, the overburden equals the least principal stress. Strike-slip faulting is intermediate between the other two types described above. This difference in stress regime in the three faulting environments can contribute to differences in stress drop during faulting, which contributes to differences in the radiated energy, regardless of fault dimensions.
24
+
25
+ Where plate boundaries occur within the continental lithosphere, deformation is spread out over a much larger area than the plate boundary itself. In the case of the San Andreas fault continental transform, many earthquakes occur away from the plate boundary and are related to strains developed within the broader zone of deformation caused by major irregularities in the fault trace (e.g., the "Big bend" region). The Northridge earthquake was associated with movement on a blind thrust within such a zone. Another example is the strongly oblique convergent plate boundary between the Arabian and Eurasian plates where it runs through the northwestern part of the Zagros Mountains. The deformation associated with this plate boundary is partitioned into nearly pure thrust sense movements perpendicular to the boundary over a wide zone to the southwest and nearly pure strike-slip motion along the Main Recent Fault close to the actual plate boundary itself. This is demonstrated by earthquake focal mechanisms.[13]
26
+
27
+ All tectonic plates have internal stress fields caused by their interactions with neighboring plates and sedimentary loading or unloading (e.g., deglaciation).[14] These stresses may be sufficient to cause failure along existing fault planes, giving rise to intraplate earthquakes.[15]
28
+
29
+ The majority of tectonic earthquakes originate at the ring of fire in depths not exceeding tens of kilometers. Earthquakes occurring at a depth of less than 70 km (43 mi) are classified as "shallow-focus" earthquakes, while those with a focal-depth between 70 and 300 km (43 and 186 mi) are commonly termed "mid-focus" or "intermediate-depth" earthquakes. In subduction zones, where older and colder oceanic crust descends beneath another tectonic plate, deep-focus earthquakes may occur at much greater depths (ranging from 300 to 700 km (190 to 430 mi)).[16] These seismically active areas of subduction are known as Wadati–Benioff zones. Deep-focus earthquakes occur at a depth where the subducted lithosphere should no longer be brittle, due to the high temperature and pressure. A possible mechanism for the generation of deep-focus earthquakes is faulting caused by olivine undergoing a phase transition into a spinel structure.[17]
30
+
31
+ Earthquakes often occur in volcanic regions and are caused there, both by tectonic faults and the movement of magma in volcanoes. Such earthquakes can serve as an early warning of volcanic eruptions, as during the 1980 eruption of Mount St. Helens.[18] Earthquake swarms can serve as markers for the location of the flowing magma throughout the volcanoes. These swarms can be recorded by seismometers and tiltmeters (a device that measures ground slope) and used as sensors to predict imminent or upcoming eruptions.[19]
32
+
33
+ A tectonic earthquake begins by an initial rupture at a point on the fault surface, a process known as nucleation. The scale of the nucleation zone is uncertain, with some evidence, such as the rupture dimensions of the smallest earthquakes, suggesting that it is smaller than 100 m (330 ft) while other evidence, such as a slow component revealed by low-frequency spectra of some earthquakes, suggest that it is larger. The possibility that the nucleation involves some sort of preparation process is supported by the observation that about 40% of earthquakes are preceded by foreshocks. Once the rupture has initiated, it begins to propagate along the fault surface. The mechanics of this process are poorly understood, partly because it is difficult to recreate the high sliding velocities in a laboratory. Also the effects of strong ground motion make it very difficult to record information close to a nucleation zone.[20]
34
+
35
+ Rupture propagation is generally modeled using a fracture mechanics approach, likening the rupture to a propagating mixed mode shear crack. The rupture velocity is a function of the fracture energy in the volume around the crack tip, increasing with decreasing fracture energy. The velocity of rupture propagation is orders of magnitude faster than the displacement velocity across the fault. Earthquake ruptures typically propagate at velocities that are in the range 70–90% of the S-wave velocity, which is independent of earthquake size. A small subset of earthquake ruptures appear to have propagated at speeds greater than the S-wave velocity. These supershear earthquakes have all been observed during large strike-slip events. The unusually wide zone of coseismic damage caused by the 2001 Kunlun earthquake has been attributed to the effects of the sonic boom developed in such earthquakes. Some earthquake ruptures travel at unusually low velocities and are referred to as slow earthquakes. A particularly dangerous form of slow earthquake is the tsunami earthquake, observed where the relatively low felt intensities, caused by the slow propagation speed of some great earthquakes, fail to alert the population of the neighboring coast, as in the 1896 Sanriku earthquake.[20]
36
+
37
+ Tides may induce some seismicity. See tidal triggering of earthquakes for details.
38
+
39
+ Most earthquakes form part of a sequence, related to each other in terms of location and time.[21] Most earthquake clusters consist of small tremors that cause little to no damage, but there is a theory that earthquakes can recur in a regular pattern.[22]
40
+
41
+ An aftershock is an earthquake that occurs after a previous earthquake, the mainshock. An aftershock is in the same region of the main shock but always of a smaller magnitude. If an aftershock is larger than the main shock, the aftershock is redesignated as the main shock and the original main shock is redesignated as a foreshock. Aftershocks are formed as the crust around the displaced fault plane adjusts to the effects of the main shock.[21]
42
+
43
+ Earthquake swarms are sequences of earthquakes striking in a specific area within a short period of time. They are different from earthquakes followed by a series of aftershocks by the fact that no single earthquake in the sequence is obviously the main shock, so none has a notable higher magnitude than another. An example of an earthquake swarm is the 2004 activity at Yellowstone National Park.[23] In August 2012, a swarm of earthquakes shook Southern California's Imperial Valley, showing the most recorded activity in the area since the 1970s.[24]
44
+
45
+ Sometimes a series of earthquakes occur in what has been called an earthquake storm, where the earthquakes strike a fault in clusters, each triggered by the shaking or stress redistribution of the previous earthquakes. Similar to aftershocks but on adjacent segments of fault, these storms occur over the course of years, and with some of the later earthquakes as damaging as the early ones. Such a pattern was observed in the sequence of about a dozen earthquakes that struck the North Anatolian Fault in Turkey in the 20th century and has been inferred for older anomalous clusters of large earthquakes in the Middle East.[25][26]
46
+
47
+ Quaking or shaking of the earth is a common phenomenon undoubtedly known to humans from earliest times. Prior to the development of strong-motion accelerometers that can measure peak ground speed and acceleration directly, the intensity of the earth-shaking was estimated on the basis of the observed effects, as categorized on various seismic intensity scales. Only in the last century has the source of such shaking been identified as ruptures in the Earth's crust, with the intensity of shaking at any locality dependent not only on the local ground conditions but also on the strength or magnitude of the rupture, and on its distance.[27]
48
+
49
+ The first scale for measuring earthquake magnitudes was developed by Charles F. Richter in 1935. Subsequent scales (see seismic magnitude scales) have retained a key feature, where each unit represents a ten-fold difference in the amplitude of the ground shaking and a 32-fold difference in energy. Subsequent scales are also adjusted to have approximately the same numeric value within the limits of the scale.[28]
50
+
51
+ Although the mass media commonly reports earthquake magnitudes as "Richter magnitude" or "Richter scale", standard practice by most seismological authorities is to express an earthquake's strength on the moment magnitude scale, which is based on the actual energy released by an earthquake.[29]
52
+
53
+ It is estimated that around 500,000 earthquakes occur each year, detectable with current instrumentation. About 100,000 of these can be felt.[30][31] Minor earthquakes occur nearly constantly around the world in places like California and Alaska in the U.S., as well as in El Salvador, Mexico, Guatemala, Chile, Peru, Indonesia, Philippines, Iran, Pakistan, the Azores in Portugal, Turkey, New Zealand, Greece, Italy, India, Nepal and Japan.[32] Larger earthquakes occur less frequently, the relationship being exponential; for example, roughly ten times as many earthquakes larger than magnitude 4 occur in a particular time period than earthquakes larger than magnitude 5.[33] In the (low seismicity) United Kingdom, for example, it has been calculated that the average recurrences are:
54
+ an earthquake of 3.7–4.6 every year, an earthquake of 4.7–5.5 every 10 years, and an earthquake of 5.6 or larger every 100 years.[34] This is an example of the Gutenberg–Richter law.
55
+
56
+ The number of seismic stations has increased from about 350 in 1931 to many thousands today. As a result, many more earthquakes are reported than in the past, but this is because of the vast improvement in instrumentation, rather than an increase in the number of earthquakes. The United States Geological Survey estimates that, since 1900, there have been an average of 18 major earthquakes (magnitude 7.0–7.9) and one great earthquake (magnitude 8.0 or greater) per year, and that this average has been relatively stable.[36] In recent years, the number of major earthquakes per year has decreased, though this is probably a statistical fluctuation rather than a systematic trend.[37] More detailed statistics on the size and frequency of earthquakes is available from the United States Geological Survey (USGS).[38]
57
+ A recent increase in the number of major earthquakes has been noted, which could be explained by a cyclical pattern of periods of intense tectonic activity, interspersed with longer periods of low intensity. However, accurate recordings of earthquakes only began in the early 1900s, so it is too early to categorically state that this is the case.[39]
58
+
59
+ Most of the world's earthquakes (90%, and 81% of the largest) take place in the 40,000-kilometre-long (25,000 mi), horseshoe-shaped zone called the circum-Pacific seismic belt, known as the Pacific Ring of Fire, which for the most part bounds the Pacific Plate.[40][41] Massive earthquakes tend to occur along other plate boundaries too, such as along the Himalayan Mountains.[42]
60
+
61
+ With the rapid growth of mega-cities such as Mexico City, Tokyo and Tehran in areas of high seismic risk, some seismologists are warning that a single quake may claim the lives of up to three million people.[43]
62
+
63
+ While most earthquakes are caused by movement of the Earth's tectonic plates, human activity can also produce earthquakes. Activities both above ground and below may change the stresses and strains on the crust, including building reservoirs, extracting resources such as coal or oil, and injecting fluids underground for waste disposal or fracking.[44] Most of these earthquakes have small magnitudes. The 5.7 magnitude 2011 Oklahoma earthquake is thought to have been caused by disposing wastewater from oil production into injection wells,[45] and studies point to the state's oil industry as the cause of other earthquakes in the past century.[46] A Columbia University paper suggested that the 8.0 magnitude 2008 Sichuan earthquake was induced by loading from the Zipingpu Dam, though the link has not been conclusively proved.[47]
64
+
65
+ The instrumental scales used to describe the size of an earthquake began with the Richter magnitude scale in the 1930s. It is a relatively simple measurement of an event's amplitude, and its use has become minimal in the 21st century. Seismic waves travel through the Earth's interior and can be recorded by seismometers at great distances. The surface wave magnitude was developed in the 1950s as a means to measure remote earthquakes and to improve the accuracy for larger events. The moment magnitude scale not only measures the amplitude of the shock but also takes into account the seismic moment (total rupture area, average slip of the fault, and rigidity of the rock). The Japan Meteorological Agency seismic intensity scale, the Medvedev–Sponheuer–Karnik scale, and the Mercalli intensity scale are based on the observed effects and are related to the intensity of shaking.
66
+
67
+ Every tremor produces different types of seismic waves, which travel through rock with different velocities:
68
+
69
+ Propagation velocity of the seismic waves through solid rock ranges from approx. 3 km/s (1.9 mi/s) up to 13 km/s (8.1 mi/s), depending on the density and elasticity of the medium. In the Earth's interior, the shock- or P-waves travel much faster than the S-waves (approx. relation 1.7:1). The differences in travel time from the epicenter to the observatory are a measure of the distance and can be used to image both sources of quakes and structures within the Earth. Also, the depth of the hypocenter can be computed roughly.
70
+
71
+ In the upper crust, P-waves travel in the range 2–3 km (1.2–1.9 mi) per second (or lower) in soils and unconsolidated sediments, increasing to 3–6 km (1.9–3.7 mi) per second in solid rock. In the lower crust, they travel at about 6–7 km (3.7–4.3 mi) per second; the velocity increases within the deep mantle to about 13 km (8.1 mi) per second. The velocity of S-waves ranges from 2–3 km (1.2–1.9 mi) per second in light sediments and 4–5 km (2.5–3.1 mi) per second in the Earth's crust up to 7 km (4.3 mi) per second in the deep mantle. As a consequence, the first waves of a distant earthquake arrive at an observatory via the Earth's mantle.
72
+
73
+ On average, the kilometer distance to the earthquake is the number of seconds between the P- and S-wave times 8.[48] Slight deviations are caused by inhomogeneities of subsurface structure. By such analyses of seismograms the Earth's core was located in 1913 by Beno Gutenberg.
74
+
75
+ S-waves and later arriving surface waves do most of the damage compared to P-waves. P-waves squeeze and expand material in the same direction they are traveling, whereas S-waves shake the ground up and down and back and forth.[49]
76
+
77
+ Earthquakes are not only categorized by their magnitude but also by the place where they occur. The world is divided into 754 Flinn–Engdahl regions (F-E regions), which are based on political and geographical boundaries as well as seismic activity. More active zones are divided into smaller F-E regions whereas less active zones belong to larger F-E regions.
78
+
79
+ Standard reporting of earthquakes includes its magnitude, date and time of occurrence, geographic coordinates of its epicenter, depth of the epicenter, geographical region, distances to population centers, location uncertainty, a number of parameters that are included in USGS earthquake reports (number of stations reporting, number of observations, etc.), and a unique event ID.[50]
80
+
81
+ Although relatively slow seismic waves have traditionally been used to detect earthquakes, scientists realized in 2016 that gravitational measurements could provide instantaneous detection of earthquakes, and confirmed this by analyzing gravitational records associated with the 2011 Tohoku-Oki ("Fukushima") earthquake.[51][52]
82
+
83
+ The effects of earthquakes include, but are not limited to, the following: shaking and ground rupture, soil liquefaction, landslides, fires, tsunamis, floods, and human impacts such as casualties and property damage.
84
+
85
+ Shaking and ground rupture are the main effects created by earthquakes, principally resulting in more or less severe damage to buildings and other rigid structures. The severity of the local effects depends on the complex combination of the earthquake magnitude, the distance from the epicenter, and the local geological and geomorphological conditions, which may amplify or reduce wave propagation.[53] The ground-shaking is measured by ground acceleration.
86
+
87
+ Specific local geological, geomorphological, and geostructural features can induce high levels of shaking at the ground surface even from low-intensity earthquakes. This effect is called site or local amplification. It is principally due to the transfer of seismic motion from hard deep soils to soft superficial soils and to the focusing of seismic energy caused by the typical geometrical setting of such deposits.
88
+
89
+ Ground rupture is a visible breaking and displacement of the Earth's surface along the trace of the fault, which may be of the order of several meters in the case of major earthquakes. Ground rupture is a major risk for large engineering structures such as dams, bridges, and nuclear power stations and requires careful mapping of existing faults to identify any that are likely to break the ground surface within the life of the structure.[54]
90
+
91
+ Soil liquefaction occurs when, because of the shaking, water-saturated granular material (such as sand) temporarily loses its strength and transforms from a solid to a liquid. Soil liquefaction may cause rigid structures, like buildings and bridges, to tilt or sink into the liquefied deposits. For example, in the 1964 Alaska earthquake, soil liquefaction caused many buildings to sink into the ground, eventually collapsing upon themselves.[55]
92
+
93
+ An earthquake may cause injury and loss of life, road and bridge damage, general property damage, and collapse or destabilization (potentially leading to future collapse) of buildings. The aftermath may bring disease, a lack of basic necessities, mental consequences for survivors such as panic attacks and depression,[56] and higher insurance premiums.
94
+
95
+ Earthquakes can produce slope instability leading to landslides, a major geological hazard. Landslide danger may persist while emergency personnel are attempting rescue.[57]
96
+
97
+ Earthquakes can cause fires by damaging electrical power or gas lines. In the event of water mains rupturing and a loss of pressure, it may also become difficult to stop the spread of a fire once it has started. For example, more deaths in the 1906 San Francisco earthquake were caused by fire than by the earthquake itself.[58]
98
+
99
+ Tsunamis are long-wavelength, long-period sea waves produced by the sudden or abrupt movement of large volumes of water—including when an earthquake occurs at sea. In the open ocean the distance between wave crests can surpass 100 kilometers (62 mi), and the wave periods can vary from five minutes to one hour. Such tsunamis travel 600–800 kilometers per hour (373–497 miles per hour), depending on water depth. Large waves produced by an earthquake or a submarine landslide can overrun nearby coastal areas in a matter of minutes. Tsunamis can also travel thousands of kilometers across open ocean and wreak destruction on far shores hours after the earthquake that generated them.[59]
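The quoted speeds and their dependence on depth follow the standard shallow-water wave relation v = sqrt(g·d); this formula is not stated in the text above, so the sketch below is a physics aside, though it does reproduce the 600–800 km/h range for typical open-ocean depths:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def tsunami_speed_kmh(depth_m: float) -> float:
    """Shallow-water wave speed v = sqrt(g*d), converted from m/s to km/h."""
    return math.sqrt(G * depth_m) * 3.6

# Open-ocean depths of roughly 3-5 km give the 600-800 km/h range above.
for depth_m in (3000, 4000, 5000):
    print(f"depth {depth_m} m -> ~{tsunami_speed_kmh(depth_m):.0f} km/h")
```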
100
+
101
+ Ordinarily, subduction earthquakes under magnitude 7.5 do not cause tsunamis, although smaller events have occasionally done so. Most destructive tsunamis are caused by earthquakes of magnitude 7.5 or more.[59]
102
+
103
+ Floods may be secondary effects of earthquakes if dams are damaged. Earthquakes may also cause landslides that dam rivers; these landslide dams can subsequently collapse and cause floods.[60]
104
+
105
+ The terrain below Sarez Lake in Tajikistan is in danger of catastrophic flooding if the landslide dam that impounds the lake, known as the Usoi Dam, were to fail during a future earthquake. Impact projections suggest the flood could affect roughly 5 million people.[61]
106
+
107
+ One of the most devastating earthquakes in recorded history was the 1556 Shaanxi earthquake, which occurred on 23 January 1556 in Shaanxi province, China. More than 830,000 people died.[63] Most houses in the area were yaodongs—dwellings carved out of loess hillsides—and many victims were killed when these structures collapsed. The 1976 Tangshan earthquake, which killed between 240,000 and 655,000 people, was the deadliest of the 20th century.[64]
108
+
109
+ The 1960 Chilean earthquake is the largest earthquake that has been measured on a seismograph, reaching magnitude 9.5 on 22 May 1960.[30][31] Its epicenter was near Cañete, Chile. The energy released was approximately twice that of the next most powerful earthquake, the Good Friday earthquake (March 27, 1964), which was centered in Prince William Sound, Alaska.[65][66] The ten largest recorded earthquakes have all been megathrust earthquakes; however, of these ten, only the 2004 Indian Ocean earthquake is simultaneously one of the deadliest earthquakes in history.
110
+
111
+ Earthquakes that caused the greatest loss of life, while powerful, were deadly because of their proximity to either heavily populated areas or the ocean, where earthquakes often create tsunamis that can devastate communities thousands of kilometers away. Regions most at risk for great loss of life include those where earthquakes are relatively rare but powerful, and poor regions with lax, unenforced, or nonexistent seismic building codes.
112
+
113
+ Earthquake prediction is a branch of the science of seismology concerned with the specification of the time, location, and magnitude of future earthquakes within stated limits.[67] Many methods have been developed for predicting the time and place in which earthquakes will occur. Despite considerable research efforts by seismologists, scientifically reproducible predictions cannot yet be made to a specific day or month.[68]
114
+
115
+ While forecasting is usually considered to be a type of prediction, earthquake forecasting is often differentiated from earthquake prediction. Earthquake forecasting is concerned with the probabilistic assessment of general earthquake hazard, including the frequency and magnitude of damaging earthquakes in a given area over years or decades.[69] For well-understood faults the probability that a segment may rupture during the next few decades can be estimated.[70][71]
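A common textbook way to illustrate such a rupture probability is a memoryless Poisson model; the sketch below assumes that model and an illustrative 150-year mean recurrence interval, whereas real hazard assessments are considerably more refined:

```python
import math

def rupture_probability(mean_recurrence_years: float, window_years: float) -> float:
    """P(at least one rupture within the window) under a Poisson assumption."""
    return 1.0 - math.exp(-window_years / mean_recurrence_years)

# Assumed ~150-year mean recurrence interval for a hypothetical segment.
print(f"{rupture_probability(150, 30):.1%} chance of rupture in the next 30 years")
```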
116
+
117
+ Earthquake warning systems have been developed that can provide regional notification of an earthquake in progress, but before the ground surface has begun to move, potentially allowing people within the system's range to seek shelter before the earthquake's impact is felt.
118
+
119
+ The objective of earthquake engineering is to foresee the impact of earthquakes on buildings and other structures and to design such structures to minimize the risk of damage. Existing structures can be modified by seismic retrofitting to improve their resistance to earthquakes. Earthquake insurance can provide building owners with financial protection against losses resulting from earthquakes. Emergency management strategies can be employed by a government or organization to mitigate risks and prepare for consequences.
120
+
121
+ Individuals can also take preparedness steps like securing water heaters and heavy items that could injure someone, locating shutoffs for utilities, and being educated about what to do when shaking starts. For areas near large bodies of water, earthquake preparedness encompasses the possibility of a tsunami caused by a large quake.
122
+
123
+ From the lifetime of the Greek philosopher Anaxagoras in the 5th century BCE to the 14th century CE, earthquakes were usually attributed to "air (vapors) in the cavities of the Earth."[72] Thales of Miletus (625–547 BCE) was the only documented person who believed that earthquakes were caused by tension between the earth and water.[72] Other theories existed, including the Greek philosopher Anaximenes' (585–526 BCE) belief that brief episodes of dryness and wetness caused seismic activity. The Greek philosopher Democritus (460–371 BCE) blamed water in general for earthquakes.[72] Pliny the Elder called earthquakes "underground thunderstorms".[72]
124
+
125
+ Some recent studies by geologists suggest that global warming is one of the causes of increased seismic activity. According to these studies, melting glaciers and rising sea levels disturb the balance of pressure on Earth's tectonic plates, thus causing an increase in the frequency and intensity of earthquakes.[73]
126
+
127
+ In Norse mythology, earthquakes were explained as the violent struggling of the god Loki. When Loki, god of mischief and strife, murdered Baldr, god of beauty and light, he was punished by being bound in a cave with a poisonous serpent placed above his head dripping venom. Loki's wife Sigyn stood by him with a bowl to catch the poison, but whenever she had to empty the bowl the poison dripped on Loki's face, forcing him to jerk his head away and thrash against his bonds, which caused the earth to tremble.[74]
128
+
129
+ In Greek mythology, Poseidon was the cause and god of earthquakes. When he was in a bad mood, he struck the ground with a trident, causing earthquakes and other calamities. He also used earthquakes to punish and inflict fear upon people as revenge.[75]
130
+
131
+ In Japanese mythology, Namazu (鯰) is a giant catfish who causes earthquakes. Namazu lives in the mud beneath the earth, and is guarded by the god Kashima who restrains the fish with a stone. When Kashima lets his guard fall, Namazu thrashes about, causing violent earthquakes.[76]
132
+
133
+ In modern popular culture, the portrayal of earthquakes is shaped by the memory of great cities laid waste, such as Kobe in 1995 or San Francisco in 1906.[77] Fictional earthquakes tend to strike suddenly and without warning.[77] For this reason, stories about earthquakes generally begin with the disaster and focus on its immediate aftermath, as in Short Walk to Daylight (1972), The Ragged Edge (1968) or Aftershock: Earthquake in New York (1999).[77] A notable example is Heinrich von Kleist's classic novella, The Earthquake in Chile, which describes the destruction of Santiago in 1647. Haruki Murakami's short fiction collection After the Quake depicts the consequences of the Kobe earthquake of 1995.
134
+
135
+ The most popular single earthquake in fiction is the hypothetical "Big One" expected of California's San Andreas Fault someday, as depicted in the novels Richter 10 (1996), Goodbye California (1977), 2012 (2009) and San Andreas (2015) among other works.[77] Jacob M. Appel's widely anthologized short story, A Comparative Seismology, features a con artist who convinces an elderly woman that an apocalyptic earthquake is imminent.[78]
136
+
137
+ Contemporary depictions of earthquakes in film vary in how they reflect human psychological reactions to the actual trauma that can be caused to directly afflicted families and their loved ones.[79] Disaster mental-health response research emphasizes the need to be aware of the different roles of the loss of family and key community members, the loss of home and familiar surroundings, and the loss of essential supplies and services needed to maintain survival.[80][81] Particularly for children, the clear availability of caregiving adults who are able to protect, nourish, and clothe them in the aftermath of the earthquake, and to help them make sense of what has befallen them, has been shown to be even more important to their emotional and physical health than the simple giving of provisions.[82] As was observed after other disasters involving destruction and loss of life and their media depictions, most recently the 2010 Haiti earthquake, it is also important not to pathologize the reactions to loss, displacement, or disruption of governmental administration and services, but rather to validate these reactions and to support constructive problem-solving and reflection on how one might improve the conditions of those affected.[83]
en/5338.html.txt ADDED
@@ -0,0 +1,137 @@
1
+
2
+
3
+
4
+
5
+ An earthquake (also known as a quake, tremor or temblor) is the shaking of the surface of the Earth resulting from a sudden release of energy in the Earth's lithosphere that creates seismic waves. Earthquakes can range in size from those that are so weak that they cannot be felt to those violent enough to propel objects and people into the air, and wreak destruction across entire cities. The seismicity, or seismic activity, of an area is the frequency, type, and size of earthquakes experienced over a period of time. The word tremor is also used for non-earthquake seismic rumbling.
6
+
7
+ At the Earth's surface, earthquakes manifest themselves by shaking and displacing or disrupting the ground. When the epicenter of a large earthquake is located offshore, the seabed may be displaced sufficiently to cause a tsunami. Earthquakes can also trigger landslides and, occasionally, volcanic activity.
8
+
9
+ In its most general sense, the word earthquake is used to describe any seismic event—whether natural or caused by humans—that generates seismic waves. Earthquakes are caused mostly by rupture of geological faults but also by other events such as volcanic activity, landslides, mine blasts, and nuclear tests. An earthquake's point of initial rupture is called its hypocenter or focus. The epicenter is the point at ground level directly above the hypocenter.
10
+
11
+ Tectonic earthquakes occur anywhere in the earth where there is sufficient stored elastic strain energy to drive fracture propagation along a fault plane. The sides of a fault move past each other smoothly and aseismically only if there are no irregularities or asperities along the fault surface that increase the frictional resistance. Most fault surfaces do have such asperities, which leads to a form of stick-slip behavior. Once the fault has locked, continued relative motion between the plates leads to increasing stress and therefore stored strain energy in the volume around the fault surface. This continues until the stress has risen sufficiently to break through the asperity, suddenly allowing sliding over the locked portion of the fault, releasing the stored energy.[1] This energy is released as a combination of radiated elastic strain seismic waves,[2] frictional heating of the fault surface, and cracking of the rock, thus causing an earthquake. This process of gradual build-up of strain and stress punctuated by occasional sudden earthquake failure is referred to as the elastic-rebound theory. It is estimated that only 10 percent or less of an earthquake's total energy is radiated as seismic energy. Most of the earthquake's energy is used to power the earthquake fracture growth or is converted into heat generated by friction. Therefore, earthquakes lower the Earth's available elastic potential energy and raise its temperature, though these changes are negligible compared to the conductive and convective flow of heat out from the Earth's deep interior.[3]
12
+
13
+ There are three main types of fault, all of which may cause an interplate earthquake: normal, reverse (thrust), and strike-slip. Normal and reverse faulting are examples of dip-slip, where the displacement along the fault is in the direction of dip and where movement on them involves a vertical component. Normal faults occur mainly in areas where the crust is being extended such as a divergent boundary. Reverse faults occur in areas where the crust is being shortened such as at a convergent boundary. Strike-slip faults are steep structures where the two sides of the fault slip horizontally past each other; transform boundaries are a particular type of strike-slip fault. Many earthquakes are caused by movement on faults that have components of both dip-slip and strike-slip; this is known as oblique slip.
14
+
15
+ Reverse faults, particularly those along convergent plate boundaries, are associated with the most powerful earthquakes, megathrust earthquakes, including almost all of those of magnitude 8 or more. Strike-slip faults, particularly continental transforms, can produce major earthquakes up to about magnitude 8. Earthquakes associated with normal faults are generally less than magnitude 7. For every unit increase in magnitude, there is a roughly thirtyfold increase in the energy released. For instance, an earthquake of magnitude 6.0 releases approximately 32 times more energy than a magnitude 5.0 earthquake, and a magnitude 7.0 earthquake releases 1,000 times more energy than a magnitude 5.0 earthquake. An 8.6 magnitude earthquake releases the same amount of energy as 10,000 atomic bombs like those used in World War II.[4]
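These figures all follow from the standard magnitude-energy scaling, in which each whole unit of magnitude corresponds to a factor of 10^1.5 ≈ 32 in released energy; a quick check of the numbers above:

```python
def energy_ratio(m1: float, m2: float) -> float:
    """Ratio of released seismic energy between magnitudes m1 and m2,
    using the standard scaling of 10**1.5 (about 32x) per magnitude unit."""
    return 10 ** (1.5 * (m1 - m2))

print(energy_ratio(6.0, 5.0))  # ~31.6, the "roughly thirtyfold" above
print(energy_ratio(7.0, 5.0))  # ~1000
```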
16
+
17
+ This is so because the energy released in an earthquake, and thus its magnitude, is proportional to the area of the fault that ruptures[5] and the stress drop. Therefore, the longer the length and the wider the width of the faulted area, the larger the resulting magnitude. The topmost, brittle part of the Earth's crust, and the cool slabs of the tectonic plates that are descending into the hot mantle, are the only parts of our planet that can store elastic energy and release it in fault ruptures. Rocks hotter than about 300 °C (572 °F) flow in response to stress; they do not rupture in earthquakes.[6][7] The maximum observed lengths of ruptures and mapped faults (which may break in a single rupture) are approximately 1,000 km (620 mi). Examples are the earthquakes in Alaska (1957), Chile (1960), and Sumatra (2004), all in subduction zones. The longest earthquake ruptures on strike-slip faults, like the San Andreas Fault (1857, 1906), the North Anatolian Fault in Turkey (1939), and the Denali Fault in Alaska (2002), are about half to one third as long as the lengths along subducting plate margins, and those along normal faults are even shorter.
18
+
19
+ The most important parameter controlling the maximum earthquake magnitude on a fault, however, is not the maximum available length, but the available width because the latter varies by a factor of 20. Along converging plate margins, the dip angle of the rupture plane is very shallow, typically about 10 degrees.[8] Thus, the width of the plane within the top brittle crust of the Earth can become 50–100 km (31–62 mi) (Japan, 2011; Alaska, 1964), making the most powerful earthquakes possible.
20
+
21
+ Strike-slip faults tend to be oriented nearly vertically, resulting in an approximate width of 10 km (6.2 mi) within the brittle crust.[9] Thus, earthquakes with magnitudes much larger than 8 are not possible. Maximum magnitudes along many normal faults are even more limited because many of them are located along spreading centers, as in Iceland, where the thickness of the brittle layer is only about six kilometres (3.7 mi).[10][11]
22
+
23
+ In addition, there exists a hierarchy of stress levels among the three fault types. Thrust faults are generated by the highest, strike-slip by intermediate, and normal faults by the lowest stress levels.[12] This can easily be understood by considering the direction of the greatest principal stress, the direction of the force that "pushes" the rock mass during the faulting. In the case of normal faults, the rock mass is pushed down in a vertical direction, thus the pushing force (greatest principal stress) equals the weight of the rock mass itself. In the case of thrusting, the rock mass "escapes" in the direction of the least principal stress, namely upward, lifting the rock mass up, and thus the overburden equals the least principal stress. Strike-slip faulting is intermediate between the other two types described above. This difference in stress regime in the three faulting environments can contribute to differences in stress drop during faulting, which contributes to differences in the radiated energy, regardless of fault dimensions.
24
+
25
+ Where plate boundaries occur within the continental lithosphere, deformation is spread out over a much larger area than the plate boundary itself. In the case of the San Andreas fault continental transform, many earthquakes occur away from the plate boundary and are related to strains developed within the broader zone of deformation caused by major irregularities in the fault trace (e.g., the "Big bend" region). The Northridge earthquake was associated with movement on a blind thrust within such a zone. Another example is the strongly oblique convergent plate boundary between the Arabian and Eurasian plates where it runs through the northwestern part of the Zagros Mountains. The deformation associated with this plate boundary is partitioned into nearly pure thrust sense movements perpendicular to the boundary over a wide zone to the southwest and nearly pure strike-slip motion along the Main Recent Fault close to the actual plate boundary itself. This is demonstrated by earthquake focal mechanisms.[13]
26
+
27
+ All tectonic plates have internal stress fields caused by their interactions with neighboring plates and sedimentary loading or unloading (e.g., deglaciation).[14] These stresses may be sufficient to cause failure along existing fault planes, giving rise to intraplate earthquakes.[15]
28
+
29
+ The majority of tectonic earthquakes originate in the Ring of Fire, at depths not exceeding tens of kilometers. Earthquakes occurring at a depth of less than 70 km (43 mi) are classified as "shallow-focus" earthquakes, while those with a focal-depth between 70 and 300 km (43 and 186 mi) are commonly termed "mid-focus" or "intermediate-depth" earthquakes. In subduction zones, where older and colder oceanic crust descends beneath another tectonic plate, deep-focus earthquakes may occur at much greater depths (ranging from 300 to 700 km (190 to 430 mi)).[16] These seismically active areas of subduction are known as Wadati–Benioff zones. Deep-focus earthquakes occur at a depth where the subducted lithosphere should no longer be brittle, due to the high temperature and pressure. A possible mechanism for the generation of deep-focus earthquakes is faulting caused by olivine undergoing a phase transition into a spinel structure.[17]
30
+
31
+ Earthquakes often occur in volcanic regions and are caused there both by tectonic faults and by the movement of magma in volcanoes. Such earthquakes can serve as an early warning of volcanic eruptions, as during the 1980 eruption of Mount St. Helens.[18] Earthquake swarms can serve as markers for the location of the flowing magma throughout the volcanoes. These swarms can be recorded by seismometers and tiltmeters (devices that measure ground slope) and used as sensors to predict imminent or upcoming eruptions.[19]
32
+
33
+ A tectonic earthquake begins with an initial rupture at a point on the fault surface, a process known as nucleation. The scale of the nucleation zone is uncertain, with some evidence, such as the rupture dimensions of the smallest earthquakes, suggesting that it is smaller than 100 m (330 ft), while other evidence, such as a slow component revealed by low-frequency spectra of some earthquakes, suggests that it is larger. The possibility that the nucleation involves some sort of preparation process is supported by the observation that about 40% of earthquakes are preceded by foreshocks. Once the rupture has initiated, it begins to propagate along the fault surface. The mechanics of this process are poorly understood, partly because it is difficult to recreate the high sliding velocities in a laboratory. Also, the effects of strong ground motion make it very difficult to record information close to a nucleation zone.[20]
34
+
35
+ Rupture propagation is generally modeled using a fracture mechanics approach, likening the rupture to a propagating mixed mode shear crack. The rupture velocity is a function of the fracture energy in the volume around the crack tip, increasing with decreasing fracture energy. The velocity of rupture propagation is orders of magnitude faster than the displacement velocity across the fault. Earthquake ruptures typically propagate at velocities that are in the range 70–90% of the S-wave velocity, which is independent of earthquake size. A small subset of earthquake ruptures appear to have propagated at speeds greater than the S-wave velocity. These supershear earthquakes have all been observed during large strike-slip events. The unusually wide zone of coseismic damage caused by the 2001 Kunlun earthquake has been attributed to the effects of the sonic boom developed in such earthquakes. Some earthquake ruptures travel at unusually low velocities and are referred to as slow earthquakes. A particularly dangerous form of slow earthquake is the tsunami earthquake, observed where the relatively low felt intensities, caused by the slow propagation speed of some great earthquakes, fail to alert the population of the neighboring coast, as in the 1896 Sanriku earthquake.[20]
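To put the quoted 70–90% range in concrete terms, here is a rough illustration; the 4 km/s crustal S-wave speed and the 100 km fault length are assumptions consistent with the velocity and rupture-length figures given elsewhere in the article:

```python
# Rough rupture timing, using the 70-90% of S-wave speed range quoted
# above and an assumed crustal S-wave speed of about 4 km/s.
V_S_KM_S = 4.0
FAULT_LENGTH_KM = 100.0  # hypothetical rupture length

for fraction in (0.7, 0.9):
    v_rupture = fraction * V_S_KM_S
    print(f"rupture at {fraction:.0%} of v_s: {v_rupture:.1f} km/s, "
          f"~{FAULT_LENGTH_KM / v_rupture:.0f} s to traverse {FAULT_LENGTH_KM:.0f} km")
```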
36
+
37
+ Tides may induce some seismicity. See tidal triggering of earthquakes for details.
38
+
39
+ Most earthquakes form part of a sequence, related to each other in terms of location and time.[21] Most earthquake clusters consist of small tremors that cause little to no damage, but there is a theory that earthquakes can recur in a regular pattern.[22]
40
+
41
+ An aftershock is an earthquake that occurs after a previous earthquake, the main shock. An aftershock occurs in the same region as the main shock but is always of a smaller magnitude. If an aftershock is larger than the main shock, the aftershock is redesignated as the main shock and the original main shock is redesignated as a foreshock. Aftershocks are formed as the crust around the displaced fault plane adjusts to the effects of the main shock.[21]
42
+
43
+ Earthquake swarms are sequences of earthquakes striking in a specific area within a short period of time. They differ from a main shock followed by a series of aftershocks in that no single earthquake in the sequence is obviously the main shock, and none has a notably higher magnitude than the others. An example of an earthquake swarm is the 2004 activity at Yellowstone National Park.[23] In August 2012, a swarm of earthquakes shook Southern California's Imperial Valley, showing the most recorded activity in the area since the 1970s.[24]
44
+
45
+ Sometimes a series of earthquakes occur in what has been called an earthquake storm, where the earthquakes strike a fault in clusters, each triggered by the shaking or stress redistribution of the previous earthquakes. Similar to aftershocks but on adjacent segments of fault, these storms occur over the course of years, and with some of the later earthquakes as damaging as the early ones. Such a pattern was observed in the sequence of about a dozen earthquakes that struck the North Anatolian Fault in Turkey in the 20th century and has been inferred for older anomalous clusters of large earthquakes in the Middle East.[25][26]
46
+
47
+ Quaking or shaking of the earth is a common phenomenon undoubtedly known to humans from earliest times. Prior to the development of strong-motion accelerometers that can measure peak ground speed and acceleration directly, the intensity of the earth-shaking was estimated on the basis of the observed effects, as categorized on various seismic intensity scales. Only in the last century has the source of such shaking been identified as ruptures in the Earth's crust, with the intensity of shaking at any locality dependent not only on the local ground conditions but also on the strength or magnitude of the rupture, and on its distance.[27]
48
+
49
+ The first scale for measuring earthquake magnitudes was developed by Charles F. Richter in 1935. Subsequent scales (see seismic magnitude scales) have retained a key feature, where each unit represents a ten-fold difference in the amplitude of the ground shaking and a 32-fold difference in energy. Subsequent scales are also adjusted to have approximately the same numeric value within the limits of the scale.[28]
50
+
51
+ Although the mass media commonly reports earthquake magnitudes as "Richter magnitude" or "Richter scale", standard practice by most seismological authorities is to express an earthquake's strength on the moment magnitude scale, which is based on the actual energy released by an earthquake.[29]
52
+
53
+ It is estimated that around 500,000 earthquakes occur each year, detectable with current instrumentation. About 100,000 of these can be felt.[30][31] Minor earthquakes occur nearly constantly around the world in places like California and Alaska in the U.S., as well as in El Salvador, Mexico, Guatemala, Chile, Peru, Indonesia, Philippines, Iran, Pakistan, the Azores in Portugal, Turkey, New Zealand, Greece, Italy, India, Nepal and Japan.[32] Larger earthquakes occur less frequently, the relationship being exponential; for example, roughly ten times as many earthquakes larger than magnitude 4 occur in a particular time period than earthquakes larger than magnitude 5.[33] In the (low seismicity) United Kingdom, for example, it has been calculated that the average recurrences are:
54
+ an earthquake of 3.7–4.6 every year, an earthquake of 4.7–5.5 every 10 years, and an earthquake of 5.6 or larger every 100 years.[34] This is an example of the Gutenberg–Richter law.
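The quoted recurrence rates illustrate the Gutenberg–Richter relation, log10 N = a − b·M; the sketch below assumes b = 1 (the roughly tenfold drop in frequency per magnitude unit mentioned above) and picks a = 3.7 purely so that the output mimics the UK-like rates, both illustrative assumptions:

```python
def annual_rate(magnitude: float, a: float = 3.7, b: float = 1.0) -> float:
    """Gutenberg-Richter law: expected yearly count of earthquakes with
    magnitude >= M is N = 10**(a - b*M); a and b here are illustrative."""
    return 10 ** (a - b * magnitude)

for m in (3.7, 4.7, 5.7):
    rate = annual_rate(m)
    print(f"M >= {m}: ~{rate:.2f}/yr (about one every {1 / rate:.0f} years)")
```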
55
+
56
+ The number of seismic stations has increased from about 350 in 1931 to many thousands today. As a result, many more earthquakes are reported than in the past, but this is because of the vast improvement in instrumentation, rather than an increase in the number of earthquakes. The United States Geological Survey estimates that, since 1900, there have been an average of 18 major earthquakes (magnitude 7.0–7.9) and one great earthquake (magnitude 8.0 or greater) per year, and that this average has been relatively stable.[36] In recent years, the number of major earthquakes per year has decreased, though this is probably a statistical fluctuation rather than a systematic trend.[37] More detailed statistics on the size and frequency of earthquakes are available from the United States Geological Survey (USGS).[38]
57
+ A recent increase in the number of major earthquakes has been noted, which could be explained by a cyclical pattern of periods of intense tectonic activity, interspersed with longer periods of low intensity. However, accurate recordings of earthquakes only began in the early 1900s, so it is too early to categorically state that this is the case.[39]
58
+
59
+ Most of the world's earthquakes (90%, and 81% of the largest) take place in the 40,000-kilometre-long (25,000 mi), horseshoe-shaped zone called the circum-Pacific seismic belt, known as the Pacific Ring of Fire, which for the most part bounds the Pacific Plate.[40][41] Massive earthquakes tend to occur along other plate boundaries too, such as along the Himalayan Mountains.[42]
60
+
61
+ With the rapid growth of mega-cities such as Mexico City, Tokyo and Tehran in areas of high seismic risk, some seismologists are warning that a single quake may claim the lives of up to three million people.[43]
62
+
63
+ While most earthquakes are caused by movement of the Earth's tectonic plates, human activity can also produce earthquakes. Activities both above ground and below may change the stresses and strains on the crust, including building reservoirs, extracting resources such as coal or oil, and injecting fluids underground for waste disposal or fracking.[44] Most of these earthquakes have small magnitudes. The 5.7 magnitude 2011 Oklahoma earthquake is thought to have been caused by disposing wastewater from oil production into injection wells,[45] and studies point to the state's oil industry as the cause of other earthquakes in the past century.[46] A Columbia University paper suggested that the 8.0 magnitude 2008 Sichuan earthquake was induced by loading from the Zipingpu Dam, though the link has not been conclusively proved.[47]
64
+
65
+ The instrumental scales used to describe the size of an earthquake began with the Richter magnitude scale in the 1930s. It is a relatively simple measurement of an event's amplitude, and its use has become minimal in the 21st century. Seismic waves travel through the Earth's interior and can be recorded by seismometers at great distances. The surface wave magnitude was developed in the 1950s as a means to measure remote earthquakes and to improve the accuracy for larger events. The moment magnitude scale not only measures the amplitude of the shock but also takes into account the seismic moment (total rupture area, average slip of the fault, and rigidity of the rock). The Japan Meteorological Agency seismic intensity scale, the Medvedev–Sponheuer–Karnik scale, and the Mercalli intensity scale are based on the observed effects and are related to the intensity of shaking.
66
+
67
+ Every tremor produces different types of seismic waves, which travel through rock with different velocities: the faster compressional P-waves, the transverse S-waves, and the slower surface waves that travel along the Earth's surface.
68
+
69
+ The propagation velocity of seismic waves through solid rock ranges from approximately 3 km/s (1.9 mi/s) up to 13 km/s (8.1 mi/s), depending on the density and elasticity of the medium. In the Earth's interior, the compressional P-waves travel much faster than the S-waves, by a factor of roughly 1.7. The differences in travel time from the epicenter to the observatory are a measure of the distance and can be used to image both the sources of earthquakes and structures within the Earth. The depth of the hypocenter can also be computed roughly.
70
+
71
+ In the upper crust, P-waves travel in the range 2–3 km (1.2–1.9 mi) per second (or lower) in soils and unconsolidated sediments, increasing to 3–6 km (1.9–3.7 mi) per second in solid rock. In the lower crust, they travel at about 6–7 km (3.7–4.3 mi) per second; the velocity increases within the deep mantle to about 13 km (8.1 mi) per second. The velocity of S-waves ranges from 2–3 km (1.2–1.9 mi) per second in light sediments and 4–5 km (2.5–3.1 mi) per second in the Earth's crust up to 7 km (4.3 mi) per second in the deep mantle. As a consequence, the first waves of a distant earthquake arrive at an observatory via the Earth's mantle.
72
+
73
+ On average, the distance to an earthquake in kilometers is the number of seconds between the P- and S-wave arrivals multiplied by 8.[48] Slight deviations are caused by inhomogeneities of subsurface structure. It was by such analyses of seismograms that Beno Gutenberg located the Earth's core in 1913.
74
+
75
+ S-waves and the later-arriving surface waves cause most of the damage, compared with P-waves. P-waves squeeze and expand material in the same direction they travel, whereas S-waves shake the ground up and down and back and forth.[49]
76
+
77
+ Earthquakes are not only categorized by their magnitude but also by the place where they occur. The world is divided into 754 Flinn–Engdahl regions (F-E regions), which are based on political and geographical boundaries as well as seismic activity. More active zones are divided into smaller F-E regions whereas less active zones belong to larger F-E regions.
78
+
79
+ Standard reporting of an earthquake includes its magnitude, date and time of occurrence, geographic coordinates of its epicenter, depth of the epicenter, geographical region, distances to population centers, location uncertainty, a number of parameters that are included in USGS earthquake reports (number of stations reporting, number of observations, etc.), and a unique event ID.[50]
80
+
81
+ Although relatively slow seismic waves have traditionally been used to detect earthquakes, scientists realized in 2016 that gravitational measurements could provide instantaneous detection of earthquakes, and confirmed this by analyzing gravitational records associated with the 2011 Tohoku-Oki ("Fukushima") earthquake.[51][52]
82
+
83
+ The effects of earthquakes include, but are not limited to, the following: shaking and ground rupture, soil liquefaction, landslides, fires, tsunamis, floods, and human impacts such as casualties and property damage.
84
+
85
+ Shaking and ground rupture are the main effects created by earthquakes, principally resulting in more or less severe damage to buildings and other rigid structures. The severity of the local effects depends on the complex combination of the earthquake magnitude, the distance from the epicenter, and the local geological and geomorphological conditions, which may amplify or reduce wave propagation.[53] The ground-shaking is measured by ground acceleration.
86
+
87
+ Specific local geological, geomorphological, and geostructural features can induce high levels of shaking at the ground surface even from low-intensity earthquakes. This effect is called site or local amplification. It is principally due to the transfer of seismic motion from hard deep soils to soft superficial soils and to the focusing of seismic energy caused by the typical geometrical setting of such deposits.
88
+
89
+ Ground rupture is a visible breaking and displacement of the Earth's surface along the trace of the fault, which may be of the order of several meters in the case of major earthquakes. Ground rupture is a major risk for large engineering structures such as dams, bridges, and nuclear power stations and requires careful mapping of existing faults to identify any that are likely to break the ground surface within the life of the structure.[54]
90
+
91
+ Soil liquefaction occurs when, because of the shaking, water-saturated granular material (such as sand) temporarily loses its strength and transforms from a solid to a liquid. Soil liquefaction may cause rigid structures, like buildings and bridges, to tilt or sink into the liquefied deposits. For example, in the 1964 Alaska earthquake, soil liquefaction caused many buildings to sink into the ground, eventually collapsing upon themselves.[55]
92
+
93
+ An earthquake may cause injury and loss of life, road and bridge damage, general property damage, and collapse or destabilization (potentially leading to future collapse) of buildings. The aftermath may bring disease, a lack of basic necessities, mental consequences for survivors such as panic attacks and depression,[56] and higher insurance premiums.
94
+
95
+ Earthquakes can produce slope instability leading to landslides, a major geological hazard. Landslide danger may persist while emergency personnel are attempting rescue.[57]
96
+
97
+ Earthquakes can cause fires by damaging electrical power or gas lines. In the event of water mains rupturing and a loss of pressure, it may also become difficult to stop the spread of a fire once it has started. For example, more deaths in the 1906 San Francisco earthquake were caused by fire than by the earthquake itself.[58]
98
+
99
+ Tsunamis are long-wavelength, long-period sea waves produced by the sudden or abrupt movement of large volumes of water—including when an earthquake occurs at sea. In the open ocean the distance between wave crests can surpass 100 kilometers (62 mi), and the wave periods can vary from five minutes to one hour. Such tsunamis travel 600–800 kilometers per hour (373–497 miles per hour), depending on water depth. Large waves produced by an earthquake or a submarine landslide can overrun nearby coastal areas in a matter of minutes. Tsunamis can also travel thousands of kilometers across open ocean and wreak destruction on far shores hours after the earthquake that generated them.[59]
100
+
101
+ Ordinarily, subduction earthquakes under magnitude 7.5 do not cause tsunamis, although smaller events have occasionally done so. Most destructive tsunamis are caused by earthquakes of magnitude 7.5 or more.[59]
102
+
103
+ Floods may be secondary effects of earthquakes if dams are damaged. Earthquakes may also cause landslides that dam rivers; these landslide dams can subsequently collapse and cause floods.[60]
104
+
105
+ The terrain below Sarez Lake in Tajikistan is in danger of catastrophic flooding if the landslide dam that impounds the lake, known as the Usoi Dam, were to fail during a future earthquake. Impact projections suggest the flood could affect roughly 5 million people.[61]
106
+
107
+ One of the most devastating earthquakes in recorded history was the 1556 Shaanxi earthquake, which occurred on 23 January 1556 in Shaanxi province, China. More than 830,000 people died.[63] Most houses in the area were yaodongs—dwellings carved out of loess hillsides—and many victims were killed when these structures collapsed. The 1976 Tangshan earthquake, which killed between 240,000 and 655,000 people, was the deadliest of the 20th century.[64]
108
+
109
+ The 1960 Chilean earthquake is the largest earthquake that has been measured on a seismograph, reaching magnitude 9.5 on 22 May 1960.[30][31] Its epicenter was near Cañete, Chile. The energy released was approximately twice that of the next most powerful earthquake, the Good Friday earthquake (March 27, 1964), which was centered in Prince William Sound, Alaska.[65][66] The ten largest recorded earthquakes have all been megathrust earthquakes; however, of these ten, only the 2004 Indian Ocean earthquake is simultaneously one of the deadliest earthquakes in history.
110
+
111
+ Earthquakes that caused the greatest loss of life, while powerful, were deadly because of their proximity to either heavily populated areas or the ocean, where earthquakes often create tsunamis that can devastate communities thousands of kilometers away. Regions most at risk for great loss of life include those where earthquakes are relatively rare but powerful, and poor regions with lax, unenforced, or nonexistent seismic building codes.
112
+
113
+ Earthquake prediction is a branch of the science of seismology concerned with the specification of the time, location, and magnitude of future earthquakes within stated limits.[67] Many methods have been developed for predicting the time and place in which earthquakes will occur. Despite considerable research efforts by seismologists, scientifically reproducible predictions cannot yet be made to a specific day or month.[68]
114
+
115
+ While forecasting is usually considered to be a type of prediction, earthquake forecasting is often differentiated from earthquake prediction. Earthquake forecasting is concerned with the probabilistic assessment of general earthquake hazard, including the frequency and magnitude of damaging earthquakes in a given area over years or decades.[69] For well-understood faults the probability that a segment may rupture during the next few decades can be estimated.[70][71]
116
+
117
+ Earthquake warning systems have been developed that can provide regional notification of an earthquake in progress, but before the ground surface has begun to move, potentially allowing people within the system's range to seek shelter before the earthquake's impact is felt.
118
+
119
+ The objective of earthquake engineering is to foresee the impact of earthquakes on buildings and other structures and to design such structures to minimize the risk of damage. Existing structures can be modified by seismic retrofitting to improve their resistance to earthquakes. Earthquake insurance can provide building owners with financial protection against losses resulting from earthquakes. Emergency management strategies can be employed by a government or organization to mitigate risks and prepare for consequences.
120
+
121
+ Individuals can also take preparedness steps like securing water heaters and heavy items that could injure someone, locating shutoffs for utilities, and being educated about what to do when shaking starts. For areas near large bodies of water, earthquake preparedness encompasses the possibility of a tsunami caused by a large quake.
122
+
123
+ From the lifetime of the Greek philosopher Anaxagoras in the 5th century BCE to the 14th century CE, earthquakes were usually attributed to "air (vapors) in the cavities of the Earth."[72] Thales of Miletus (625–547 BCE) was the only documented person who believed that earthquakes were caused by tension between the earth and water.[72] Other theories existed, including the Greek philosopher Anaximenes' (585–526 BCE) belief that brief episodes of dryness and wetness caused seismic activity. The Greek philosopher Democritus (460–371 BCE) blamed water in general for earthquakes.[72] Pliny the Elder called earthquakes "underground thunderstorms".[72]
124
+
125
+ Some recent studies by geologists suggest that global warming is one of the causes of increased seismic activity. According to these studies, melting glaciers and rising sea levels disturb the balance of pressure on Earth's tectonic plates, thus causing an increase in the frequency and intensity of earthquakes.[73]
126
+
127
+ In Norse mythology, earthquakes were explained as the violent struggling of the god Loki. When Loki, god of mischief and strife, murdered Baldr, god of beauty and light, he was punished by being bound in a cave with a poisonous serpent placed above his head dripping venom. Loki's wife Sigyn stood by him with a bowl to catch the poison, but whenever she had to empty the bowl the poison dripped on Loki's face, forcing him to jerk his head away and thrash against his bonds, which caused the earth to tremble.[74]
128
+
129
+ In Greek mythology, Poseidon was the cause and god of earthquakes. When he was in a bad mood, he struck the ground with a trident, causing earthquakes and other calamities. He also used earthquakes to punish and inflict fear upon people as revenge.[75]
130
+
131
+ In Japanese mythology, Namazu (鯰) is a giant catfish who causes earthquakes. Namazu lives in the mud beneath the earth, and is guarded by the god Kashima who restrains the fish with a stone. When Kashima lets his guard fall, Namazu thrashes about, causing violent earthquakes.[76]
132
+
133
+ In modern popular culture, the portrayal of earthquakes is shaped by the memory of great cities laid waste, such as Kobe in 1995 or San Francisco in 1906.[77] Fictional earthquakes tend to strike suddenly and without warning.[77] For this reason, stories about earthquakes generally begin with the disaster and focus on its immediate aftermath, as in Short Walk to Daylight (1972), The Ragged Edge (1968) or Aftershock: Earthquake in New York (1999).[77] A notable example is Heinrich von Kleist's classic novella, The Earthquake in Chile, which describes the destruction of Santiago in 1647. Haruki Murakami's short fiction collection After the Quake depicts the consequences of the Kobe earthquake of 1995.
134
+
135
+ The most popular single earthquake in fiction is the hypothetical "Big One" expected of California's San Andreas Fault someday, as depicted in the novels Richter 10 (1996), Goodbye California (1977), 2012 (2009) and San Andreas (2015) among other works.[77] Jacob M. Appel's widely anthologized short story, A Comparative Seismology, features a con artist who convinces an elderly woman that an apocalyptic earthquake is imminent.[78]
136
+
137
+ Contemporary depictions of earthquakes in film vary in how they reflect human psychological reactions to the actual trauma that can be caused to directly afflicted families and their loved ones.[79] Disaster mental-health response research emphasizes the need to be aware of the different roles of the loss of family and key community members, the loss of home and familiar surroundings, and the loss of essential supplies and services needed to maintain survival.[80][81] Particularly for children, the clear availability of caregiving adults who are able to protect, nourish, and clothe them in the aftermath of the earthquake, and to help them make sense of what has befallen them, has been shown to be even more important to their emotional and physical health than the simple giving of provisions.[82] As was observed after other disasters involving destruction and loss of life and their media depictions, most recently the 2010 Haiti earthquake, it is also important not to pathologize the reactions to loss, displacement, or disruption of governmental administration and services, but rather to validate these reactions and to support constructive problem-solving and reflection on how one might improve the conditions of those affected.[83]
en/5339.html.txt ADDED
@@ -0,0 +1,96 @@
1
+
2
+
3
+
4
+
5
+ Salt is a mineral composed primarily of sodium chloride (NaCl), a chemical compound belonging to the larger class of salts; salt in its natural form as a crystalline mineral is known as rock salt or halite. Salt is present in vast quantities in seawater, where it is the main mineral constituent. The open ocean has about 35 grams (1.2 oz) of solids per liter of sea water, a salinity of 3.5%.
6
+
7
+ Salt is essential for life in general, and saltiness is one of the basic human tastes. Salt is one of the oldest and most ubiquitous food seasonings, and salting is an important method of food preservation.
8
+
9
+ Some of the earliest evidence of salt processing dates to around 6,000 BC, when people living in the area of present-day Romania boiled spring water to extract salts; a salt-works in China dates to approximately the same period. Salt was also prized by the ancient Hebrews, the Greeks, the Romans, the Byzantines, the Hittites, Egyptians, and the Indians. Salt became an important article of trade and was transported by boat across the Mediterranean Sea, along specially built salt roads, and across the Sahara on camel caravans. The scarcity and universal need for salt have led nations to go to war over it and use it to raise tax revenues. Salt is used in religious ceremonies and has other cultural and traditional significance.
10
+
11
+ Salt is processed from salt mines, and by the evaporation of seawater (sea salt) and mineral-rich spring water in shallow pools. Its major industrial products are caustic soda and chlorine; salt is used in many industrial processes including the manufacture of polyvinyl chloride, plastics, paper pulp and many other products. Of the annual global production of around two hundred million tonnes of salt, about 6% is used for human consumption. Other uses include water conditioning processes, de-icing highways, and agricultural use. Edible salt is sold in forms such as sea salt and table salt which usually contains an anti-caking agent and may be iodised to prevent iodine deficiency. As well as its use in cooking and at the table, salt is present in many processed foods.
12
+
13
+ Sodium is an essential nutrient for human health via its role as an electrolyte and osmotic solute.[1][2][3] Excessive salt consumption may increase the risk of cardiovascular diseases, such as hypertension, in children and adults. Such health effects of salt have long been studied. Accordingly, numerous world health associations and experts in developed countries recommend reducing consumption of popular salty foods.[3][4] The World Health Organization recommends that adults should consume less than 2,000 mg of sodium, equivalent to 5 grams of salt per day.[5]
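The equivalence quoted here (2,000 mg of sodium, or about 5 g of salt) is simple stoichiometry: sodium makes up about 39% of sodium chloride by mass (22.99 of 58.44 g/mol). A quick check:

```python
NA_MOLAR = 22.99    # g/mol, sodium
NACL_MOLAR = 58.44  # g/mol, sodium chloride

def salt_from_sodium_g(sodium_g: float) -> float:
    """Grams of NaCl that contain the given mass of sodium."""
    return sodium_g * NACL_MOLAR / NA_MOLAR

print(f"{salt_from_sodium_g(2.0):.2f} g of salt")  # ~5.08 g, the WHO figure
```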
14
+
15
+ Throughout history, the availability of salt has been pivotal to civilization. Solnitsata, in Bulgaria, now thought to have been the first city in Europe, was a salt mine, providing the area now known as the Balkans with salt from 5400 BC.[6] Even the name Solnitsata means "salt works".
16
+
17
+ While people have used canning and artificial refrigeration to preserve food for the last hundred years or so, salt has been the best-known food preservative, especially for meat, for many thousands of years.[7] A very ancient salt-works operation has been discovered at the Poiana Slatinei archaeological site next to a salt spring in Lunca, Neamț County, Romania. Evidence indicates that Neolithic people of the Precucuteni Culture were boiling the salt-laden spring water through the process of briquetage to extract the salt as far back as 6050 BC.[8] The salt extracted from this operation may have had a direct correlation to the rapid growth of this society's population soon after its initial production began.[9] The harvest of salt from the surface of Xiechi Lake near Yuncheng in Shanxi, China, dates back to at least 6000 BC, making it one of the oldest verifiable saltworks.[10]
18
+
19
+ There is more salt in animal tissues, such as meat, blood, and milk, than in plant tissues.[11] Nomads who subsist on their flocks and herds do not eat salt with their food, but agriculturalists, feeding mainly on cereals and vegetable matter, need to supplement their diet with salt.[12] With the spread of civilization, salt became one of the world's main trading commodities. It was of high value to the ancient Hebrews, the Greeks, the Romans, the Byzantines, the Hittites and other peoples of antiquity. In the Middle East, salt was used to ceremonially seal an agreement, and the ancient Hebrews made a "covenant of salt" with God and sprinkled salt on their offerings to show their trust in him.[13] An ancient practice in time of war was salting the earth: scattering salt around in a defeated city to prevent plant growth. The Bible tells the story of King Abimelech, who was said to have done this at Shechem,[14] and various texts claim that the Roman general Scipio Aemilianus Africanus ploughed over and sowed the city of Carthage with salt after it was defeated in the Third Punic War (146 BC).[15]
20
+
21
+ Salt may have been used for barter in connection with the obsidian trade in Anatolia in the Neolithic Era.[16] Salt was included among funeral offerings found in ancient Egyptian tombs from the third millennium BC, as were salted birds and salt fish.[17] From about 2800 BC, the Egyptians began exporting salt fish to the Phoenicians in return for Lebanon cedar, glass, and the dye Tyrian purple; the Phoenicians traded Egyptian salted fish and salt from North Africa throughout their Mediterranean trade empire.[18] Herodotus described salt trading routes across Libya in the 5th century BC. In the early years of the Roman Empire, roads were built to transport the salt imported at Ostia to the capital.[19]
22
+
23
+ In Africa, salt was used as currency south of the Sahara, and slabs of rock salt were used as coins in Abyssinia.[12] Moorish merchants in the 6th century reportedly traded salt for gold, weight for weight. The Tuareg have traditionally maintained routes across the Sahara especially for the transportation of salt by Azalai (salt caravans). The caravans still cross the desert from southern Niger to Bilma, although much of the trade now takes place by truck. Each camel takes two bales of fodder and two of trade goods northwards and returns laden with salt pillars and dates.[20] In Gabon, before the arrival of Europeans, the coast people carried on a remunerative trade with those of the interior by the medium of sea salt. This was gradually displaced by the salt that Europeans brought in sacks, so that the coast natives lost their previous profits; as of the author's writing in 1958, sea salt was still the currency best appreciated in the interior.[21]
24
+
25
+ Salzburg, Hallstatt, and Hallein lie within 17 km (11 mi) of each other on the river Salzach in central Austria in an area with extensive salt deposits. Salzach literally means "salt river" and Salzburg "salt castle", both taking their names from the German word Salz, meaning salt. Hallstatt was the site of the world's first salt mine.[22] The town gave its name to the Hallstatt culture that began mining for salt in the area in about 800 BC. Around 400 BC, the townsfolk, who had previously used pickaxes and shovels, began open-pan salt making. During the first millennium BC, Celtic communities grew rich trading salt and salted meat to Ancient Greece and Ancient Rome in exchange for wine and other luxuries.[7]
26
+
27
+ The word salary comes from the Latin word for salt. The reason for this is unknown; a persistent modern claim that the Roman Legions were sometimes paid in salt is baseless.[23][24][25] The word salad literally means "salted", and comes from the ancient Roman practice of salting leaf vegetables.[26]
28
+
29
+ Wars have been fought over salt. Venice fought and won a war with Genoa over the product, and it played an important part in the American Revolution. Cities on overland trade routes grew rich by levying duties,[27] and towns like Liverpool flourished on the export of salt extracted from the salt mines of Cheshire.[28] Various governments have at different times imposed salt taxes on their peoples. The voyages of Christopher Columbus are said to have been financed from salt production in southern Spain, and the oppressive salt tax in France was one of the causes of the French Revolution. After being repealed, this tax was reimposed by Napoleon when he became emperor to pay for his foreign wars, and was not finally abolished until 1946.[27] In 1930, Mahatma Gandhi led at least 100,000 people on the "Dandi March" or "Salt Satyagraha", in which protesters made their own salt from the sea thus defying British rule and avoiding paying the salt tax. This civil disobedience inspired millions of common people and elevated the Indian independence movement from an elitist movement to a national struggle.[29]
30
+
31
+ Salt is mostly sodium chloride, the ionic compound with the formula NaCl, representing equal proportions of sodium and chlorine. Sea salt and freshly mined salt (much of which is sea salt from prehistoric seas) also contain small amounts of trace elements, which in these small quantities are sometimes said to benefit plant and animal health. Mined salt is often refined in the production of table salt; it is dissolved in water, purified via precipitation of other minerals out of solution, and re-evaporated. During this same refining process it is often also iodized. Salt crystals are translucent and cubic in shape; they normally appear white, but impurities may give them a blue or purple tinge. The molar mass of salt is 58.443 g/mol, its melting point is 801 °C (1,474 °F) and its boiling point 1,465 °C (2,669 °F). Its density is 2.17 grams per cubic centimetre and it is readily soluble in water. When dissolved in water it separates into Na+ and Cl− ions, and its solubility is 359 grams per litre.[30] From cold solutions, salt crystallises as the dihydrate NaCl·2H2O. Solutions of sodium chloride have very different properties from those of pure water; the freezing point is −21.12 °C (−6.02 °F) for 23.31 wt% of salt, and the boiling point of a saturated salt solution is around 108.7 °C (227.7 °F).[31]
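+
+ To make these figures concrete, they can be combined with simple arithmetic. The following is a minimal illustrative sketch in Python, using only the constants quoted above plus the standard atomic masses of sodium and chlorine; it is not drawn from any chemistry library:
+
+ # Constants quoted in the text above.
+ MOLAR_MASS = 58.443        # g/mol for NaCl
+ SOLUBILITY = 359.0         # grams of NaCl per litre of water
+
+ # Standard atomic mass of sodium (22.990; with chlorine's 35.453 it sums to the molar mass above).
+ sodium_fraction = 22.990 / MOLAR_MASS        # ~0.393, i.e. just under 40% sodium by weight
+
+ # Approximate molar concentration of a saturated solution.
+ saturated_molarity = SOLUBILITY / MOLAR_MASS  # ~6.1 mol/L
+
+ print(f"sodium mass fraction: {sodium_fraction:.3f}")
+ print(f"saturated solution: {saturated_molarity:.2f} mol/L")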
32
+
33
+ Salt is essential to the health of humans and other animals, and it is one of the five basic taste sensations.[32] Salt is used in many cuisines around the world, and it is often found in salt shakers on dining tables for diners to season food to their own taste. Salt is also an ingredient in many manufactured foodstuffs. Table salt is a refined salt containing about 97 to 99 percent sodium chloride.[33][34][35] Usually, anticaking agents such as sodium aluminosilicate or magnesium carbonate are added to make it free-flowing. Iodized salt, containing potassium iodide, is widely available. Some people put a desiccant, such as a few grains of uncooked rice[36] or a saltine cracker, in their salt shakers to absorb extra moisture and help break up salt clumps that may otherwise form.[37]
34
+
35
+ Some table salt sold for consumption contains additives which address a variety of health concerns, especially in the developing world. The identities and amounts of additives vary widely from country to country. Iodine is an important micronutrient for humans, and a deficiency of the element can cause lowered production of thyroxine (hypothyroidism) and enlargement of the thyroid gland (endemic goitre) in adults or cretinism in children.[38] Iodized salt has been used to correct these conditions since 1924[39] and consists of table salt mixed with a minute amount of potassium iodide, sodium iodide or sodium iodate. A small amount of dextrose may also be added to stabilize the iodine.[40] Iodine deficiency affects about two billion people around the world and is the leading preventable cause of mental retardation.[41] Iodized table salt has significantly reduced disorders of iodine deficiency in countries where it is used.[42]
36
+
37
+ The amount of iodine and the specific iodine compound added to salt varies from country to country. In the United States, the Food and Drug Administration (FDA) recommends [21 CFR 101.9 (c)(8)(iv)] 150 micrograms of iodine per day for both men and women. US iodized salt contains 46–77 ppm (parts per million), whereas in the UK the iodine content of iodized salt is recommended to be 10–22 ppm.[43]
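+
+ As a rough worked example (an illustrative sketch only, assuming iodized salt were the sole dietary source of iodine), the ppm figures translate into daily salt quantities as follows; 1 ppm is 1 microgram of iodine per gram of salt:
+
+ IODINE_TARGET_UG = 150.0           # FDA-recommended micrograms of iodine per day
+ US_IODIZATION_PPM = (46.0, 77.0)   # US iodized salt: micrograms of iodine per gram of salt
+
+ for ppm in US_IODIZATION_PPM:
+     grams_of_salt = IODINE_TARGET_UG / ppm
+     print(f"at {ppm:.0f} ppm: {grams_of_salt:.1f} g of iodized salt per day")
+ # Prints roughly 3.3 g (at 46 ppm) and 1.9 g (at 77 ppm).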
38
+
39
+ Sodium ferrocyanide, also known as yellow prussiate of soda, is sometimes added to salt as an anticaking agent. The additive is considered safe for human consumption.[44][45] Such anticaking agents have been added since at least 1911 when magnesium carbonate was first added to salt to make it flow more freely.[46] The safety of sodium ferrocyanide as a food additive was found to be provisionally acceptable by the Committee on Toxicity in 1988.[44] Other anticaking agents sometimes used include tricalcium phosphate, calcium or magnesium carbonates, fatty acid salts (acid salts), magnesium oxide, silicon dioxide, calcium silicate, sodium aluminosilicate and calcium aluminosilicate. Both the European Union and the United States Food and Drug Administration permitted the use of aluminium in the latter two compounds.[47]
40
+
41
+ In "doubly fortified salt", both iodide and iron salts are added. The latter alleviates iron deficiency anaemia, which interferes with the mental development of an estimated 40% of infants in the developing world. A typical iron source is ferrous fumarate.[48] Another additive, especially important for pregnant women, is folic acid (vitamin B9), which gives the table salt a yellow color. Folic acid helps prevent neural tube defects and anaemia, which affect young mothers, especially in developing countries.[48]
42
+
43
+ A lack of fluoride in the diet is the cause of a greatly increased incidence of dental caries.[49] Fluoride salts can be added to table salt with the goal of reducing tooth decay, especially in countries that have not benefited from fluoridated toothpastes and fluoridated water. The practice is more common in some European countries where water fluoridation is not carried out. In France, 35% of the table salt sold contains added sodium fluoride.[48]
44
+
45
+ Unrefined sea salt contains small amounts of magnesium and calcium halides and sulfates, traces of algal products, salt-resistant bacteria and sediment particles. The calcium and magnesium salts confer a faintly bitter overtone, and they make unrefined sea salt hygroscopic (i.e., it gradually absorbs moisture from air if stored uncovered). Algal products contribute a mildly "fishy" or "sea-air" odour, the latter from organobromine compounds. Sediments, the proportion of which varies with the source, give the salt a dull grey appearance. Since taste and aroma compounds are often detectable by humans in minute concentrations, sea salt may have a more complex flavor than pure sodium chloride when sprinkled on top of food. When salt is added during cooking, however, these flavors are likely to be overwhelmed by those of the food ingredients.[50] The refined salt industry cites scientific studies saying that raw sea and rock salts do not contain enough iodine salts to prevent iodine deficiency diseases.[51]
46
+
47
+ Himalayan salt is known for its distinct pink hue. It is used in cooking as a substitute for table salt. It is also used as cookware, in salt lamps, and in spas. It is mined from the Salt Range mountains in Pakistan.
48
+
49
+ Different natural salts have different mineralities depending on their source, giving each one a unique flavour. Fleur de sel, a natural sea salt from the surface of evaporating brine in salt pans, has a unique flavour varying with the region from which it is produced. In traditional Korean cuisine, so-called "bamboo salt" is prepared by roasting salt[52] in a bamboo container plugged with mud at both ends. This product absorbs minerals from the bamboo and the mud, and has been claimed to increase the anticlastogenic and antimutagenic properties of doenjang (a fermented bean paste).[53]
50
+
51
+ Kosher or kitchen salt has a larger grain size than table salt and is used in cooking. It can be useful for brining, bread or pretzel making and as a scrubbing agent when combined with oil.[54]
52
+
53
+ Pickling salt is made of ultra-fine grains that dissolve quickly, speeding the preparation of brine.
54
+
55
+ Salt is present in most foods, but in naturally occurring foodstuffs such as meats, vegetables and fruit, it is present in very small quantities. It is often added to processed foods (such as canned foods and especially salted foods, pickled foods, and snack foods or other convenience foods), where it functions as both a preservative and a flavoring. Dairy salt is used in the preparation of butter and cheese products.[55] As a flavoring, salt enhances the taste of other foods by suppressing their bitterness, making them more palatable and relatively sweeter.[56]
56
+
57
+ Before the advent of electrically powered refrigeration, salting was one of the main methods of food preservation. Thus, herring contains 67 mg sodium per 100 g, while kipper, its preserved form, contains 990 mg. Similarly, pork typically contains 63 mg while bacon contains 1,480 mg, and potatoes contain 7 mg but potato crisps 800 mg per 100 g.[11] Salt is also used in cooking, such as with salt crusts. The main sources of salt in the Western diet, apart from direct use of sodium chloride, are bread and cereal products, meat products and milk and dairy products.[11]
58
+
59
+ In many East Asian cultures, salt is not traditionally used as a condiment.[57] In its place, condiments such as soy sauce, fish sauce and oyster sauce tend to have a high sodium content and fill a similar role to table salt in western cultures. They are most often used for cooking rather than as table condiments.[58]
60
+
61
+ Table salt is made up of just under 40% sodium by weight, so a 6 g serving (1 teaspoon) contains about 2,400 mg of sodium.[59] Sodium serves a vital purpose in the human body: via its role as an electrolyte, it helps nerves and muscles to function correctly, and it is one factor involved in the osmotic regulation of water content in body organs (fluid balance).[60] Most of the sodium in the Western diet comes from salt.[3] The habitual salt intake in many Western countries is about 10 g per day, and it is higher than that in many countries in Eastern Europe and Asia.[61] The high level of sodium in many processed foods has a major impact on the total amount consumed.[62] In the United States, 75% of the sodium eaten comes from processed and restaurant foods, 11% from cooking and table use and the rest from what is found naturally in foodstuffs.[63]
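+
+ The salt-to-sodium conversion is simple proportion; the short sketch below (an illustration using the just-under-40% figure above, with a hypothetical helper name) reproduces the 6 g example:
+
+ SODIUM_FRACTION = 0.393   # sodium share of table salt by weight (just under 40%)
+
+ def sodium_mg_from_salt_g(salt_g: float) -> float:
+     """Milligrams of sodium in a given mass of table salt."""
+     return salt_g * SODIUM_FRACTION * 1000.0
+
+ print(sodium_mg_from_salt_g(6.0))    # ~2,360 mg, i.e. about 2,400 mg per teaspoon
+ print(sodium_mg_from_salt_g(10.0))   # ~3,930 mg for a habitual 10 g/day intake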
62
+
63
+ Because consuming too much sodium increases risk of cardiovascular diseases,[3] health organizations generally recommend that people reduce their dietary intake of salt.[3][64][65][66] High sodium intake is associated with a greater risk of stroke, total cardiovascular disease and kidney disease.[2][61] A reduction in sodium intake by 1,000 mg per day may reduce cardiovascular disease by about 30 percent.[1][3] In adults and children with no acute illness, a decrease in the intake of sodium from the typical high levels reduces blood pressure.[65][67] A low sodium diet results in a greater improvement in blood pressure in people with hypertension.[68][69]
64
+
65
+ The World Health Organization recommends that adults should consume less than 2,000 mg of sodium (which is contained in 5 g of salt) per day.[64] Guidelines by the United States recommend that people with hypertension, African Americans, and middle-aged and older adults should limit consumption to no more than 1,500 mg of sodium per day and meet the potassium recommendation of 4,700 mg/day with a healthy diet of fruits and vegetables.[3][70]
66
+
67
+ While reduction of sodium intake to less than 2,300 mg per day is recommended by developed countries,[3] one review recommended that sodium intake be reduced further, to 1,200 mg (contained in 3 g of salt) per day, since the greater the reduction in salt intake, the greater the fall in systolic blood pressure across all age groups and ethnicities.[65] Another review indicated that there is inconsistent or insufficient evidence to conclude that reducing sodium intake to lower than 2,300 mg per day is either beneficial or harmful.[71]
68
+
69
+ More recent evidence suggests a much more complicated relationship between salt and cardiovascular disease. According to a systematic review of multiple large studies, "the association between sodium consumption and cardiovascular disease or mortality is U-shaped, with increased risk at both high and low sodium intake".[72] The findings showed that increased mortality from excessive salt intake was primarily associated with individuals with hypertension. The levels of increased mortality among those with restricted salt intake appeared to be similar regardless of blood pressure. This evidence suggests that while those with hypertension should primarily focus on reducing sodium to recommended levels, all groups should seek to maintain a healthy level of sodium intake of between 4 and 5 grams a day.[72]
70
+
71
+ One of the two most prominent dietary risks for disability in the world is eating too much sodium.[73]
72
+
73
+ Only about 6% of the salt manufactured in the world is used in food. Of the remainder, 12% is used in water conditioning processes, 8% goes for de-icing highways and 6% is used in agriculture. The rest (68%) is used for manufacturing and other industrial processes,[74] and sodium chloride is one of the largest inorganic raw materials used by volume. Its major chemical products are caustic soda and chlorine, which are produced by the electrolysis of a purified brine solution. These are used in the manufacture of PVC, plastics, paper pulp and many other inorganic and organic compounds. Salt is also used as a flux in the production of aluminium. For this purpose, a layer of melted salt floats on top of the molten metal and removes iron and other metal contaminants. It is also used in the manufacture of soaps and glycerine, where it is added to the vat to precipitate out the saponified products. As an emulsifier, salt is used in the manufacture of synthetic rubber, and another use is in the firing of pottery, when salt added to the furnace vaporises before condensing onto the surface of the ceramic material, forming a strong glaze.[75]
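+
+ For reference, the underlying reaction is the standard chlor-alkali electrolysis of brine (a general textbook equation, not a description of any particular plant): chlorine is evolved at the anode, hydrogen at the cathode, and sodium hydroxide (caustic soda) remains in solution:
+
+ 2 NaCl + 2 H2O → Cl2 + H2 + 2 NaOH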
74
+
75
+ When drilling through loose materials such as sand or gravel, salt may be added to the drilling fluid to provide a stable "wall" to prevent the hole collapsing. There are many other processes in which salt is involved. These include its use as a mordant in textile dyeing, to regenerate resins in water softening, for the tanning of hides, the preservation of meat and fish and the canning of meat and vegetables.[75][76][77]
76
+
77
+ Food-grade salt accounts for only a small part of salt production in industrialized countries (7% in Europe),[78] although worldwide, food uses account for 17.5% of total production.[79]
78
+
79
+ In 2018, total world production of salt was 300 million tonnes, the top six producers being China (68 million), the United States (42 million), India (29 million), Germany (13 million), Canada (13 million) and Australia (12 million).[80]
80
+
81
+ The manufacture of salt is one of the oldest chemical industries.[81] A major source of salt is seawater, which has a salinity of approximately 3.5%. This means that there are about 35 grams (1.2 oz) of dissolved salts, predominantly sodium (Na+) and chloride (Cl−) ions, per kilogram (2.2 lbs) of water.[82] The world's oceans are a virtually inexhaustible source of salt, and this abundance of supply means that reserves have not been calculated.[76] The evaporation of seawater is the production method of choice in marine countries with high evaporation and low precipitation rates. Salt evaporation ponds are filled from the ocean and salt crystals can be harvested as the water dries up. Sometimes these ponds have vivid colours, as some species of algae and other micro-organisms thrive in conditions of high salinity.[83]
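+
+ The salinity figure converts directly into recoverable salt per volume of seawater. The sketch below assumes a typical seawater density of about 1.025 kg per litre (an assumed round value; actual density varies with temperature and location):
+
+ SALINITY = 0.035           # 35 g of dissolved salts per kg of seawater
+ DENSITY_KG_PER_L = 1.025   # assumed typical seawater density
+
+ litres_per_cubic_metre = 1000
+ mass_of_seawater_kg = litres_per_cubic_metre * DENSITY_KG_PER_L
+ salt_kg = mass_of_seawater_kg * SALINITY
+ print(f"about {salt_kg:.0f} kg of dissolved salts per cubic metre")   # ~36 kg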
82
+
83
+ Elsewhere, salt is extracted from the vast sedimentary deposits which have been laid down over the millennia from the evaporation of seas and lakes. These are either mined directly, producing rock salt, or are extracted in solution by pumping water into the deposit. In either case, the salt may be purified by mechanical evaporation of brine. Traditionally, this was done in shallow open pans which were heated to increase the rate of evaporation. More recently, the process has been performed in pans under vacuum.[77] The raw salt is refined to purify it and improve its storage and handling characteristics. This usually involves recrystallization, during which a brine solution is treated with chemicals that precipitate most impurities (largely magnesium and calcium salts). Multiple stages of evaporation are then used to collect pure sodium chloride crystals, which are kiln-dried.[84] Some salt is produced using the Alberger process, which involves vacuum pan evaporation combined with the seeding of the solution with cubic crystals, and produces a grainy-type flake.[85] The Ayoreo, an indigenous group from the Paraguayan Chaco, obtain their salt from the ash produced by burning the timber of the Indian salt tree (Maytenus vitis-idaea) and other trees.[86]
84
+
85
+ One of the largest salt mining operations in the world is at the Khewra Salt Mine in Pakistan. The mine has nineteen storeys, eleven of which are underground, and 400 km (250 mi) of passages. The salt is dug out by the room and pillar method, where about half the material is left in place to support the upper levels. Extraction of Himalayan salt is expected to last 350 years at the present rate of extraction of around 385,000 tons per annum.[87]
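+
+ (At that rate, the quoted 350-year horizon implies roughly 385,000 × 350 ≈ 135 million tons of salt still expected to be recoverable, assuming the extraction rate stays constant.)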
86
+
87
+ Salt has long held an important place in religion and culture. At the time of Brahmanic sacrifices, in Hittite rituals and during festivals held by Semites and Greeks at the time of the new moon, salt was thrown into a fire where it produced crackling noises.[88] The ancient Egyptians, Greeks and Romans invoked their gods with offerings of salt and water and some people think this to be the origin of Holy Water in the Christian faith.[89] In Aztec mythology, Huixtocihuatl was a fertility goddess who presided over salt and salt water.[90]
88
+
89
+ Salt is considered to be a very auspicious substance in Hinduism and is used in particular religious ceremonies like house-warmings and weddings.[91] In Jainism, devotees lay an offering of raw rice with a pinch of salt before a deity to signify their devotion and salt is sprinkled on a person's cremated remains before the ashes are buried.[92] Salt is believed to ward off evil spirits in Mahayana Buddhist tradition, and when returning home from a funeral, a pinch of salt is thrown over the left shoulder as this prevents evil spirits from entering the house.[93] In Shinto, Shio (塩, lit. "salt") is used for ritual purification of locations and people (harae, specifically shubatsu), and small piles of salt are placed in dishes by the entrance of establishments for the two-fold purposes of warding off evil and attracting patrons.[94]
90
+
91
+ In the Hebrew Bible, there are thirty-five verses which mention salt.[95] One of these mentions Lot's wife, who was turned into a pillar of salt when she looked back at the cities of Sodom and Gomorrah (Genesis 19:26) as they were destroyed. When the judge Abimelech destroyed the city of Shechem, he is said to have "sown salt on it," probably as a curse on anyone who would re-inhabit it (Judges 9:45). The Book of Job contains the first mention of salt as a condiment. "Can that which is unsavoury be eaten without salt? or is there any taste in the white of an egg?" (Job 6:6).[95] In the New Testament, six verses mention salt. In the Sermon on the Mount, Jesus referred to his followers as the "salt of the earth". The apostle Paul also encouraged Christians to "let your conversation be always full of grace, seasoned with salt" (Colossians 4:6).[95] Salt is mandatory in the rite of the Tridentine Mass.[96] Salt is used in the third item (which includes an Exorcism) of the Celtic Consecration (cf. Gallican Rite) that is employed in the consecration of a church. Salt may be added to the water "where it is customary" in the Roman Catholic rite of Holy water.[96]
92
+
93
+ In Judaism, it is recommended to use salted bread, or to add salt to the bread if it is unsalted, when making Kiddush for Shabbat. It is customary to spread some salt over the bread or to dip the bread in a little salt when passing the bread around the table after the Kiddush.[97] To preserve the covenant between their people and God, Jews dip the Sabbath bread in salt.[89]
94
+
95
+ In Wicca, salt is symbolic of the element Earth. It is also believed to cleanse an area of harmful or negative energies. A dish of salt and a dish of water are almost always present on an altar, and salt is used in a wide variety of rituals and ceremonies.[98]
96
+
en/534.html.txt ADDED
@@ -0,0 +1,171 @@
1
+
2
+
3
+ Scrooge McDuck is a fictional character created in 1947 by Carl Barks for The Walt Disney Company, appearing in Disney comics. Scrooge is an elderly Scottish anthropomorphic Pekin duck with a yellow-orange bill, legs, and feet. He typically wears a red or blue frock coat, top hat, pince-nez glasses, and spats. He is portrayed in animations as speaking with a Scottish accent. Originally intended to be used only once, Scrooge became one of the most popular characters in Disney comics, and Barks' signature work.[6]
4
+
5
+ Named after Ebenezer Scrooge from the 1843 novella A Christmas Carol, Scrooge is an incredibly wealthy business magnate and self-proclaimed "adventure-capitalist" whose dominant character traits are his wealth, his thrift, and his tendency to seek more wealth through adventure. He is the maternal uncle of Donald Duck, the maternal grand-uncle of Huey, Dewey, and Louie, and a usual financial backer of Gyro Gearloose. Within the context of the fictional Duck universe, he is the world's richest person.[7] He is portrayed as an oil tycoon, businessman, industrialist, and owner of the largest mining concerns and many factories. His "Money Bin" — and indeed Scrooge himself — are often used as humorous metonyms for great wealth in popular culture around the world.
6
+
7
+ McDuck was initially characterized as a greedy miser and antihero (as Charles Dickens' original Scrooge was), but in later appearances he has often been portrayed as a thrifty hero, adventurer and explorer. He was originally created by Barks as an antagonist for Donald Duck, first appearing in the 1947 Four Color story Christmas on Bear Mountain (#178). However, McDuck's popularity grew so large that he became a major figure of the Duck universe. In 1952 he was given his own comic book series, called Uncle Scrooge, which still runs today.
8
+
9
+ Scrooge was most famously drawn by his creator Carl Barks, and later by Don Rosa. Like other Disney franchise characters, Scrooge McDuck's international popularity has resulted in literature that is often translated into other languages. Comics have remained Scrooge's primary medium, although he has also appeared in animated cartoons, most extensively in the television series DuckTales (1987–1990) and its reboot (2017–), in both of which he is the main protagonist.
10
+
11
+ Scrooge McDuck, maternal uncle of previously established character Donald Duck, made his first named appearance in the story Christmas on Bear Mountain which was published in Dell's Four Color Comics #178, December 1947, written and drawn by artist Carl Barks. His appearance may have been based on a similar-looking, Scottish "thrifty saver" Donald Duck character from the 1943 propaganda short The Spirit of '43.[8]
12
+
13
+ In Christmas on Bear Mountain,[9] Scrooge was a bearded, bespectacled, reasonably wealthy old duck, visibly leaning on his cane, and living in isolation in a "huge mansion".[10] Scrooge's misanthropic thoughts in this first story are quite pronounced: "Here I sit in this big lonely dump, waiting for Christmas to pass! Bah! That silly season when everybody loves everybody else! A curse on it! Me—I'm different! Everybody hates me, and I hate everybody!"[10]
14
+
15
+ Barks later reflected, "Scrooge in 'Christmas on Bear Mountain' was only my first idea of a rich, old uncle. I had made him too old and too weak. I discovered later on that I had to make him more active. I could not make an old guy like that do the things I wanted him to do."[11]
16
+
17
+ Barks would later claim that he originally only intended to use Scrooge as a one-shot character, but then decided Scrooge (and his fortune) could prove useful for motivating further stories. Barks continued to experiment with Scrooge's appearance and personality over the next four years.
18
+
19
+ Scrooge's second appearance, in The Old Castle's Secret[12] (first published in June 1948), had Scrooge recruiting his nephews to search for a family treasure hidden in Dismal Downs, the McDuck family's ancestral castle, built in the middle of Rannoch Moor in Scotland. Foxy Relations (first published in November 1948) was the first story where Scrooge is called by his title and catchphrase "The Richest Duck in the World".
20
+
21
+ The story, Voodoo Hoodoo, first published in Dell's Four Color Comics #238, August 1949, was the first story to hint at Scrooge's past with the introduction of two figures from it. The first was Foola Zoola, an old African sorcerer and chief of the Voodoo tribe who had cursed Scrooge, seeking revenge for the destruction of his village and the taking of his tribe's lands by Scrooge decades ago.
22
+
23
+ Scrooge privately admitted to his nephews that he had used an army of "cutthroats" to get the tribe to abandon their lands, in order to establish a rubber plantation. The event was placed by Carl Barks in 1879 during the story, but it would later be retconned by Don Rosa to 1909 to fit with Scrooge's later-established personal history.
24
+
25
+ The second figure was Bombie the Zombie, the instrument of the sorcerer's curse and revenge. He had reportedly sought Scrooge for decades before reaching Duckburg, mistaking Donald for Scrooge.
26
+
27
+ Barks, with a note of skepticism often found in his stories, explained the zombie as a living person who has never died, but has somehow gotten under the influence of a sorcerer. Although some scenes of the story were intended as a parody of Bela Lugosi's White Zombie, the story is the first to not only focus on Scrooge's past but also touch on the darkest aspects of his personality.
28
+
29
+ Trail of the Unicorn,[13] first published in February 1950, introduced Scrooge's private zoo. One of his pilots had managed to photograph the last living unicorn, which lived in the Indian part of the Himalayas. Scrooge offered a reward to competing cousins Donald Duck and Gladstone Gander, which would go to the one who captured the unicorn for Scrooge's collection of animals.
30
+
31
+ This was also the story that introduced Scrooge's private airplane. Barks would later establish Scrooge as an experienced aviator. Donald had previously been shown as a skilled aviator, as was Flintheart Glomgold in later stories. In comparison, Huey, Dewey, and Louie were depicted as only having taken flying lessons in the story Frozen Gold (published in January 1945).
32
+
33
+ The Pixilated Parrot, first published in July 1950, introduced the precursor to Scrooge's money bin; in this story, Scrooge's central office building is said to contain "three cubic acres of money". Two nameless burglars who briefly appear during the story are considered to be the precursors of the Beagle Boys.[14]
34
+
35
+ The Magic Hourglass, first published in September 1950, was arguably the first story to change the focus of the Duck stories from Donald to Scrooge. During the story, several themes were introduced for Scrooge.
36
+
37
+ Donald first mentions in this story that his uncle practically owns Duckburg, a statement that Scrooge's rival John D. Rockerduck would later put in dispute. Scrooge first hints that he was not born into wealth, as he remembers buying the Hourglass in Morocco when he was serving as a cabin boy in a ship's crew. It is also the first story in which Scrooge mentions speaking another language besides his native English and reading other alphabets besides the Latin alphabet, as during the story, he speaks Arabic and reads the Arabic alphabet.
38
+
39
+ The latter theme would be developed further in later stories. Barks and current Scrooge writer Don Rosa have depicted Scrooge as being fluent in Arabic, Dutch, German, Mongolian, Spanish, Mayan, Bengali, Finnish, and a number of Chinese dialects. Scrooge acquired this knowledge from years of living or traveling to the various regions of the world where those languages are spoken. Later writers would depict Scrooge having at least working knowledge of several other languages.
40
+
41
+ Scrooge was shown in The Magic Hourglass in a more positive light than in previous stories, but his more villainous side is present too. Scrooge is seen in this story attempting to reacquire a magic hourglass that he gave to Donald, before finding out that it acted as a protective charm for him. Scrooge starts losing one billion dollars each minute, and comments that he will go bankrupt within 600 years. This line is a parody of Orson Welles's line in Citizen Kane "You know, Mr. Thatcher, at the rate of a million dollars a year, I'll have to close this place in ... 60 years".[15] To convince his nephews to return it, he pursues them throughout Morocco, where they had headed to earlier in the story. Memorably during the story, Scrooge interrogates Donald by having him tied up and tickled with a feather in an attempt to get Donald to reveal the hourglass's location. Scrooge finally manages to retrieve it, exchanging it for a flask of water, as he had found his nephews exhausted and left in the desert with no supplies. As Scrooge explains, he intended to give them a higher offer, but he just could not resist having somebody at his mercy without taking advantage of it.
42
+
43
+ A Financial Fable, first published in March 1951, had Scrooge teaching Donald some lessons in productivity as the source of wealth, along with the laws of supply and demand. Perhaps more importantly, it was also the first story where Scrooge observes how diligent and industrious Huey, Louie and Dewey are, making them more similar to himself rather than to Donald. Donald in Barks's stories is depicted as working hard on occasion, but given the choice often proves to be a shirker. The three younger nephews first side with Scrooge rather than Donald in this story, with the bond between granduncle and grandnephews strengthening in later stories. However, there have been rare instances where Donald proved invaluable to Scrooge, such as when the group traveled back in time to Ancient Egypt to retrieve a pharaoh's papyrus. Donald cautions against taking it with him, as no one would believe the story unless it was unearthed. Donald then buries it and makes a marking point from the Nile River, making Scrooge think to himself admiringly, "Donald must have swallowed the Encyclopædia Britannica!"
44
+
45
+ Terror of the Beagle Boys, first published in November 1951, introduced the readers to the Beagle Boys, although Scrooge in this story seems to be already familiar with them. The Big Bin on Killmotor Hill introduced Scrooge's money bin, built on Killmotor Hill in the center of Duckburg.
46
+
47
+ By this point, Scrooge had become familiar to readers in the United States and Europe. Other Disney writers and artists besides Barks began using Scrooge in their own stories, including Italian writer Romano Scarpa. Western Publishing, the then-publisher of the Disney comics, started thinking about using Scrooge as a protagonist rather than a supporting character, and decided to launch Scrooge in his own self-titled comic. Uncle Scrooge #1, featuring the story Only a Poor Old Man, was published in March 1952. This story and Back to the Klondike, first published a year later in March 1953, became the biggest influences on how Scrooge's character, past, and beliefs would come to be defined.
48
+
49
+ After this point, Barks produced most of his longer stories in Uncle Scrooge, with a focus mainly on adventure, while his ten-page stories for Walt Disney's Comics and Stories continued to feature Donald as the star and focused on comedy. In Scrooge's stories, Donald and his nephews were cast as Scrooge's assistants, who accompanied Scrooge in his adventures around the world. This change of focus from Donald to Scrooge was also reflected in stories by other contemporary writers. Since then, Scrooge has remained a central figure of the Duck comics' universe, hence the coining of the term "Scrooge McDuck Universe".
50
+
51
+ After Barks's retirement, the character continued under other artists. In 1972, Barks was persuaded to write more stories for Disney. He wrote Junior Woodchuck stories where Scrooge often plays the part of the villain, closer to the role he had before he acquired his own series. Under Barks, Scrooge always was a malleable character who would take on whatever persona was convenient to the plot.
52
+
53
+ The Italian writer and artist Romano Scarpa made several additions to Scrooge McDuck's universe, including characters such as Brigitta McBridge, Scrooge's self-styled fiancée, and Gideon McDuck, a newspaper editor who is Scrooge's brother. Those characters have appeared mostly in European comics, as is also the case for Scrooge's rival John D. Rockerduck (created by Barks for just one story) and Donald's cousin Fethry Duck, who sometimes works as a reporter for Scrooge's newspaper.
54
+
55
+ Another major development was the arrival of writer and artist Don Rosa in 1986 with his story "The Son of the Sun", released by Gladstone Publishing and nominated for a Harvey Award, one of the comics industry's highest honors. Rosa has said in interviews that he considers Scrooge to be his favorite Disney character. Unlike most other Disney writers, Don Rosa considered Scrooge a historical character whose Disney adventures occurred in the fifties and sixties and ended (in his undepicted death[16]) in 1967, when Barks retired. He considered only Barks' stories canonical, and fleshed out a timeline as well as a family tree based on Barks' stories. Eventually he wrote and drew The Life and Times of Scrooge McDuck, a full history in twelve chapters which received an Eisner Award in 1995. Later editions included additional chapters. Under Rosa, Scrooge became more ethical; while he never cheats, he ruthlessly exploits any loopholes. He owes his fortune to his hard work, and his money bin is "full of souvenirs", since every coin reminds him of a specific circumstance. Rosa remains the foremost contemporary duck artist and was nominated for five 2007 Eisner Awards. His work is regularly reprinted both on its own and alongside the Barks stories for which he created sequels.
56
+
57
+ Daan Jippes, who can mimic Barks's art to a close extent, repenciled all of Barks's 1970s Junior Woodchucks stories, as well as Barks' final Uncle Scrooge stories, from the 1990s to the early 2000s. Other notable Disney artists who have worked with the Scrooge character include Michael Peraza, Marco Rota, William Van Horn, and Tony Strobl.
58
+
59
+ In a 1992 interview with the Norwegian newspaper Aftenposten, Don Rosa said that "in the beginning Scrooge [owed] his existence to his nephew Donald, but that has changed and today it's Donald that [owes] his existence to Scrooge", adding that this is one of the reasons why he is so interested in Scrooge.
60
+
61
+ The character is almost exclusively portrayed as having worked his way up the financial ladder from humble immigrant roots. The real life of Andrew Carnegie, a Scottish-American immigrant and tycoon of the Industrial Age, and the fictional character of Charles Dickens' miser Ebenezer Scrooge are both believed to be strong influences on Scrooge's characterization.
62
+
63
+ The comic book series The Life and Times of Scrooge McDuck, written and drawn by Don Rosa, shows that Scrooge, as a young boy, took up a job polishing and shining boots in his native Glasgow. A pivotal moment comes in 1877, when a ditchdigger pays him with an 1875 US dime, which was useless as currency in 19th-century Glasgow; Scrooge fails to notice what sort of coin he has been given until after the man has left. Enraged, Scrooge vows never to be taken advantage of again, and to be "sharper than the sharpies and smarter than the smarties". At the age of 13, he takes a position as cabin boy on a Clyde cattle ship bound for the United States, intending to make his fortune. In 1898, after many adventures, he finally ends up in the Klondike, where he finds a golden rock the size of a goose's egg. By the following year he has made his first $1,000,000 and bought the deed for Killmule Hill from Casey Coot, the son of Clinton Coot and grandson of Cornelius Coot, the founder of Duckburg. He finally settles in Duckburg in 1902. After some dramatic events in which he faces both the Beagle Boys and President Roosevelt and his "Rough Riders" at the same time, he tears down the rest of the old Fort Duckburg and builds his famous Money Bin on the site.
64
+
65
+ In the years that follow, Scrooge travels all around the world to increase his fortune, while his family remains behind to manage the Money Bin. When Scrooge finally returns to Duckburg, he is the richest duck in the world, rivaled only by Flintheart Glomgold, John D. Rockerduck and, less prominently, the maharaja of the fictional country Howdoyoustan (a play on Hindustan). His experiences, however, have changed him into a hostile miser, and he drives his own family away. Some 12 years later, he closes his business empire down, but five years after that he returns to public life and restarts his business.
66
+
67
+ He keeps the majority of his wealth in a massive Money Bin overlooking the city of Duckburg. In the short Scrooge McDuck and Money, he remarks to his nephews that this money is "just petty cash". In the Dutch and Italian versions he regularly forces Donald and his nephews to polish the coins one by one in order to pay off Donald's debts; Scrooge will not pay them much for this lengthy, tedious, backbreaking work. As far as he is concerned, even 5 cents an hour is too much expenditure.
68
+
69
+ A shrewd businessman and noted tightwad, he is fond of diving into and swimming in his money, without injury. He is also the richest member of The Billionaires Club of Duckburg, a society which includes the most successful businessmen of the world and allows them to keep connections with each other. Glomgold and Rockerduck are also influential members of the Club. His most famous prized possession is his Number One Dime.
70
+
71
+ The sum of Scrooge's wealth is unclear.[17] According to Barks' The Second Richest Duck, as noted by a Time article, Scrooge is worth "one multiplujillion, nine obsquatumatillion, six hundred twenty-three dollars and sixty-two cents".[18] In the DuckTales episode "Liquid Assets", Fenton Crackshell (Scrooge's accountant) notes that McDuck's money bin contains "607 tillion 386 zillion 947 trillion 522 billion dollars and 36 cents". Don Rosa's Life and Times of Scrooge McDuck notes that Scrooge's fortune amounts to "five multiplujillion, nine impossibidillion, seven fantastica trillion dollars and sixteen cents". In the story "Letter to Santa" from Walt Disney's Christmas Parade No. 1 (published in 1949), a thought bubble shows Scrooge, sitting in his car with his chauffeur, musing: "What's the use of having 'eleven octillion dollars' if I don't make a big noise about it?" In DuckTales the Movie: Treasure of the Lost Lamp, Scrooge mentions "We quadzillionaires have our own ideas of fun." In the first episode of the 2017 DuckTales series, Scrooge states that he runs "a multi-trillion dollar business".
72
+
73
+ Forbes magazine has occasionally tried to estimate Scrooge's wealth in real terms; in 2007, the magazine estimated his wealth at $28.8 billion;[19] in 2011, it rose to $44.1 billion due to the rise in gold prices.[20] Another, more in-depth analysis of Scrooge's wealth was done by MatPat of The Film Theory channel on YouTube. Using four different methodologies to calculate the volume of actual gold in Scrooge's money bin (depth gauge, ladder length, blueprints, and "3 cubic acres"), the four estimates, running from most conservative to "more money than the entire planet Earth", were: $52,348,493,767.50 (depth gauge), $239,307,400,080 (ladder), $12,434,013,552,490 (blueprints) and $333,927,633,863,527 (3 cubic acres), with each valuation based on a then-current gold price of $1,243.30 per troy ounce.[21] Whatever the amount, Scrooge never considers it to be enough; he believes that he has to continue to earn money by any means possible. A running gag is Scrooge always making a profit on any business deal.[22]
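+
+ The price-per-volume arithmetic behind such estimates is easy to reproduce. The sketch below is only an illustration, not the video's methodology: it takes the gold price quoted above, adds two standard physical constants (the density of gold and the grams in a troy ounce), and assumes a solid block of gold, ignoring the air gaps between loose coins:
+
+ GOLD_PRICE_USD_PER_OZT = 1243.30   # gold price quoted above, per troy ounce
+ GOLD_DENSITY_G_PER_M3 = 19.32e6    # ~19,320 kg per cubic metre of solid gold
+ GRAMS_PER_TROY_OUNCE = 31.1035
+
+ def gold_value_usd(volume_m3: float) -> float:
+     """Dollar value of a given volume of solid gold at the price above."""
+     grams = volume_m3 * GOLD_DENSITY_G_PER_M3
+     return grams / GRAMS_PER_TROY_OUNCE * GOLD_PRICE_USD_PER_OZT
+
+ print(f"${gold_value_usd(1.0):,.0f}")   # roughly $772 million per cubic metre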
74
+
75
+ Scrooge never completed a formal education, as he left school at an early age. However, he has a sharp mind and is always ready to learn new skills. Because of his secondary occupation as a treasure hunter, Scrooge has become something of a scholar and an amateur archaeologist. Starting with Barks, several writers have explained how Scrooge becomes aware of the treasures he decides to pursue. This often involves periods of research consulting various written sources in search of passages that might lead him to a treasure. Often Scrooge decides to search for the possible truth behind old legends, or discovers obscure references to the activities of ancient conquerors, explorers and military leaders that he considers interesting enough to begin a new expedition.
76
+
77
+ As a result of his research, Scrooge has built up an extensive personal library, which includes many rare tomes. In Barks's and Rosa's stories, among the prized pieces of this library is an almost complete collection of Spanish and Dutch naval logs of the 16th and 17th centuries. Their references to the fates of other ships have often allowed Scrooge to locate sunken ships and recover their treasures from their watery graves. Mostly self-taught as he is, Scrooge is a firm believer in the saying "knowledge is power". Scrooge is also an accomplished linguist and entrepreneur, having learned to speak several different languages during his business trips around the world, selling refrigerators to Eskimos, wind to windmill manufacturers in the Netherlands, etc.
78
+
79
+ Both as a businessman and as a treasure hunter, Scrooge is noted for his drive to set new goals and face new challenges.[23] As Carl Barks described his character, for Scrooge there is "always another rainbow". The phrase later provided the title for one of Barks's better-known paintings depicting Scrooge. Periods of inactivity between adventures and lack of serious challenges tend to be depressing for Scrooge after a while; some stories see these phases take a toll on his health. Scrooge's other motto is "Work smarter, not harder."
80
+
81
+ As a businessman, Scrooge often resorts to aggressive tactics and deception. He seems to have gained significant experience in manipulating people and events towards his own ends. As often seen in stories by writer Guido Martina and occasionally by others, Scrooge is noted for his cynicism, especially towards ideals of morality when it comes to business and the pursuit of set goals. This has been noted by some as not being part of Barks's original profile of the character, but has since come to be accepted as one valid interpretation of Scrooge's way of thinking.
82
+
83
+ Scrooge seems to have a personal code of honesty that offers him an amount of self-control. He can often be seen contemplating the next course of action, divided between adopting a ruthless pursuit of his current goal against those tactics he considers more honest. At times, he can sacrifice his goal in order to remain within the limits of this sense of honesty. Several fans of the character have come to consider these depictions as adding to the depth of his personality, because based on the decisions he takes Scrooge can be both the hero and the villain of his stories. This is one thing he has in common with his nephew Donald. Scrooge's sense of honesty also distinguishes him from his rival Flintheart Glomgold, who places no such self-limitations. During the cartoon series DuckTales, at times he would be heard saying to Glomgold, "You're a cheater, and cheaters never prosper!"
84
+
85
+ Like his nephew Donald, Scrooge has a temper (though not as strong as his nephew's) and rarely hesitates to use cartoon violence against those who provoke his ire (often his nephew Donald, but also bill and tax collectors as well as door-to-door salesmen); however, he seems to be against the use of lethal force. On occasion, he has even saved the lives of enemies who had threatened his own life but were in danger of losing their own. According to Scrooge's own explanation, this is to save himself from feelings of guilt over their deaths, and he generally expects no gratitude from them. Scrooge has also opined that only in fairy tales do bad people turn good, and that he is old enough to not believe in fairy tales. Scrooge believes in keeping his word: never breaking a promise once given.[24] In Italian-produced stories of the 1950s to 1970s, however, particularly those written by Guido Martina, Scrooge often acts differently than in American or Danish comics productions.
86
+
87
+ Carl Barks gave Scrooge a definite set of ethics which were in tone with the time he was supposed to have made his fortune. The robber barons and industrialists of the 1890–1920s era were McDuck's competition as he earned his fortune. Scrooge proudly asserts "I made it by being tougher than the toughies and smarter than the smarties! And I made it square!" Barks's creation is averse to dishonesty in the pursuit of wealth. When Disney filmmakers first contemplated a Scrooge feature cartoon in the fifties, the animators had no understanding of the Scrooge McDuck character and merely envisioned Scrooge as a duck version of Ebenezer Scrooge—a very unsympathetic character. In the end they shelved the idea because a duck who gets all excited about money just was not funny enough.
88
+
89
+ In an interview, Barks summed up his beliefs about Scrooge and capitalism:
90
+
91
+ I've always looked at the ducks as caricatured human beings. In rereading the stories, I realized that I had gotten kind of deep in some of them: there was philosophy in there that I hadn't realized I was putting in. It was an added feature that went along with the stories. I think a lot of the philosophy in my stories is conservative—conservative in the sense that I feel our civilization peaked around 1910. Since then we've been going downhill. Much of the older culture had basic qualities that the new stuff we keep hatching can never match.
92
+
93
+ Look at the magnificent cathedrals and palaces that were built. Nobody can build that sort of thing nowadays. Also, I believe that we should preserve many old ideals and methods of working: honor, honesty, allowing other people to believe in their own ideas, not trying to force everyone into one form. The thing I have against the present political system is that it tries to make everybody exactly alike. We should have a million different patterns.
94
+
95
+ They say that wealthy people like the Vanderbilts and Rockefellers are sinful because they accumulated fortunes by exploiting the poor. I feel that everybody should be able to rise as high as they can or want to, provided they don't kill anybody or actually oppress other people on the way up. A little exploitation is something you come by in nature. We see it in the pecking order of animals—everybody has to be exploited or to exploit someone else to a certain extent. I don't resent those things.[25]
96
+
97
+ In the DuckTales series, Scrooge has adopted the nephews (as Donald has joined the Navy and is away on his tour of duty), and as a result his darker personality traits are downplayed. While most of his persona remains intact from the comics, he is notably more optimistic and level-headed in the animated cartoon. In an early episode, Scrooge credits his improved temperament to the nephews and Webby (his housekeeper's granddaughter, who comes to live in Scrooge's mansion), saying that "for the first time since I left Scotland, I have a family". Though Scrooge is far from tyrannical in the comics, he is rarely so openly affectionate. While he still hunts for treasure in DuckTales, many episodes focus on his attempts to thwart villains. He remains just as tightfisted with money as he has always been, but he is also affable and patient with his family and friends.
98
+
99
+ Scrooge displays a strict code of honor, insisting that the only valid way to acquire wealth is to "earn it square," and he goes to great lengths to thwart those (sometimes even his own nephews) who gain money dishonestly. This code also prevents him from ever being dishonest himself, and he avows that "Scrooge McDuck's word is as good as gold." He also expresses great disgust at being viewed by others as a greedy liar and cheater.
100
+
101
+ The series fleshes out Scrooge's upbringing by depicting him as an individual who worked hard his entire life to earn his keep and fiercely defends it against those who are truly dishonest, while also defending his family and friends from any dangers, including villains. His values teach his nephews not to be dishonest with him or anybody else. It is shown that money is no longer the most important thing in his life. In one episode, he falls under a love spell, which causes him to lavish his time on a goddess over everything else. The nephews find out that the only way to break the spell is to make the person realize that the object of their love will cost them something they truly love. The boys make it appear that Scrooge's love is allergic to money; however, he simply decides to give up his wealth so he can be with her. Later, when he realizes he will have to give up his nephews to be with her, the spell is immediately broken, showing that family is the most important thing to him.
102
+
103
+ On occasion, he demonstrates considerable physical strength by single-handedly beating bigger foes. He credits his robustness to "lifting money bags."
104
+
105
+ Another part of Scrooge's persona is his Scottish accent. Dallas McKennon was the first actor to provide Scrooge's voice for the 1960 Disneyland Records album, Donald Duck and His Friends.
106
+
107
+ When Scrooge later made his speaking animated debut in Scrooge McDuck and Money in 1967, he was voiced by Bill Thompson. Thompson had previously voiced Jock the Scottish Terrier in Lady and the Tramp and, according to Alan Young, Thompson had some Scottish ancestry.[26] Following Scrooge McDuck and Money's release, Scrooge made no further animated appearances prior to Thompson's death in 1971.
108
+
109
+ In 1974, Disneyland Records produced the album, An Adaptation of Dickens' Christmas Carol, Performed by The Walt Disney Players. Alan Young belonged to a Dickens Society and was asked to help adapt the story to fit in the classic Disney characters.[27] Young, whose parents were Scottish and who lived in Scotland for a few years when he was an infant,[28] voiced Scrooge for this record in addition to voicing Mickey Mouse and Merlin from The Sword in the Stone. When Disney decided to adapt the record into the 1983 theatrical short, Mickey's Christmas Carol, Young returned to voice Scrooge. Young remained as Disney's official voice for Scrooge until his death in 2016, although Will Ryan voiced Scrooge for the 1987 television special, Sport Goofy in Soccermania and Alan Reid voiced Scrooge for Tuomas Holopainen's 2014 album, Music Inspired by the Life and Times of Scrooge. Young's last performance as Scrooge was in the 2016 Mickey Mouse short, "No".
110
+
111
+ Since Young's death, several actors have provided Scrooge's voice. John Kassir has voiced Scrooge for the Mickey Mouse shorts starting with "Duck the Halls" in 2016. Eric Bauza voiced Scrooge for a cameo in the television series, Legend of the Three Caballeros. Scottish actor Enn Reitel voices Scrooge for Disney park appearances as well as in the English dub of Kingdom Hearts III.
112
+
113
+ David Tennant voices Scrooge for the 2017 reboot of DuckTales. According to executive producer Matt Youngberg:
114
+
115
+ David Tennant seemed to be the natural choice for this. We really wanted to find somebody who was legitimately Scottish. We thought that was really important in this iteration, someone who had the character to bring this icon alive. And David is an amazing actor. He’s morphed into this role in an incredible way.
116
+
117
+ Many of the European comics based on the Disney Universe have created their own version of Scrooge McDuck, usually involving him in slapstick adventures. This is particularly true of the Italian comics which were very popular in the 1960s–1980s in most parts of Western continental Europe. In these, Scrooge is mainly an anti-hero dragging his long-suffering nephews into treasure hunts and shady business deals. Donald is a reluctant participant in these travels, only agreeing to go along when his uncle reminds him of the debts and back-rent Donald owes him, threatens him with a sword or blunderbuss, or offers a share of the loot. When he promises Donald a share of the treasure, Scrooge will add a little loophole in the terms which may seem obscure at first but which he brings up at the end of the adventure to deny Donald his share, keeping the whole for himself. After Donald risks life and limb – something which Scrooge shows little concern for – he tends to end up with nothing.
118
+
119
+ Another running joke is Scrooge reminiscing about his adventures while gold prospecting in the Klondike much to Donald and the nephews' chagrin at hearing the never-ending and tiresome stories.
120
+
121
+ According to Carl Barks' 1955 one-pager "Watt an Occasion" (Uncle Scrooge #12), Scrooge is 75 years of age. According to Don Rosa, Scrooge was born in Scotland in 1867, and earned his Number One Dime (or First Coin) exactly ten years later. The DuckTales episodes (and many European comics) show a Scrooge who hailed from Scotland in the 19th century, yet was clearly familiar with all the technology and amenities of the 1980s. Despite this extremely advanced age, Scrooge does not appear to be on the verge of dotage, and is vigorous enough to keep up with his nephews in adventures; with rare exception, there appears to be no sign of him slowing down.
122
+
123
+ Barks responded to some fan letters asking about Scrooge's advanced age by noting that in the story "That's No Fable!", when Scrooge drank water from a Fountain of Youth for several days, rather than making him young again (bodily contact with the water was required for that), ingesting the water rejuvenated his body and cured him of his rheumatism, which arguably allowed Scrooge to live beyond his expected years with no sign of slowdown or senility. Don Rosa's solution to the issue of Scrooge's age is that he set all of his stories in the 1950s or earlier, which was when he himself discovered and reveled in Barks' stories as a kid; in his unofficial timelines, he had Scrooge die in 1967, at the age of 100.[29][30]
124
+
125
+ In the 15th episode of the 2017 DuckTales reboot, "The Golden Lagoon of White Agony Plains!", it is revealed that Scrooge was "stuck in a timeless demon dimension" called Demogorgana for an unknown amount of time, which is used to explain his young look.[31] In the 19th episode, "The Other Bin of Scrooge McDuck!", Webby Vanderquack's research on Scrooge reveals that he was born in 1867.[32]
126
+
127
+ Forbes magazine routinely lists Scrooge McDuck on its annual "Fictional 15" list of the richest fictional characters by net worth.
128
+
129
+ Grupo Ronda S.A. has held the license to use the character, as well as other Disney characters, in the board game Tío Rico Mc. Pato from 1972 to the present. It is one of the most popular board games in Colombia and the direct competitor of Monopoly in the region.[41]
130
+
131
+ In tribute to its famous native, Glasgow City Council added Scrooge to its list of "Famous Glaswegians" in 2007, alongside the likes of Billy Connolly and Charles Rennie Mackintosh.[42]
132
+
133
+ In 2008 The Weekly Standard parodied the bailout of the financial markets by publishing a memo where Scrooge applies to the TARP program.[43]
134
+
135
+ An extortionist named Arno Funke targeted German department store chain Karstadt from 1992 until his capture in 1994, under the alias "Dagobert", the German (first) name for Scrooge McDuck.[44]
136
+
137
+ In the Family Guy episode "Lottery Fever", Peter injures himself trying to dive into a pile of coins like Scrooge McDuck.
138
+
139
+ In "Buried", a 2013 episode of Breaking Bad, Saul Goodman's associate Patrick Kuby remarks to fellow associate Huell Babineaux, "we are here to do a job, not channel Scrooge McDuck", when Huell lies down on Walter White's pile of cash in a storage facility locker.
140
+
141
+ Dagobertducktaks ("Dagobert Duck" is the Dutch name for Scrooge McDuck), a tax on the wealthy, was voted Dutch word of the year 2014 in a poll by Van Dale.[45][46]
142
+
143
+ In August 2017, the YouTube channel "The Film Theorists", hosted by Matthew "MatPat" Patrick, estimated the worth of the gold coins in Scrooge McDuck's money bin based on four sources, with the lowest estimate equaling $52,348,493,767.50 and the highest (based on "three cubic acres") equaling $333,927,633,863,527.10 in gold value.[47]
144
+
145
+ The popularity of Scrooge McDuck comics spawned an entire mythology around the character, including new supporting characters, adventures, and life experiences as told by numerous authors. The popularity of the Duck universe – the fandom term for the associated intellectual properties that have developed from Scrooge's stories over the years, including the city of Duckburg – has led Don Rosa to claim that "in the beginning Scrooge [owed] his existence to his nephew Donald, but that has changed and today it's Donald that [owes] his existence to Scrooge."
146
+
147
+ In addition to the many original and existing characters in stories about Scrooge McDuck, authors have frequently had historical figures meet Scrooge over the course of his life. Most notably, Scrooge has met US President Theodore Roosevelt at least three times: in the Dakotas in 1883, in Duckburg in 1902, and in Panama in 1906. See Historical Figures in Scrooge McDuck stories.
148
+
149
+ Based on writer Don Rosa's The Life and Times of Scrooge McDuck, a popular timeline chronicling the most important "facts" of Scrooge's life was created. See Scrooge McDuck timeline according to Don Rosa.[citation needed]
150
+
151
+ In 2014, composer Tuomas Holopainen of Nightwish released a conceptual album based on the book, The Life and Times of Scrooge McDuck. The album is titled Music Inspired by the Life and Times of Scrooge. Don Rosa illustrated the cover artwork for the album.[48]
152
+
153
+ The character of Scrooge has appeared in various media aside from comic books. Scrooge's voice was first heard on the 1960 record album Donald Duck and His Friends, where Dal McKennon voiced the character. It took the form of a short dramatization called "Uncle Scrooge's Rocket to the Moon", a story of how Scrooge builds a rocket to send all his money to the moon to protect it from the Beagle Boys.[49] In 1961, this story was reissued as a 45 rpm single entitled "Donald Duck and Uncle Scrooge's Money Rocket".
154
+
155
+ Initially, Scrooge was to make his animated debut in the Donald Duck theatrical cartoons. Late in 1954, Carl Barks was asked by the Disney Studios whether he would be free to write a script for a seven-minute Scrooge McDuck animated cartoon.[50] Scrooge was a huge success in the comic books at the time, and Disney wanted to introduce the miserly duck to theater audiences as well. Barks supplied the studios with a detailed nine-page script telling the story of the happy-go-lucky Donald Duck working for the troubled Scrooge, who tries to save his money from a hungry rat.[51] Barks also sent a number of sketches of his ideas for the short, including a money-sorting machine he had already used on the cover of one of the Uncle Scrooge issues.[52] The script was never used, as Disney soon after decided to concentrate on TV shows instead.
156
+
157
+ Scrooge's first appearance in animated form (save for a brief Mickey Mouse Club television series cameo[53]) was in Disney's 1967 theatrical short Scrooge McDuck and Money (voiced by Bill Thompson), in which he teaches his nephews basic financial tips.[54]
158
+
159
+ In 1974, Disneyland Records released an adaptation of the Charles Dickens classic A Christmas Carol. Eight years later, Walt Disney Animation Studios decided to make a featurette of the same story, this time dubbed Mickey's Christmas Carol (1983). Scrooge also appeared as himself in the television special Sport Goofy in Soccermania (1987).
160
+
161
+ Scrooge's biggest role outside comics came in the 1987 animated series DuckTales, a series loosely based on Carl Barks's comics, for which Alan Young returned to voice him. The series premiered with a two-hour special on September 18, 1987, with regular episodes beginning three days later. In it, Scrooge becomes the legal guardian of Huey, Dewey and Louie when Donald joins the United States Navy. Scrooge's DuckTales persona is considerably mellower than most of his previous appearances; his aggression is played down, and his often duplicitous personality is reduced in many episodes to that of a curmudgeonly but well-meaning old uncle. Still, there are flashes of Barks' Scrooge to be seen, particularly in early episodes of the first season. Scrooge also appeared in DuckTales the Movie: Treasure of the Lost Lamp, released during the series' run. He was mentioned in the Darkwing Duck episode "Tiff of the Titans", but never actually seen.
162
+
163
+ He has appeared in some episodes of Raw Toonage, two shorts of Mickey Mouse Works and several episodes (notably "House of Scrooge") of Disney's House of Mouse, as well as the direct-to-video films Mickey's Once Upon a Christmas and Mickey's Twice Upon a Christmas. His video game appearances include the three DuckTales releases (DuckTales, DuckTales 2, and DuckTales: The Quest for Gold) and Toontown Online, where he is the accidental creator of the Cogs. Additionally, he is a secret playable character in the 2008 quiz game Disney TH!NK Fast. In the 2012 Nintendo 3DS game Epic Mickey: Power of Illusion, he is one of the first characters Mickey rescues, running a shop in the fortress that sells upgrades and serving as a Sketch summon in which he uses his cane pogo stick from the DuckTales NES games.
164
+
165
+ Scrooge also makes sporadic appearances in Disney's and Square Enix's Kingdom Hearts series, helping Mickey Mouse establish a world transit system and expanding his own business empire to other worlds. He first appears in Kingdom Hearts II as a minor non-playable character in Hollow Bastion, where he is trying to recreate his favorite ice cream flavor – sea-salt.[55] Scrooge later appears in the prequel, Kingdom Hearts: Birth by Sleep, this time with a speaking role. He works on establishing an ice-cream business in Radiant Garden and gives Ventus three passes to the Dream Festival in Disney Town. Scrooge returns in Kingdom Hearts III, now managing a bistro in Twilight Town with the help of Remy from Ratatouille. Alan Young reprises the role in the English version of Birth by Sleep, while Enn Reitel voices the character in III.
166
+
167
+ Scrooge has appeared in the Boom! Studios Darkwing Duck comic, playing a key role at the end of its initial story, "The Duck Knight Returns". He later returns in the final story arc, "Dangerous Currency", in which he teams up with Darkwing Duck to stop the Phantom Blot and Magica De Spell from taking over St. Canard and Duckburg.
168
+
169
+ In 2015, Scrooge appeared in the Mickey Mouse short "Goofy's First Love", in which Mickey and Donald try to help Goofy find love. Donald suggests money, and they head to Scrooge's mansion, where Donald tells his uncle that Goofy needs a million dollars; Scrooge has his butler kick them out. When Goofy is inadvertently launched from a treadmill and catapulted off another building, he lands in Scrooge's mansion, and the butler throws him out too; the process repeats itself, but this time Mickey and Donald are catapulted in as well and ejected by the butler. Scrooge is seen at the end attending Goofy's wedding with a sandwich. In the 2016 Mickey Mouse Christmas special "Duck the Halls", produced after Young's death, John Kassir took over voicing Scrooge McDuck, and he continues to voice the character in subsequent appearances in this series, although he later tweeted that he would not be reprising the role in the DuckTales reboot. Scrooge makes a cameo appearance in the Legend of the Three Caballeros episode "Shangri-La-Di-Da".
170
+
171
+ In the 2017 DuckTales reboot, Scrooge is voiced by Scottish actor David Tennant.[56] The series shows that Scrooge previously adventured with his nephew Donald and his niece Della Duck, but when Della disappeared during an expedition to space, Donald blamed Scrooge and the two became estranged for ten years. Scrooge develops a pessimistic attitude about family as a result, until Donald re-enters his life with Huey, Dewey, and Louie, rekindling his spirit of adventure and appreciation of family; Scrooge invites them to live with him at McDuck Manor as they travel the world on adventures.
en/5340.html.txt ADDED
@@ -0,0 +1,96 @@
1
+
2
+
3
+
4
+
5
+ Salt is a mineral composed primarily of sodium chloride (NaCl), a chemical compound belonging to the larger class of salts; salt in its natural form as a crystalline mineral is known as rock salt or halite. Salt is present in vast quantities in seawater, where it is the main mineral constituent. The open ocean has about 35 grams (1.2 oz) of solids per liter of sea water, a salinity of 3.5%.
6
+
7
+ Salt is essential for life in general, and saltiness is one of the basic human tastes. Salt is one of the oldest and most ubiquitous food seasonings, and salting is an important method of food preservation.
8
+
9
+ Some of the earliest evidence of salt processing dates to around 6,000 BC, when people living in the area of present-day Romania boiled spring water to extract salts; a salt-works in China dates to approximately the same period. Salt was also prized by the ancient Hebrews, the Greeks, the Romans, the Byzantines, the Hittites, Egyptians, and the Indians. Salt became an important article of trade and was transported by boat across the Mediterranean Sea, along specially built salt roads, and across the Sahara on camel caravans. The scarcity and universal need for salt have led nations to go to war over it and use it to raise tax revenues. Salt is used in religious ceremonies and has other cultural and traditional significance.
10
+
11
+ Salt is processed from salt mines, and by the evaporation of seawater (sea salt) and mineral-rich spring water in shallow pools. Its major industrial products are caustic soda and chlorine; salt is used in many industrial processes including the manufacture of polyvinyl chloride, plastics, paper pulp and many other products. Of the annual global production of around two hundred million tonnes of salt, about 6% is used for human consumption. Other uses include water conditioning processes, de-icing highways, and agricultural use. Edible salt is sold in forms such as sea salt and table salt which usually contains an anti-caking agent and may be iodised to prevent iodine deficiency. As well as its use in cooking and at the table, salt is present in many processed foods.
12
+
13
+ Sodium is an essential nutrient for human health via its role as an electrolyte and osmotic solute.[1][2][3] Excessive salt consumption may increase the risk of cardiovascular diseases, such as hypertension, in children and adults. Such health effects of salt have long been studied. Accordingly, numerous world health associations and experts in developed countries recommend reducing consumption of popular salty foods.[3][4] The World Health Organization recommends that adults should consume less than 2,000 mg of sodium, equivalent to 5 grams of salt per day.[5]
14
+
15
+ All through history, the availability of salt has been pivotal to civilization. What is now thought to have been the first city in Europe, Solnitsata in Bulgaria, was a salt mine that provided the area now known as the Balkans with salt from 5400 BC.[6] Even the name Solnitsata means "salt works".
16
+
17
+ While people have used canning and artificial refrigeration to preserve food for the last hundred years or so, salt has been the best-known food preservative, especially for meat, for many thousands of years.[7] A very ancient salt-works operation has been discovered at the Poiana Slatinei archaeological site next to a salt spring in Lunca, Neamț County, Romania. Evidence indicates that Neolithic people of the Precucuteni Culture were boiling the salt-laden spring water through the process of briquetage to extract the salt as far back as 6050 BC.[8] The salt extracted from this operation may have had a direct correlation to the rapid growth of this society's population soon after its initial production began.[9] The harvest of salt from the surface of Xiechi Lake near Yuncheng in Shanxi, China, dates back to at least 6000 BC, making it one of the oldest verifiable saltworks.[10]
18
+
19
+ There is more salt in animal tissues, such as meat, blood, and milk, than in plant tissues.[11] Nomads who subsist on their flocks and herds do not eat salt with their food, but agriculturalists, feeding mainly on cereals and vegetable matter, need to supplement their diet with salt.[12] With the spread of civilization, salt became one of the world's main trading commodities. It was of high value to the ancient Hebrews, the Greeks, the Romans, the Byzantines, the Hittites and other peoples of antiquity. In the Middle East, salt was used to ceremonially seal an agreement, and the ancient Hebrews made a "covenant of salt" with God and sprinkled salt on their offerings to show their trust in him.[13][better source needed] An ancient practice in time of war was salting the earth: scattering salt around in a defeated city to prevent plant growth. The Bible tells the story of King Abimelech who was ordered by God to do this at Shechem,[14] and various texts claim that the Roman general Scipio Aemilianus Africanus ploughed over and sowed the city of Carthage with salt after it was defeated in the Third Punic War (146 BC).[15]
20
+
21
+ Salt may have been used for barter in connection with the obsidian trade in Anatolia in the Neolithic Era.[16] Salt was included among funeral offerings found in ancient Egyptian tombs from the third millennium BC, as were salted birds and salt fish.[17] From about 2800 BC, the Egyptians began exporting salt fish to the Phoenicians in return for Lebanon cedar, glass, and the dye Tyrian purple; the Phoenicians traded Egyptian salted fish and salt from North Africa throughout their Mediterranean trade empire.[18] Herodotus described salt trading routes across Libya in the 5th century BC. In the early years of the Roman Empire, roads were built for the transportation of salt imported through Ostia to the capital.[19]
22
+
23
+ In Africa, salt was used as currency south of the Sahara, and slabs of rock salt were used as coins in Abyssinia.[12] Moorish merchants in the 6th century traded salt for gold, weight for weight.[dubious – discuss] The Tuareg have traditionally maintained routes across the Sahara especially for the transportation of salt by Azalai (salt caravans). The caravans still cross the desert from southern Niger to Bilma, although much of the trade now takes place by truck. Each camel takes two bales of fodder and two of trade goods northwards and returns laden with salt pillars and dates.[20] In Gabon, before the arrival of Europeans, the coast people carried on a remunerative trade with those of the interior by the medium of sea salt. This was gradually displaced by the salt that Europeans brought in sacks, so that the coast natives lost their previous profits; as of the author's writing in 1958, sea salt was still the currency best appreciated in the interior.[21]
24
+
25
+ Salzburg, Hallstatt, and Hallein lie within 17 km (11 mi) of each other on the river Salzach in central Austria, in an area with extensive salt deposits. Salzach literally means "salt river" and Salzburg "salt castle", both taking their names from the German word Salz, meaning salt. Hallstatt was the site of the world's first salt mine.[22] The town gave its name to the Hallstatt culture, which began mining for salt in the area in about 800 BC. Around 400 BC, the townsfolk, who had previously used pickaxes and shovels, began open pan salt making. During the first millennium BC, Celtic communities grew rich trading salt and salted meat to Ancient Greece and Ancient Rome in exchange for wine and other luxuries.[7]
26
+
27
+ The word salary comes from the Latin word for salt. The reason for this is unknown; a persistent modern claim that the Roman Legions were sometimes paid in salt is baseless.[23][24][25] The word salad literally means "salted", and comes from the ancient Roman practice of salting leaf vegetables.[26]
28
+
29
+ Wars have been fought over salt. Venice fought and won a war with Genoa over the product, and it played an important part in the American Revolution. Cities on overland trade routes grew rich by levying duties,[27] and towns like Liverpool flourished on the export of salt extracted from the salt mines of Cheshire.[28] Various governments have at different times imposed salt taxes on their peoples. The voyages of Christopher Columbus are said to have been financed from salt production in southern Spain, and the oppressive salt tax in France was one of the causes of the French Revolution. After being repealed, this tax was reimposed by Napoleon when he became emperor to pay for his foreign wars, and was not finally abolished until 1946.[27] In 1930, Mahatma Gandhi led at least 100,000 people on the "Dandi March" or "Salt Satyagraha", in which protesters made their own salt from the sea thus defying British rule and avoiding paying the salt tax. This civil disobedience inspired millions of common people and elevated the Indian independence movement from an elitist movement to a national struggle.[29]
30
+
31
+ Salt is mostly sodium chloride, the ionic compound with the formula NaCl, representing equal proportions of sodium and chlorine. Sea salt and freshly mined salt (much of which is sea salt from prehistoric seas) also contain small amounts of trace elements (which in these small amounts are generally good for plant and animal health[citation needed]). Mined salt is often refined in the production of table salt; it is dissolved in water, purified via precipitation of other minerals out of solution, and re-evaporated. During this same refining process it is often also iodized. Salt crystals are translucent and cubic in shape; they normally appear white but impurities may give them a blue or purple tinge. The molar mass of salt is 58.443 g/mol, its melting point is 801 °C (1,474 °F) and its boiling point 1,465 °C (2,669 °F). Its density is 2.17 grams per cubic centimetre and it is readily soluble in water. When dissolved in water it separates into Na+ and Cl− ions, and the solubility is 359 grams per litre.[30] From cold solutions, salt crystallises as the dihydrate NaCl·2H2O. Solutions of sodium chloride have very different properties from those of pure water; the freezing point is −21.12 °C (−6.02 °F) for 23.31 wt% of salt, and the boiling point of saturated salt solution is around 108.7 °C (227.7 °F).[31]
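+
+ As a quick illustration of the arithmetic behind these figures (a minimal sketch in Python, using only the standard atomic masses of sodium and chlorine and the solubility quoted above), the molar mass, the sodium mass fraction, and the molarity of a saturated solution follow directly:
+
+ # Sketch: simple quantities derived from NaCl's composition and solubility.
+ M_NA = 22.990            # g/mol, standard atomic mass of sodium
+ M_CL = 35.453            # g/mol, standard atomic mass of chlorine
+ M_NACL = M_NA + M_CL     # 58.443 g/mol, matching the figure above
+
+ sodium_fraction = M_NA / M_NACL          # ~0.393, just under 40% sodium by weight
+ solubility_g_per_l = 359                 # g/L, as quoted above
+ saturation_mol_per_l = solubility_g_per_l / M_NACL   # ~6.14 mol/L
+
+ print(f"molar mass: {M_NACL:.3f} g/mol")
+ print(f"sodium mass fraction: {sodium_fraction:.1%}")
+ print(f"saturated solution: {saturation_mol_per_l:.2f} mol/L")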
32
+
33
+ Salt is essential to the health of humans and other animals, and it is one of the five basic taste sensations.[32] Salt is used in many cuisines around the world, and it is often found in salt shakers on dining tables for diners' personal use on food. Salt is also an ingredient in many manufactured foodstuffs. Table salt is a refined salt containing about 97 to 99 percent sodium chloride.[33][34][35] Usually, anticaking agents such as sodium aluminosilicate or magnesium carbonate are added to make it free-flowing. Iodized salt, containing potassium iodide, is widely available. Some people put a desiccant, such as a few grains of uncooked rice[36] or a saltine cracker, in their salt shakers to absorb extra moisture and help break up salt clumps that may otherwise form.[37]
34
+
35
+ Some table salt sold for consumption contains additives which address a variety of health concerns, especially in the developing world. The identities and amounts of additives vary widely from country to country. Iodine is an important micronutrient for humans, and a deficiency of the element can cause lowered production of thyroxine (hypothyroidism) and enlargement of the thyroid gland (endemic goitre) in adults or cretinism in children.[38] Iodized salt has been used to correct these conditions since 1924[39] and consists of table salt mixed with a minute amount of potassium iodide, sodium iodide or sodium iodate. A small amount of dextrose may also be added to stabilize the iodine.[40] Iodine deficiency affects about two billion people around the world and is the leading preventable cause of mental retardation.[41] Iodized table salt has significantly reduced disorders of iodine deficiency in countries where it is used.[42]
36
+
37
+ The amount of iodine and the specific iodine compound added to salt varies from country to country. In the United States, the Food and Drug Administration (FDA) recommends [21 CFR 101.9 (c)(8)(iv)] 150 micrograms of iodine per day for both men and women. US iodized salt contains 46–77 ppm (parts per million), whereas in the UK the iodine content of iodized salt is recommended to be 10–22 ppm.[43]
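+
+ To put these concentrations in perspective, here is a back-of-the-envelope sketch in Python; the only inputs are the figures quoted above, and it relies on 1 ppm equalling 1 µg of iodine per gram of salt:
+
+ # Sketch: grams of iodized salt that would supply 150 µg of iodine per day.
+ TARGET_UG = 150  # the FDA's daily iodine figure, in micrograms
+
+ for label, ppm in [("US low", 46), ("US high", 77), ("UK low", 10), ("UK high", 22)]:
+     grams_of_salt = TARGET_UG / ppm  # 1 ppm == 1 µg iodine per gram of salt
+     print(f"{label}: {ppm} ppm -> {grams_of_salt:.1f} g of salt per day")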
38
+
39
+ Sodium ferrocyanide, also known as yellow prussiate of soda, is sometimes added to salt as an anticaking agent. The additive is considered safe for human consumption.[44][45] Such anticaking agents have been added since at least 1911 when magnesium carbonate was first added to salt to make it flow more freely.[46] The safety of sodium ferrocyanide as a food additive was found to be provisionally acceptable by the Committee on Toxicity in 1988.[44] Other anticaking agents sometimes used include tricalcium phosphate, calcium or magnesium carbonates, fatty acid salts (acid salts), magnesium oxide, silicon dioxide, calcium silicate, sodium aluminosilicate and calcium aluminosilicate. Both the European Union and the United States Food and Drug Administration permitted the use of aluminium in the latter two compounds.[47]
40
+
41
+ In "doubly fortified salt", both iodide and iron salts are added. The latter alleviates iron deficiency anaemia, which interferes with the mental development of an estimated 40% of infants in the developing world. A typical iron source is ferrous fumarate.[48] Another additive, especially important for pregnant women, is folic acid (vitamin B9), which gives the table salt a yellow color. Folic acid helps prevent neural tube defects and anaemia, which affect young mothers, especially in developing countries.[48]
42
+
43
+ A lack of fluorine in the diet is the cause of a greatly increased incidence of dental caries.[49] Fluoride salts can be added to table salt with the goal of reducing tooth decay, especially in countries that have not benefited from fluoridated toothpastes and fluoridated water. The practice is more common in some European countries where water fluoridation is not carried out. In France, 35% of the table salt sold contains added sodium fluoride.[48]
44
+
45
+ Unrefined sea salt contains small amounts of magnesium and calcium halides and sulfates, traces of algal products, salt-resistant bacteria and sediment particles. The calcium and magnesium salts confer a faintly bitter overtone, and they make unrefined sea salt hygroscopic (i.e., it gradually absorbs moisture from air if stored uncovered). Algal products contribute a mildly "fishy" or "sea-air" odour, the latter from organobromine compounds. Sediments, the proportion of which varies with the source, give the salt a dull grey appearance. Since taste and aroma compounds are often detectable by humans in minute concentrations, sea salt may have a more complex flavor than pure sodium chloride when sprinkled on top of food. When salt is added during cooking however, these flavors would likely be overwhelmed by those of the food ingredients.[50] The refined salt industry cites scientific studies saying that raw sea and rock salts do not contain enough iodine salts to prevent iodine deficiency diseases.[51]
46
+
47
+ Himalayan salt is known for its distinct pink hue. It is used in cooking as a substitute for table salt, as well as for cookware, in salt lamps and in spas. It is mined from the Salt Range mountains in Pakistan.
48
+
49
+ Different natural salts have different mineralities depending on their source, giving each one a unique flavour. Fleur de sel, a natural sea salt from the surface of evaporating brine in salt pans, has a unique flavour varying with the region from which it is produced. In traditional Korean cuisine, so-called "bamboo salt" is prepared by roasting salt[52] in a bamboo container plugged with mud at both ends. This product absorbs minerals from the bamboo and the mud, and has been claimed to increase the anticlastogenic and antimutagenic properties of doenjang (a fermented bean paste).[53]
50
+
51
+ Kosher or kitchen salt has a larger grain size than table salt and is used in cooking. It can be useful for brining, bread or pretzel making and as a scrubbing agent when combined with oil.[54]
52
+
53
+ Pickling salt is made of ultra-fine grains that dissolve quickly when making brine.
54
+
55
+ Salt is present in most foods, but in naturally occurring foodstuffs such as meats, vegetables and fruit, it is present in very small quantities. It is often added to processed foods (such as canned foods and especially salted foods, pickled foods, and snack foods or other convenience foods), where it functions as both a preservative and a flavoring. Dairy salt is used in the preparation of butter and cheese products.[55] As a flavoring, salt enhances the taste of other foods by suppressing their bitterness, making them more palatable and relatively sweeter.[56]
56
+
57
+ Before the advent of electrically powered refrigeration, salting was one of the main methods of food preservation. Thus, herring contains 67 mg sodium per 100 g, while kipper, its preserved form, contains 990 mg. Similarly, pork typically contains 63 mg while bacon contains 1,480 mg, and potatoes contain 7 mg but potato crisps 800 mg per 100 g.[11] Salt is also used in cooking, such as with salt crusts. The main sources of salt in the Western diet, apart from direct use of sodium chloride, are bread and cereal products, meat products and milk and dairy products.[11]
58
+
59
+ In many East Asian cultures, salt is not traditionally used as a condiment.[57] In its place, condiments such as soy sauce, fish sauce and oyster sauce tend to have a high sodium content and fill a similar role to table salt in western cultures. They are most often used for cooking rather than as table condiments.[58]
60
+
61
+ Table salt is made up of just under 40% sodium by weight, so a 6 g serving (1 teaspoon) contains about 2,400 mg of sodium.[59] Sodium serves a vital purpose in the human body: via its role as an electrolyte, it helps nerves and muscles to function correctly, and it is one factor involved in the osmotic regulation of water content in body organs (fluid balance).[60] Most of the sodium in the Western diet comes from salt.[3] The habitual salt intake in many Western countries is about 10 g per day, and it is higher than that in many countries in Eastern Europe and Asia.[61] The high level of sodium in many processed foods has a major impact on the total amount consumed.[62] In the United States, 75% of the sodium eaten comes from processed and restaurant foods, 11% from cooking and table use and the rest from what is found naturally in foodstuffs.[63]
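+
+ The serving arithmetic above follows directly from the sodium mass fraction; here is a minimal worked example in Python, whose only inputs are the "just under 40%" figure and the WHO limit cited elsewhere in this article:
+
+ # Sketch: sodium in a salt serving, and the salt mass behind a sodium limit.
+ SODIUM_FRACTION = 0.393        # just under 40% of table salt by weight is sodium
+
+ serving_g = 6                  # one teaspoon of salt, per the text above
+ sodium_mg = serving_g * SODIUM_FRACTION * 1000       # ~2,360 mg, i.e. "about 2,400 mg"
+
+ who_limit_mg = 2000            # WHO daily sodium recommendation
+ salt_equivalent_g = who_limit_mg / 1000 / SODIUM_FRACTION   # ~5.1 g, i.e. "5 g of salt"
+
+ print(f"6 g of salt contains about {sodium_mg:.0f} mg of sodium")
+ print(f"2,000 mg of sodium corresponds to about {salt_equivalent_g:.1f} g of salt")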
62
+
63
+ Because consuming too much sodium increases risk of cardiovascular diseases,[3] health organizations generally recommend that people reduce their dietary intake of salt.[3][64][65][66] High sodium intake is associated with a greater risk of stroke, total cardiovascular disease and kidney disease.[2][61] A reduction in sodium intake by 1,000 mg per day may reduce cardiovascular disease by about 30 percent.[1][3] In adults and children with no acute illness, a decrease in the intake of sodium from the typical high levels reduces blood pressure.[65][67] A low sodium diet results in a greater improvement in blood pressure in people with hypertension.[68][69]
64
+
65
+ The World Health Organization recommends that adults should consume less than 2,000 mg of sodium (which is contained in 5 g of salt) per day.[64] Guidelines by the United States recommend that people with hypertension, African Americans, and middle-aged and older adults should limit consumption to no more than 1,500 mg of sodium per day and meet the potassium recommendation of 4,700 mg/day with a healthy diet of fruits and vegetables.[3][70]
66
+
67
+ While reduction of sodium intake to less than 2,300 mg per day is recommended by developed countries,[3] one review recommended that sodium intake be reduced further, to 1,200 mg (contained in 3 g of salt) per day, since the greater the reduction in salt intake, the greater the fall in systolic blood pressure for all age groups and ethnicities.[65] Another review indicated that there is inconsistent or insufficient evidence to conclude that reducing sodium intake to lower than 2,300 mg per day is either beneficial or harmful.[71]
68
+
69
+ More recent evidence suggests a much more complicated relationship between salt and cardiovascular disease. According to a systematic review of multiple large studies, "the association between sodium consumption and cardiovascular disease or mortality is U-shaped, with increased risk at both high and low sodium intake".[72] The findings showed that increased mortality from excessive salt intake was primarily associated with individuals with hypertension, while the increased mortality among those with restricted salt intake appeared similar regardless of blood pressure. This evidence suggests that while those with hypertension should primarily focus on reducing sodium to recommended levels, all groups should seek to maintain a healthy sodium intake of between 4 and 5 grams a day.[72]
70
+
71
+ Eating too much sodium is one of the two most prominent dietary risks for disability in the world.[73]
72
+
73
+ Only about 6% of the salt manufactured in the world is used in food. Of the remainder, 12% is used in water conditioning processes, 8% goes for de-icing highways and 6% is used in agriculture. The rest (68%) is used for manufacturing and other industrial processes,[74] and sodium chloride is one of the largest inorganic raw materials used by volume. Its major chemical products are caustic soda and chlorine, which are separated by the electrolysis of a pure brine solution. These are used in the manufacture of PVC, plastics, paper pulp and many other inorganic and organic compounds. Salt is also used as a flux in the production of aluminium. For this purpose, a layer of melted salt floats on top of the molten metal and removes iron and other metal contaminants. It is also used in the manufacture of soaps and glycerine, where it is added to the vat to precipitate out the saponified products. As an emulsifier, salt is used in the manufacture of synthetic rubber, and another use is in the firing of pottery, when salt added to the furnace vaporises before condensing onto the surface of the ceramic material, forming a strong glaze.[75]
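+
+ As a trivial consistency check (the percentages are taken verbatim from the paragraph above), the quoted end-use shares account for the whole of production:
+
+ # Sketch: the quoted end-use shares of world salt production sum to 100%.
+ uses = {"food": 6, "water conditioning": 12, "de-icing": 8,
+         "agriculture": 6, "manufacturing and other industry": 68}
+ assert sum(uses.values()) == 100
+ for use, share in sorted(uses.items(), key=lambda kv: -kv[1]):
+     print(f"{use}: {share}%")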
74
+
75
+ When drilling through loose materials such as sand or gravel, salt may be added to the drilling fluid to provide a stable "wall" to prevent the hole from collapsing. There are many other processes in which salt is involved. These include its use as a mordant in textile dyeing, to regenerate resins in water softening, for the tanning of hides, the preservation of meat and fish and the canning of meat and vegetables.[75][76][77]
76
+
77
+ Food-grade salt accounts for only a small part of salt production in industrialized countries (7% in Europe),[78] although worldwide, food uses account for 17.5% of total production.[79]
78
+
79
+ In 2018, total world production of salt was 300 million tonnes, the top six producers being China (68 million), the United States (42 million), India (29 million), Germany (13 million), Canada (13 million) and Australia (12 million).[80]
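+
+ From these figures, the six leading producers account for well over half of world output; a small worked example in Python using only the tonnages quoted above:
+
+ # Sketch: share of 2018 world salt production held by the top six producers.
+ world_mt = 300
+ top_producers_mt = {"China": 68, "United States": 42, "India": 29,
+                     "Germany": 13, "Canada": 13, "Australia": 12}
+ top_total_mt = sum(top_producers_mt.values())   # 177 million tonnes
+ print(f"top six: {top_total_mt} Mt, {top_total_mt / world_mt:.0%} of world production")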
80
+
81
+ The manufacture of salt is one of the oldest chemical industries.[81] A major source of salt is seawater, which has a salinity of approximately 3.5%. This means that there are about 35 grams (1.2 oz) of dissolved salts, predominantly sodium (Na+) and chloride (Cl−) ions, per kilogram (2.2 lbs) of water.[82] The world's oceans are a virtually inexhaustible source of salt, and this abundance of supply means that reserves have not been calculated.[76] The evaporation of seawater is the production method of choice in marine countries with high evaporation and low precipitation rates. Salt evaporation ponds are filled from the ocean and salt crystals can be harvested as the water dries up. Sometimes these ponds have vivid colours, as some species of algae and other micro-organisms thrive in conditions of high salinity.[83]
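+
+ Because salinity is quoted per kilogram of seawater, converting to salt per unit volume requires a density figure; in the sketch below, the density of about 1.025 kg/L is an assumed typical surface-seawater value, not a figure from this article:
+
+ # Sketch: dissolved salt per litre and per cubic metre of seawater.
+ salinity_g_per_kg = 35        # from the paragraph above
+ density_kg_per_l = 1.025      # assumption: typical surface seawater density
+
+ salt_g_per_l = salinity_g_per_kg * density_kg_per_l   # ~35.9 g/L
+ salt_kg_per_m3 = salt_g_per_l                          # 1 g/L equals 1 kg/m³
+ print(f"about {salt_g_per_l:.1f} g of salt per litre, i.e. {salt_kg_per_m3:.1f} kg per cubic metre")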
82
+
83
+ Elsewhere, salt is extracted from the vast sedimentary deposits which have been laid down over the millennia from the evaporation of seas and lakes. These are either mined directly, producing rock salt, or are extracted in solution by pumping water into the deposit. In either case, the salt may be purified by mechanical evaporation of brine. Traditionally, this was done in shallow open pans which were heated to increase the rate of evaporation. More recently, the process is performed in pans under vacuum.[77] The raw salt is refined to purify it and improve its storage and handling characteristics. This usually involves recrystallization during which a brine solution is treated with chemicals that precipitate most impurities (largely magnesium and calcium salts). Multiple stages of evaporation are then used to collect pure sodium chloride crystals, which are kiln-dried.[84] Some salt is produced using the Alberger process, which involves vacuum pan evaporation combined with the seeding of the solution with cubic crystals, and produces a grainy-type flake.[85] The Ayoreo, an indigenous group from the Paraguayan Chaco, obtain their salt from the ash produced by burning the timber of the Indian salt tree (Maytenus vitis-idaea) and other trees.[86]
84
+
85
+ One of the largest salt mining operations in the world is at the Khewra Salt Mine in Pakistan. The mine has nineteen storeys, eleven of which are underground, and 400 km (250 mi) of passages. The salt is dug out by the room and pillar method, where about half the material is left in place to support the upper levels. Extraction of Himalayan salt is expected to last 350 years at the present rate of extraction of around 385,000 tons per annum.[87]
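+
+ The stated extraction rate and remaining lifetime together imply the scale of the reserve still in place (a back-of-the-envelope sketch using only the two figures quoted above):
+
+ # Sketch: implied remaining reserve at the Khewra Salt Mine.
+ tons_per_year = 385_000
+ years_remaining = 350
+ implied_reserve_tons = tons_per_year * years_remaining   # ~135 million tons
+ print(f"implied reserve: about {implied_reserve_tons / 1e6:.0f} million tons")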
86
+
87
+ Salt has long held an important place in religion and culture. At the time of Brahmanic sacrifices, in Hittite rituals and during festivals held by Semites and Greeks at the time of the new moon, salt was thrown into a fire where it produced crackling noises.[88] The ancient Egyptians, Greeks and Romans invoked their gods with offerings of salt and water and some people think this to be the origin of Holy Water in the Christian faith.[89] In Aztec mythology, Huixtocihuatl was a fertility goddess who presided over salt and salt water.[90]
88
+
89
+ Salt is considered to be a very auspicious substance in Hinduism and is used in particular religious ceremonies like house-warmings and weddings.[91] In Jainism, devotees lay an offering of raw rice with a pinch of salt before a deity to signify their devotion and salt is sprinkled on a person's cremated remains before the ashes are buried.[92] Salt is believed to ward off evil spirits in Mahayana Buddhist tradition, and when returning home from a funeral, a pinch of salt is thrown over the left shoulder as this prevents evil spirits from entering the house.[93] In Shinto, Shio (塩, lit. "salt") is used for ritual purification of locations and people (harae, specifically shubatsu), and small piles of salt are placed in dishes by the entrance of establishments for the two-fold purposes of warding off evil and attracting patrons.[94]
90
+
91
+ In the Hebrew Bible, there are thirty-five verses which mention salt.[95] One of these mentions Lot's wife, who was turned into a pillar of salt when she looked back at the cities of Sodom and Gomorrah (Genesis 19:26) as they were destroyed. When the judge Abimelech destroyed the city of Shechem, he is said to have "sown salt on it," probably as a curse on anyone who would re-inhabit it (Judges 9:45). The Book of Job contains the first mention of salt as a condiment. "Can that which is unsavoury be eaten without salt? or is there any taste in the white of an egg?" (Job 6:6).[95] In the New Testament, six verses mention salt. In the Sermon on the Mount, Jesus referred to his followers as the "salt of the earth". The apostle Paul also encouraged Christians to "let your conversation be always full of grace, seasoned with salt" (Colossians 4:6).[95] Salt is mandatory in the rite of the Tridentine Mass.[96] Salt is used in the third item (which includes an Exorcism) of the Celtic Consecration (cf. Gallican Rite) that is employed in the consecration of a church. Salt may be added to the water "where it is customary" in the Roman Catholic rite of Holy water.[96]
92
+
93
+ In Judaism, it is recommended to have either a salty bread or to add salt to the bread if this bread is unsalted when doing Kiddush for Shabbat. It is customary to spread some salt over the bread or to dip the bread in a little salt when passing the bread around the table after the Kiddush.[97] To preserve the covenant between their people and God, Jews dip the Sabbath bread in salt.[89]
94
+
95
+ In Wicca, salt is symbolic of the element Earth. It is also believed to cleanse an area of harmful or negative energies. A dish of salt and a dish of water are almost always present on an altar, and salt is used in a wide variety of rituals and ceremonies.[98]
96
+
en/5341.html.txt ADDED
@@ -0,0 +1,96 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+
4
+
5
+ Salt is a mineral composed primarily of sodium chloride (NaCl), a chemical compound belonging to the larger class of salts; salt in its natural form as a crystalline mineral is known as rock salt or halite. Salt is present in vast quantities in seawater, where it is the main mineral constituent. The open ocean has about 35 grams (1.2 oz) of solids per liter of sea water, a salinity of 3.5%.
6
+
7
+ Salt is essential for life in general, and saltiness is one of the basic human tastes. Salt is one of the oldest and most ubiquitous food seasonings, and salting is an important method of food preservation.
8
+
9
+ Some of the earliest evidence of salt processing dates to around 6,000 BC, when people living in the area of present-day Romania boiled spring water to extract salts; a salt-works in China dates to approximately the same period. Salt was also prized by the ancient Hebrews, the Greeks, the Romans, the Byzantines, the Hittites, Egyptians, and the Indians. Salt became an important article of trade and was transported by boat across the Mediterranean Sea, along specially built salt roads, and across the Sahara on camel caravans. The scarcity and universal need for salt have led nations to go to war over it and use it to raise tax revenues. Salt is used in religious ceremonies and has other cultural and traditional significance.
10
+
11
+ Salt is processed from salt mines, and by the evaporation of seawater (sea salt) and mineral-rich spring water in shallow pools. Its major industrial products are caustic soda and chlorine; salt is used in many industrial processes including the manufacture of polyvinyl chloride, plastics, paper pulp and many other products. Of the annual global production of around two hundred million tonnes of salt, about 6% is used for human consumption. Other uses include water conditioning processes, de-icing highways, and agricultural use. Edible salt is sold in forms such as sea salt and table salt which usually contains an anti-caking agent and may be iodised to prevent iodine deficiency. As well as its use in cooking and at the table, salt is present in many processed foods.
12
+
13
+ Sodium is an essential nutrient for human health via its role as an electrolyte and osmotic solute.[1][2][3] Excessive salt consumption may increase the risk of cardiovascular diseases, such as hypertension, in children and adults. Such health effects of salt have long been studied. Accordingly, numerous world health associations and experts in developed countries recommend reducing consumption of popular salty foods.[3][4] The World Health Organization recommends that adults should consume less than 2,000 mg of sodium, equivalent to 5 grams of salt per day.[5]
14
+
15
+ All through history, the availability of salt has been pivotal to civilization. What is now thought to have been the first city in Europe is Solnitsata, in Bulgaria, which was a salt mine, providing the area now known as the Balkans with salt since 5400 BC.[6] Even the name Solnitsata means "salt works".
16
+
17
+ While people have used canning and artificial refrigeration to preserve food for the last hundred years or so, salt has been the best-known food preservative, especially for meat, for many thousands of years.[7] A very ancient salt-works operation has been discovered at the Poiana Slatinei archaeological site next to a salt spring in Lunca, Neamț County, Romania. Evidence indicates that Neolithic people of the Precucuteni Culture were boiling the salt-laden spring water through the process of briquetage to extract the salt as far back as 6050 BC.[8] The salt extracted from this operation may have had a direct correlation to the rapid growth of this society's population soon after its initial production began.[9] The harvest of salt from the surface of Xiechi Lake near Yuncheng in Shanxi, China, dates back to at least 6000 BC, making it one of the oldest verifiable saltworks.[10]
18
+
19
+ There is more salt in animal tissues, such as meat, blood, and milk, than in plant tissues.[11] Nomads who subsist on their flocks and herds do not eat salt with their food, but agriculturalists, feeding mainly on cereals and vegetable matter, need to supplement their diet with salt.[12] With the spread of civilization, salt became one of the world's main trading commodities. It was of high value to the ancient Hebrews, the Greeks, the Romans, the Byzantines, the Hittites and other peoples of antiquity. In the Middle East, salt was used to ceremonially seal an agreement, and the ancient Hebrews made a "covenant of salt" with God and sprinkled salt on their offerings to show their trust in him.[13][better source needed] An ancient practice in time of war was salting the earth: scattering salt around in a defeated city to prevent plant growth. The Bible tells the story of King Abimelech who was ordered by God to do this at Shechem,[14] and various texts claim that the Roman general Scipio Aemilianus Africanus ploughed over and sowed the city of Carthage with salt after it was defeated in the Third Punic War (146 BC).[15]
20
+
21
+ Salt may have been used for barter in connection with the obsidian trade in Anatolia in the Neolithic Era.[16] Salt was included among funeral offerings found in ancient Egyptian tombs from the third millennium BC, as were salted birds, and salt fish.[17] From about 2800 BC, the Egyptians began exporting salt fish to the Phoenicians in return for Lebanon cedar, glass, and the dye Tyrian purple; the Phoenicians traded Egyptian salted fish and salt from North Africa throughout their Mediterranean trade empire.[18] Herodotus described salt trading routes across Libya back in the 5th century BC. In the early years of the Roman Empire, roads were built for the transportation of salt from the salt imported at Ostia to the capital.[19]
22
+
23
+ In Africa, salt was used as currency south of the Sahara, and slabs of rock salt were used as coins in Abyssinia.[12] Moorish merchants in the 6th century traded salt for gold, weight for weight.[dubious – discuss] The Tuareg have traditionally maintained routes across the Sahara especially for the transportation of salt by Azalai (salt caravans). The caravans still cross the desert from southern Niger to Bilma, although much of the trade now takes place by truck. Each camel takes two bales of fodder and two of trade goods northwards and returns laden with salt pillars and dates.[20] In Gabon, before the arrival of Europeans, the coast people carried on a remunerative trade with those of the interior by the medium of sea salt. This was gradually displaced by the salt that Europeans brought in sacks, so that the coast natives lost their previous profits; as of the author's writing in 1958, sea salt was still the currency best appreciated in the interior.[21]
24
+
25
+ Salzburg, Hallstatt, and Hallein lie within 17 km (11 mi) of each other on the river Salzach in central Austria in an area with extensive salt deposits. Salzach literally means "salt river" and Salzburg "salt castle", both taking their names from the German word Salz meaning salt and Hallstatt was the site of the world's first salt mine.[22] The town gave its name to the Hallstatt culture that began mining for salt in the area in about 800 BC. Around 400 BC, the townsfolk, who had previously used pickaxes and shovels, began open pan salt making. During the first millennium BC, Celtic communities grew rich trading salt and salted meat to Ancient Greece and Ancient Rome in exchange for wine and other luxuries.[7]
26
+
27
+ The word salary comes from the Latin word for salt. The reason for this is unknown; a persistent modern claim that the Roman Legions were sometimes paid in salt is baseless.[23][24][25] The word salad literally means "salted", and comes from the ancient Roman practice of salting leaf vegetables.[26]
28
+
29
+ Wars have been fought over salt. Venice fought and won a war with Genoa over the product, and it played an important part in the American Revolution. Cities on overland trade routes grew rich by levying duties,[27] and towns like Liverpool flourished on the export of salt extracted from the salt mines of Cheshire.[28] Various governments have at different times imposed salt taxes on their peoples. The voyages of Christopher Columbus are said to have been financed from salt production in southern Spain, and the oppressive salt tax in France was one of the causes of the French Revolution. After being repealed, this tax was reimposed by Napoleon when he became emperor to pay for his foreign wars, and was not finally abolished until 1946.[27] In 1930, Mahatma Gandhi led at least 100,000 people on the "Dandi March" or "Salt Satyagraha", in which protesters made their own salt from the sea thus defying British rule and avoiding paying the salt tax. This civil disobedience inspired millions of common people and elevated the Indian independence movement from an elitist movement to a national struggle.[29]
30
+
31
+ Salt is mostly sodium chloride, the ionic compound with the formula NaCl, representing equal proportions of sodium and chlorine. Sea salt and freshly mined salt (much of which is sea salt from prehistoric seas) also contain small amounts of trace elements (which in these small amounts are generally good for plant and animal health[citation needed]). Mined salt is often refined in the production of table salt; it is dissolved in water, purified via precipitation of other minerals out of solution, and re-evaporated. During this same refining process it is often also iodized. Salt crystals are translucent and cubic in shape; they normally appear white but impurities may give them a blue or purple tinge. The molar mass of salt is 58.443 g/mol, its melting point is 801 °C (1,474 °F) and its boiling point 1,465 °C (2,669 °F). Its density is 2.17 grams per cubic centimetre and it is readily soluble in water. When dissolved in water it separates into Na+ and Cl− ions, and the solubility is 359 grams per litre.[30] From cold solutions, salt crystallises as the dihydrate NaCl·2H2O. Solutions of sodium chloride have very different properties from those of pure water; the freezing point is −21.12 °C (−6.02 °F) for 23.31 wt% of salt, and the boiling point of saturated salt solution is around 108.7 °C (227.7 °F).[31]
32
+
33
+ Salt is essential to the health of humans and other animals, and it is one of the five basic taste sensations.[32] Salt is used in many cuisines around the world, and it is often found in salt shakers on diners' eating tables for their personal use on food. Salt is also an ingredient in many manufactured foodstuffs. Table salt is a refined salt containing about 97 to 99 percent sodium chloride.[33][34][35] Usually, anticaking agents such as sodium aluminosilicate or magnesium carbonate are added to make it free-flowing. Iodized salt, containing potassium iodide, is widely available. Some people put a desiccant, such as a few grains of uncooked rice[36] or a saltine cracker, in their salt shakers to absorb extra moisture and help break up salt clumps that may otherwise form.[37]
34
+
35
+ Some table salt sold for consumption contains additives which address a variety of health concerns, especially in the developing world. The identities and amounts of additives vary widely from country to country. Iodine is an important micronutrient for humans, and a deficiency of the element can cause lowered production of thyroxine (hypothyroidism) and enlargement of the thyroid gland (endemic goitre) in adults or cretinism in children.[38] Iodized salt has been used to correct these conditions since 1924[39] and consists of table salt mixed with a minute amount of potassium iodide, sodium iodide or sodium iodate. A small amount of dextrose may also be added to stabilize the iodine.[40] Iodine deficiency affects about two billion people around the world and is the leading preventable cause of mental retardation.[41] Iodized table salt has significantly reduced disorders of iodine deficiency in countries where it is used.[42]
36
+
37
+ The amount of iodine and the specific iodine compound added to salt varies from country to country. In the United States, the Food and Drug Administration (FDA) recommends [21 CFR 101.9 (c)(8)(iv)] 150 micrograms of iodine per day for both men and women. US iodized salt contains 46–77 ppm (parts per million), whereas in the UK the iodine content of iodized salt is recommended to be 10–22 ppm.[43]
38
+
39
+ Sodium ferrocyanide, also known as yellow prussiate of soda, is sometimes added to salt as an anticaking agent. The additive is considered safe for human consumption.[44][45] Such anticaking agents have been added since at least 1911 when magnesium carbonate was first added to salt to make it flow more freely.[46] The safety of sodium ferrocyanide as a food additive was found to be provisionally acceptable by the Committee on Toxicity in 1988.[44] Other anticaking agents sometimes used include tricalcium phosphate, calcium or magnesium carbonates, fatty acid salts (acid salts), magnesium oxide, silicon dioxide, calcium silicate, sodium aluminosilicate and calcium aluminosilicate. Both the European Union and the United States Food and Drug Administration permitted the use of aluminium in the latter two compounds.[47]
40
+
41
+ In "doubly fortified salt", both iodide and iron salts are added. The latter alleviates iron deficiency anaemia, which interferes with the mental development of an estimated 40% of infants in the developing world. A typical iron source is ferrous fumarate.[48] Another additive, especially important for pregnant women, is folic acid (vitamin B9), which gives the table salt a yellow color. Folic acid helps prevent neural tube defects and anaemia, which affect young mothers, especially in developing countries.[48]
42
+
43
+ A lack of fluorine in the diet is the cause of a greatly increased incidence of dental caries.[49] Fluoride salts can be added to table salt with the goal of reducing tooth decay, especially in countries that have not benefited from fluoridated toothpastes and fluoridated water. The practice is more common in some European countries where water fluoridation is not carried out. In France, 35% of the table salt sold contains added sodium fluoride.[48]
44
+
45
+ Unrefined sea salt contains small amounts of magnesium and calcium halides and sulfates, traces of algal products, salt-resistant bacteria and sediment particles. The calcium and magnesium salts confer a faintly bitter overtone, and they make unrefined sea salt hygroscopic (i.e., it gradually absorbs moisture from air if stored uncovered). Algal products contribute a mildly "fishy" or "sea-air" odour, the latter from organobromine compounds. Sediments, the proportion of which varies with the source, give the salt a dull grey appearance. Since taste and aroma compounds are often detectable by humans in minute concentrations, sea salt may have a more complex flavor than pure sodium chloride when sprinkled on top of food. When salt is added during cooking however, these flavors would likely be overwhelmed by those of the food ingredients.[50] The refined salt industry cites scientific studies saying that raw sea and rock salts do not contain enough iodine salts to prevent iodine deficiency diseases.[51]
46
+
47
+ Himalayan salt is known for its distinct pink hue. It is used in cooking as a substitute for table salt. Is it also used as cookware, in salt lamps and in spas. It is mined from the Salt Range mountains in Pakistan.
48
+
49
+ Different natural salts have different mineralities depending on their source, giving each one a unique flavour. Fleur de sel, a natural sea salt from the surface of evaporating brine in salt pans, has a unique flavour varying with the region from which it is produced. In traditional Korean cuisine, so-called "bamboo salt" is prepared by roasting salt[52] in a bamboo container plugged with mud at both ends. This product absorbs minerals from the bamboo and the mud, and has been claimed to increase the anticlastogenic and antimutagenic properties of doenjang (a fermented bean paste).[53]
50
+
51
+ Kosher or kitchen salt has a larger grain size than table salt and is used in cooking. It can be useful for brining, bread or pretzel making and as a scrubbing agent when combined with oil.[54]
52
+
53
+ Pickling salt is made of ultra-fine grains to speed dissolving to make brine.
54
+
55
+ Salt is present in most foods, but in naturally occurring foodstuffs such as meats, vegetables and fruit, it is present in very small quantities. It is often added to processed foods (such as canned foods and especially salted foods, pickled foods, and snack foods or other convenience foods), where it functions as both a preservative and a flavoring. Dairy salt is used in the preparation of butter and cheese products.[55] As a flavoring, salt enhances the taste of other foods by suppressing the bitterness of those foods making them more palatable and relatively sweeter.[56]
56
+
57
+ Before the advent of electrically powered refrigeration, salting was one of the main methods of food preservation. Thus, herring contains 67 mg sodium per 100 g, while kipper, its preserved form, contains 990 mg. Similarly, pork typically contains 63 mg while bacon contains 1,480 mg, and potatoes contain 7 mg but potato crisps 800 mg per 100 g.[11] Salt is also used in cooking, such as with salt crusts. The main sources of salt in the Western diet, apart from direct use of sodium chloride, are bread and cereal products, meat products and milk and dairy products.[11]
58
+
59
+ In many East Asian cultures, salt is not traditionally used as a condiment.[57] In its place, condiments such as soy sauce, fish sauce and oyster sauce tend to have a high sodium content and fill a similar role to table salt in western cultures. They are most often used for cooking rather than as table condiments.[58]
60
+
61
+ Table salt is made up of just under 40% sodium by weight, so a 6 g serving (1 teaspoon) contains about 2,400 mg of sodium.[59] Sodium serves a vital purpose in the human body: via its role as an electrolyte, it helps nerves and muscles to function correctly, and it is one factor involved in the osmotic regulation of water content in body organs (fluid balance).[60] Most of the sodium in the Western diet comes from salt.[3] The habitual salt intake in many Western countries is about 10 g per day, and it is higher than that in many countries in Eastern Europe and Asia.[61] The high level of sodium in many processed foods has a major impact on the total amount consumed.[62] In the United States, 75% of the sodium eaten comes from processed and restaurant foods, 11% from cooking and table use and the rest from what is found naturally in foodstuffs.[63]
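As a quick cross-check of the arithmetic above, sodium's mass fraction in NaCl follows directly from the standard atomic masses (about 22.99 g/mol for sodium, 35.45 g/mol for chlorine). A minimal sketch in Python; the 6 g serving and the 2,000 mg WHO figure quoted below are the only inputs taken from this article:

```python
# Mass fraction of sodium in table salt (NaCl), from standard atomic masses.
NA_MASS = 22.990  # g/mol, sodium
CL_MASS = 35.453  # g/mol, chlorine

sodium_fraction = NA_MASS / (NA_MASS + CL_MASS)  # ~0.393, i.e. "just under 40%"

def sodium_mg(salt_grams: float) -> float:
    """Milligrams of sodium in a given mass of table salt."""
    return salt_grams * sodium_fraction * 1000

print(f"sodium fraction: {sodium_fraction:.1%}")   # 39.3%
print(f"6 g salt: {sodium_mg(6):.0f} mg sodium")   # ~2360 mg, i.e. about 2,400 mg
print(f"2,000 mg sodium: {2000 / sodium_fraction / 1000:.2f} g salt")  # ~5.08 g
```

The same conversion explains the equivalence quoted elsewhere in the article: the WHO's 2,000 mg sodium limit corresponds to roughly 5 g of salt.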
62
+
63
+ Because consuming too much sodium increases risk of cardiovascular diseases,[3] health organizations generally recommend that people reduce their dietary intake of salt.[3][64][65][66] High sodium intake is associated with a greater risk of stroke, total cardiovascular disease and kidney disease.[2][61] A reduction in sodium intake by 1,000 mg per day may reduce cardiovascular disease by about 30 percent.[1][3] In adults and children with no acute illness, a decrease in the intake of sodium from the typical high levels reduces blood pressure.[65][67] A low sodium diet results in a greater improvement in blood pressure in people with hypertension.[68][69]
64
+
65
+ The World Health Organization recommends that adults consume less than 2,000 mg of sodium (the amount contained in 5 g of salt) per day.[64] United States dietary guidelines recommend that people with hypertension, African Americans, and middle-aged and older adults limit consumption to no more than 1,500 mg of sodium per day and meet the potassium recommendation of 4,700 mg/day with a healthy diet of fruits and vegetables.[3][70]
66
+
67
+ While reduction of sodium intake to less than 2,300 mg per day is recommended in developed countries,[3] one review recommended that sodium intake be reduced to 1,200 mg (contained in 3 g of salt) per day or less, since the greater the reduction in salt intake, the greater the fall in systolic blood pressure across all age groups and ethnicities.[65] Another review indicated that there is inconsistent/insufficient evidence to conclude that reducing sodium intake to lower than 2,300 mg per day is either beneficial or harmful.[71]
68
+
69
+ More recent evidence suggests a more complicated relationship between salt and cardiovascular disease. According to a systematic review of multiple large studies, "the association between sodium consumption and cardiovascular disease or mortality is U-shaped, with increased risk at both high and low sodium intake".[72] The findings showed that increased mortality from excessive salt intake was primarily associated with individuals with hypertension. The levels of increased mortality among those with restricted salt intake appeared to be similar regardless of blood pressure. This evidence suggests that while those with hypertension should primarily focus on reducing sodium to recommended levels, all groups should seek to maintain a healthy level of sodium intake of between 4 and 5 grams a day.[72]
70
+
71
+ Eating too much sodium is one of the two most prominent dietary risk factors for disability in the world.[73]
72
+
73
+ Only about 6% of the salt manufactured in the world is used in food. Of the remainder, 12% is used in water conditioning processes, 8% goes for de-icing highways and 6% is used in agriculture. The rest (68%) is used for manufacturing and other industrial processes,[74] and sodium chloride is, by volume, one of the most widely used inorganic raw materials. Its major chemical products are caustic soda and chlorine, which are separated by the electrolysis of a pure brine solution. These are used in the manufacture of PVC, plastics, paper pulp and many other inorganic and organic compounds. Salt is also used as a flux in the production of aluminium. For this purpose, a layer of melted salt floats on top of the molten metal and removes iron and other metal contaminants. It is also used in the manufacture of soaps and glycerine, where it is added to the vat to precipitate out the saponified products. As an emulsifier, salt is used in the manufacture of synthetic rubber, and another use is in the firing of pottery, when salt added to the furnace vaporises before condensing onto the surface of the ceramic material, forming a strong glaze.[75]
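For context, the electrolysis step mentioned above is the chlor-alkali process. Its overall reaction, a standard general-chemistry equation rather than anything drawn from this article's sources, is:

2 NaCl + 2 H2O → 2 NaOH + Cl2 + H2

so every two moles of salt consumed yield two moles of caustic soda, one mole of chlorine and one mole of hydrogen.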
74
+
75
+ When drilling through loose materials such as sand or gravel, salt may be added to the drilling fluid to provide a stable "wall" to prevent the hole from collapsing. There are many other processes in which salt is involved. These include its use as a mordant in textile dyeing, to regenerate resins in water softening, for the tanning of hides, the preservation of meat and fish and the canning of meat and vegetables.[75][76][77]
76
+
77
+ Food-grade salt accounts for only a small part of salt production in industrialized countries (7% in Europe),[78] although worldwide, food uses account for 17.5% of total production.[79]
78
+
79
+ In 2018, total world production of salt was 300 million tonnes, the top six producers being China (68 million), the United States (42 million), India (29 million), Germany (13 million), Canada (13 million) and Australia (12 million).[80]
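The top six producers listed here therefore account for 68 + 42 + 29 + 13 + 13 + 12 = 177 million tonnes, or roughly 59% of the 300 million tonne world total.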
80
+
81
+ The manufacture of salt is one of the oldest chemical industries.[81] A major source of salt is seawater, which has a salinity of approximately 3.5%. This means that there are about 35 grams (1.2 oz) of dissolved salts, predominantly sodium (Na+) and chloride (Cl−) ions, per kilogram (2.2 lbs) of water.[82] The world's oceans are a virtually inexhaustible source of salt, and this abundance of supply means that reserves have not been calculated.[76] The evaporation of seawater is the production method of choice in marine countries with high evaporation and low precipitation rates. Salt evaporation ponds are filled from the ocean and salt crystals can be harvested as the water dries up. Sometimes these ponds have vivid colours, as some species of algae and other micro-organisms thrive in conditions of high salinity.[83]
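To get a feel for the quantities involved, here is a back-of-envelope sketch of the salt recoverable from a single fill of a hypothetical evaporation pond. Only the 3.5% salinity comes from the text; the pond dimensions and the seawater density are illustrative assumptions:

```python
# Rough salt yield of an idealized evaporation pond (dimensions hypothetical).
SALINITY = 0.035           # mass fraction of dissolved salts (from the text)
SEAWATER_DENSITY = 1025.0  # kg/m^3, a typical value (assumption)

pond_area_m2 = 100 * 100   # a hypothetical 1-hectare pond
fill_depth_m = 0.5         # hypothetical depth of one fill

seawater_kg = pond_area_m2 * fill_depth_m * SEAWATER_DENSITY
salt_kg = seawater_kg * SALINITY  # mixed sea salts, not pure NaCl
print(f"~{salt_kg / 1000:.0f} tonnes of salt per fill")  # ~179 tonnes
```

In practice, working salterns pass brine through a sequence of ponds of increasing concentration, so a single-fill figure like this only bounds the order of magnitude.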
82
+
83
+ Elsewhere, salt is extracted from the vast sedimentary deposits which have been laid down over the millennia from the evaporation of seas and lakes. These are either mined directly, producing rock salt, or are extracted in solution by pumping water into the deposit. In either case, the salt may be purified by mechanical evaporation of brine. Traditionally, this was done in shallow open pans which were heated to increase the rate of evaporation. More recently, the process is performed in pans under vacuum.[77] The raw salt is refined to purify it and improve its storage and handling characteristics. This usually involves recrystallization during which a brine solution is treated with chemicals that precipitate most impurities (largely magnesium and calcium salts). Multiple stages of evaporation are then used to collect pure sodium chloride crystals, which are kiln-dried.[84] Some salt is produced using the Alberger process, which involves vacuum pan evaporation combined with the seeding of the solution with cubic crystals, and produces a grainy-type flake.[85] The Ayoreo, an indigenous group from the Paraguayan Chaco, obtain their salt from the ash produced by burning the timber of the Indian salt tree (Maytenus vitis-idaea) and other trees.[86]
84
+
85
+ One of the largest salt mining operations in the world is at the Khewra Salt Mine in Pakistan. The mine has nineteen storeys, eleven of which are underground, and 400 km (250 mi) of passages. The salt is dug out by the room and pillar method, where about half the material is left in place to support the upper levels. Extraction of Himalayan salt is expected to last 350 years at the present rate of extraction of around 385,000 tons per annum.[87]
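Those two figures imply a remaining recoverable reserve of roughly 385,000 tons/year × 350 years ≈ 135 million tons.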
86
+
87
+ Salt has long held an important place in religion and culture. At the time of Brahmanic sacrifices, in Hittite rituals and during festivals held by Semites and Greeks at the time of the new moon, salt was thrown into a fire where it produced crackling noises.[88] The ancient Egyptians, Greeks and Romans invoked their gods with offerings of salt and water and some people think this to be the origin of Holy Water in the Christian faith.[89] In Aztec mythology, Huixtocihuatl was a fertility goddess who presided over salt and salt water.[90]
88
+
89
+ Salt is considered to be a very auspicious substance in Hinduism and is used in particular religious ceremonies like house-warmings and weddings.[91] In Jainism, devotees lay an offering of raw rice with a pinch of salt before a deity to signify their devotion and salt is sprinkled on a person's cremated remains before the ashes are buried.[92] Salt is believed to ward off evil spirits in Mahayana Buddhist tradition, and when returning home from a funeral, a pinch of salt is thrown over the left shoulder as this prevents evil spirits from entering the house.[93] In Shinto, Shio (塩, lit. "salt") is used for ritual purification of locations and people (harae, specifically shubatsu), and small piles of salt are placed in dishes by the entrance of establishments for the two-fold purposes of warding off evil and attracting patrons.[94]
90
+
91
+ In the Hebrew Bible, there are thirty-five verses which mention salt.[95] One of these mentions Lot's wife, who was turned into a pillar of salt when she looked back at the cities of Sodom and Gomorrah (Genesis 19:26) as they were destroyed. When the judge Abimelech destroyed the city of Shechem, he is said to have "sown salt on it," probably as a curse on anyone who would re-inhabit it (Judges 9:45). The Book of Job contains the first mention of salt as a condiment. "Can that which is unsavoury be eaten without salt? or is there any taste in the white of an egg?" (Job 6:6).[95] In the New Testament, six verses mention salt. In the Sermon on the Mount, Jesus referred to his followers as the "salt of the earth". The apostle Paul also encouraged Christians to "let your conversation be always full of grace, seasoned with salt" (Colossians 4:6).[95] Salt is mandatory in the rite of the Tridentine Mass.[96] Salt is used in the third item (which includes an Exorcism) of the Celtic Consecration (cf. Gallican Rite) that is employed in the consecration of a church. Salt may be added to the water "where it is customary" in the Roman Catholic rite of Holy water.[96]
92
+
93
+ In Judaism, it is recommended to have either salty bread or to add salt to the bread if the bread is unsalted when doing Kiddush for Shabbat. It is customary to spread some salt over the bread or to dip the bread in a little salt when passing the bread around the table after the Kiddush.[97] To preserve the covenant between their people and God, Jews dip the Sabbath bread in salt.[89]
94
+
95
+ In Wicca, salt is symbolic of the element Earth. It is also believed to cleanse an area of harmful or negative energies. A dish of salt and a dish of water are almost always present on an altar, and salt is used in a wide variety of rituals and ceremonies.[98]
96
+
en/5342.html.txt ADDED
@@ -0,0 +1,96 @@
1
+
2
+
3
+
4
+
5
+ Salt is a mineral composed primarily of sodium chloride (NaCl), a chemical compound belonging to the larger class of salts; salt in its natural form as a crystalline mineral is known as rock salt or halite. Salt is present in vast quantities in seawater, where it is the main mineral constituent. The open ocean has about 35 grams (1.2 oz) of solids per liter of sea water, a salinity of 3.5%.
6
+
7
+ Salt is essential for life in general, and saltiness is one of the basic human tastes. Salt is one of the oldest and most ubiquitous food seasonings, and salting is an important method of food preservation.
8
+
9
+ Some of the earliest evidence of salt processing dates to around 6,000 BC, when people living in the area of present-day Romania boiled spring water to extract salts; a salt-works in China dates to approximately the same period. Salt was also prized by the ancient Hebrews, the Greeks, the Romans, the Byzantines, the Hittites, Egyptians, and the Indians. Salt became an important article of trade and was transported by boat across the Mediterranean Sea, along specially built salt roads, and across the Sahara on camel caravans. The scarcity and universal need for salt have led nations to go to war over it and use it to raise tax revenues. Salt is used in religious ceremonies and has other cultural and traditional significance.
10
+
11
+ Salt is processed from salt mines, and by the evaporation of seawater (sea salt) and mineral-rich spring water in shallow pools. Its major industrial products are caustic soda and chlorine; salt is used in many industrial processes including the manufacture of polyvinyl chloride, plastics, paper pulp and many other products. Of the annual global production of around two hundred million tonnes of salt, about 6% is used for human consumption. Other uses include water conditioning processes, de-icing highways, and agricultural use. Edible salt is sold in forms such as sea salt and table salt, which usually contains an anti-caking agent and may be iodised to prevent iodine deficiency. As well as its use in cooking and at the table, salt is present in many processed foods.
12
+
13
+ Sodium is an essential nutrient for human health via its role as an electrolyte and osmotic solute.[1][2][3] Excessive salt consumption may increase the risk of cardiovascular diseases, such as hypertension, in children and adults. Such health effects of salt have long been studied. Accordingly, numerous world health associations and experts in developed countries recommend reducing consumption of popular salty foods.[3][4] The World Health Organization recommends that adults should consume less than 2,000 mg of sodium, equivalent to 5 grams of salt per day.[5]
14
+
15
+ All through history, the availability of salt has been pivotal to civilization. What is now thought to have been the first city in Europe is Solnitsata, in Bulgaria, which was a salt mine, providing the area now known as the Balkans with salt since 5400 BC.[6] Even the name Solnitsata means "salt works".
16
+
17
+ While people have used canning and artificial refrigeration to preserve food for the last hundred years or so, salt has been the best-known food preservative, especially for meat, for many thousands of years.[7] A very ancient salt-works operation has been discovered at the Poiana Slatinei archaeological site next to a salt spring in Lunca, Neamț County, Romania. Evidence indicates that Neolithic people of the Precucuteni Culture were boiling the salt-laden spring water through the process of briquetage to extract the salt as far back as 6050 BC.[8] The salt extracted from this operation may have had a direct correlation to the rapid growth of this society's population soon after its initial production began.[9] The harvest of salt from the surface of Xiechi Lake near Yuncheng in Shanxi, China, dates back to at least 6000 BC, making it one of the oldest verifiable saltworks.[10]
18
+
19
+ There is more salt in animal tissues, such as meat, blood, and milk, than in plant tissues.[11] Nomads who subsist on their flocks and herds do not eat salt with their food, but agriculturalists, feeding mainly on cereals and vegetable matter, need to supplement their diet with salt.[12] With the spread of civilization, salt became one of the world's main trading commodities. It was of high value to the ancient Hebrews, the Greeks, the Romans, the Byzantines, the Hittites and other peoples of antiquity. In the Middle East, salt was used to ceremonially seal an agreement, and the ancient Hebrews made a "covenant of salt" with God and sprinkled salt on their offerings to show their trust in him.[13][better source needed] An ancient practice in time of war was salting the earth: scattering salt around in a defeated city to prevent plant growth. The Bible tells the story of King Abimelech who was ordered by God to do this at Shechem,[14] and various texts claim that the Roman general Scipio Aemilianus Africanus ploughed over and sowed the city of Carthage with salt after it was defeated in the Third Punic War (146 BC).[15]
20
+
21
+ Salt may have been used for barter in connection with the obsidian trade in Anatolia in the Neolithic Era.[16] Salt was included among funeral offerings found in ancient Egyptian tombs from the third millennium BC, as were salted birds, and salt fish.[17] From about 2800 BC, the Egyptians began exporting salt fish to the Phoenicians in return for Lebanon cedar, glass, and the dye Tyrian purple; the Phoenicians traded Egyptian salted fish and salt from North Africa throughout their Mediterranean trade empire.[18] Herodotus described salt trading routes across Libya back in the 5th century BC. In the early years of the Roman Empire, roads were built for the transportation of salt imported at Ostia to the capital.[19]
22
+
23
+ In Africa, salt was used as currency south of the Sahara, and slabs of rock salt were used as coins in Abyssinia.[12] Moorish merchants in the 6th century traded salt for gold, weight for weight.[dubious – discuss] The Tuareg have traditionally maintained routes across the Sahara especially for the transportation of salt by Azalai (salt caravans). The caravans still cross the desert from southern Niger to Bilma, although much of the trade now takes place by truck. Each camel takes two bales of fodder and two of trade goods northwards and returns laden with salt pillars and dates.[20] In Gabon, before the arrival of Europeans, the coast people carried on a remunerative trade with those of the interior by the medium of sea salt. This was gradually displaced by the salt that Europeans brought in sacks, so that the coast natives lost their previous profits; as of the author's writing in 1958, sea salt was still the currency best appreciated in the interior.[21]
24
+
25
+ Salzburg, Hallstatt, and Hallein lie within 17 km (11 mi) of each other on the river Salzach in central Austria in an area with extensive salt deposits. Salzach literally means "salt river" and Salzburg "salt castle", both taking their names from the German word Salz, meaning salt. Hallstatt was the site of the world's first salt mine.[22] The town gave its name to the Hallstatt culture that began mining for salt in the area in about 800 BC. Around 400 BC, the townsfolk, who had previously used pickaxes and shovels, began open pan salt making. During the first millennium BC, Celtic communities grew rich trading salt and salted meat to Ancient Greece and Ancient Rome in exchange for wine and other luxuries.[7]
26
+
27
+ The word salary comes from the Latin word for salt. The reason for this is unknown; a persistent modern claim that the Roman Legions were sometimes paid in salt is baseless.[23][24][25] The word salad literally means "salted", and comes from the ancient Roman practice of salting leaf vegetables.[26]
28
+
29
+ Wars have been fought over salt. Venice fought and won a war with Genoa over the product, and salt played an important part in the American Revolution. Cities on overland trade routes grew rich by levying duties,[27] and towns like Liverpool flourished on the export of salt extracted from the salt mines of Cheshire.[28] Various governments have at different times imposed salt taxes on their peoples. The voyages of Christopher Columbus are said to have been financed from salt production in southern Spain, and the oppressive salt tax in France was one of the causes of the French Revolution. After being repealed, this tax was reimposed by Napoleon when he became emperor to pay for his foreign wars, and was not finally abolished until 1946.[27] In 1930, Mahatma Gandhi led at least 100,000 people on the "Dandi March" or "Salt Satyagraha", in which protesters made their own salt from the sea, thus defying British rule and avoiding paying the salt tax. This civil disobedience inspired millions of common people and elevated the Indian independence movement from an elitist movement to a national struggle.[29]
30
+
31
+ Salt is mostly sodium chloride, the ionic compound with the formula NaCl, representing equal proportions of sodium and chlorine. Sea salt and freshly mined salt (much of which is sea salt from prehistoric seas) also contain small amounts of trace elements (which in these small amounts are generally good for plant and animal health[citation needed]). Mined salt is often refined in the production of table salt; it is dissolved in water, purified via precipitation of other minerals out of solution, and re-evaporated. During this same refining process it is often also iodized. Salt crystals are translucent and cubic in shape; they normally appear white but impurities may give them a blue or purple tinge. The molar mass of salt is 58.443 g/mol, its melting point is 801 °C (1,474 °F) and its boiling point 1,465 °C (2,669 °F). Its density is 2.17 grams per cubic centimetre and it is readily soluble in water. When dissolved in water it separates into Na+ and Cl− ions, and the solubility is 359 grams per litre.[30] From cold solutions, salt crystallises as the dihydrate NaCl·2H2O. Solutions of sodium chloride have very different properties from those of pure water; the freezing point is −21.12 °C (−6.02 °F) for 23.31 wt% of salt, and the boiling point of saturated salt solution is around 108.7 °C (227.7 °F).[31]
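A short numerical sanity check of the constants quoted in this paragraph; the atomic masses are the standard values, everything else is taken from the text:

```python
# Cross-checking the quoted physical constants for NaCl.
NA_MASS, CL_MASS = 22.990, 35.453   # g/mol, standard atomic masses

molar_mass = NA_MASS + CL_MASS      # 58.443 g/mol, as stated in the text
solubility_g_per_l = 359            # g/L at room temperature (from the text)

molarity = solubility_g_per_l / molar_mass
print(f"molar mass: {molar_mass:.3f} g/mol")       # 58.443
print(f"saturated brine: ~{molarity:.1f} mol/L")   # ~6.1 mol/L

# The quoted freezing point of -21.12 C at 23.31 wt% NaCl is the eutectic of
# the NaCl-water system, which is why salt can de-ice roads only above ~-21 C.
```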
32
+
33
+ Salt is essential to the health of humans and other animals, and it is one of the five basic taste sensations.[32] Salt is used in many cuisines around the world, and it is often found in salt shakers on diners' eating tables for their personal use on food. Salt is also an ingredient in many manufactured foodstuffs. Table salt is a refined salt containing about 97 to 99 percent sodium chloride.[33][34][35] Usually, anticaking agents such as sodium aluminosilicate or magnesium carbonate are added to make it free-flowing. Iodized salt, containing potassium iodide, is widely available. Some people put a desiccant, such as a few grains of uncooked rice[36] or a saltine cracker, in their salt shakers to absorb extra moisture and help break up salt clumps that may otherwise form.[37]
34
+
35
+ Some table salt sold for consumption contains additives which address a variety of health concerns, especially in the developing world. The identities and amounts of additives vary widely from country to country. Iodine is an important micronutrient for humans, and a deficiency of the element can cause lowered production of thyroxine (hypothyroidism) and enlargement of the thyroid gland (endemic goitre) in adults or cretinism in children.[38] Iodized salt has been used to correct these conditions since 1924[39] and consists of table salt mixed with a minute amount of potassium iodide, sodium iodide or sodium iodate. A small amount of dextrose may also be added to stabilize the iodine.[40] Iodine deficiency affects about two billion people around the world and is the leading preventable cause of mental retardation.[41] Iodized table salt has significantly reduced disorders of iodine deficiency in countries where it is used.[42]
36
+
37
+ The amount of iodine and the specific iodine compound added to salt varies from country to country. In the United States, the Food and Drug Administration (FDA) recommends [21 CFR 101.9 (c)(8)(iv)] 150 micrograms of iodine per day for both men and women. US iodized salt contains 46–77 ppm (parts per million), whereas in the UK the iodine content of iodized salt is recommended to be 10–22 ppm.[43]
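Since ppm here means micrograms of iodine per gram of salt, the daily iodine delivered by iodized salt is straightforward to estimate. A minimal sketch; the 6 g daily salt intake is an illustrative assumption, while the ppm ranges come from the paragraph above:

```python
# Iodine delivered by iodized salt: 1 ppm = 1 microgram of iodine per gram.
PPM_RANGES = {"US": (46, 77), "UK": (10, 22)}  # from the text
daily_salt_g = 6  # hypothetical daily salt intake

for country, (low, high) in PPM_RANGES.items():
    print(f"{country}: {low * daily_salt_g}-{high * daily_salt_g} ug iodine/day")
# US: 276-462 ug/day; UK: 60-132 ug/day -- versus the FDA's 150 ug/day figure.
```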
38
+
39
+ Sodium ferrocyanide, also known as yellow prussiate of soda, is sometimes added to salt as an anticaking agent. The additive is considered safe for human consumption.[44][45] Such anticaking agents have been added since at least 1911 when magnesium carbonate was first added to salt to make it flow more freely.[46] The safety of sodium ferrocyanide as a food additive was found to be provisionally acceptable by the Committee on Toxicity in 1988.[44] Other anticaking agents sometimes used include tricalcium phosphate, calcium or magnesium carbonates, fatty acid salts (acid salts), magnesium oxide, silicon dioxide, calcium silicate, sodium aluminosilicate and calcium aluminosilicate. Both the European Union and the United States Food and Drug Administration permitted the use of aluminium in the latter two compounds.[47]
40
+
41
+ In "doubly fortified salt", both iodide and iron salts are added. The latter alleviates iron deficiency anaemia, which interferes with the mental development of an estimated 40% of infants in the developing world. A typical iron source is ferrous fumarate.[48] Another additive, especially important for pregnant women, is folic acid (vitamin B9), which gives the table salt a yellow color. Folic acid helps prevent neural tube defects and anaemia, which affect young mothers, especially in developing countries.[48]
42
+
43
+ A lack of fluorine in the diet is the cause of a greatly increased incidence of dental caries.[49] Fluoride salts can be added to table salt with the goal of reducing tooth decay, especially in countries that have not benefited from fluoridated toothpastes and fluoridated water. The practice is more common in some European countries where water fluoridation is not carried out. In France, 35% of the table salt sold contains added sodium fluoride.[48]
44
+
45
+ Unrefined sea salt contains small amounts of magnesium and calcium halides and sulfates, traces of algal products, salt-resistant bacteria and sediment particles. The calcium and magnesium salts confer a faintly bitter overtone, and they make unrefined sea salt hygroscopic (i.e., it gradually absorbs moisture from air if stored uncovered). Algal products contribute a mildly "fishy" or "sea-air" odour, the latter from organobromine compounds. Sediments, the proportion of which varies with the source, give the salt a dull grey appearance. Since taste and aroma compounds are often detectable by humans in minute concentrations, sea salt may have a more complex flavor than pure sodium chloride when sprinkled on top of food. When salt is added during cooking, however, these flavors would likely be overwhelmed by those of the food ingredients.[50] The refined salt industry cites scientific studies saying that raw sea and rock salts do not contain enough iodine salts to prevent iodine deficiency diseases.[51]
46
+
47
+ Himalayan salt is known for its distinct pink hue. It is used in cooking as a substitute for table salt, and also as cookware, in salt lamps and in spas. It is mined from the Salt Range mountains in Pakistan.
48
+
49
+ Different natural salts have different mineralities depending on their source, giving each one a unique flavour. Fleur de sel, a natural sea salt from the surface of evaporating brine in salt pans, has a unique flavour varying with the region from which it is produced. In traditional Korean cuisine, so-called "bamboo salt" is prepared by roasting salt[52] in a bamboo container plugged with mud at both ends. This product absorbs minerals from the bamboo and the mud, and has been claimed to increase the anticlastogenic and antimutagenic properties of doenjang (a fermented bean paste).[53]
50
+
51
+ Kosher or kitchen salt has a larger grain size than table salt and is used in cooking. It can be useful for brining, bread or pretzel making and as a scrubbing agent when combined with oil.[54]
52
+
53
+ Pickling salt is made of ultra-fine grains that dissolve quickly when making brine.
54
+
55
+ Salt is present in most foods, but in naturally occurring foodstuffs such as meats, vegetables and fruit, it is present in very small quantities. It is often added to processed foods (such as canned foods and especially salted foods, pickled foods, and snack foods or other convenience foods), where it functions as both a preservative and a flavoring. Dairy salt is used in the preparation of butter and cheese products.[55] As a flavoring, salt enhances the taste of other foods by suppressing their bitterness, making them more palatable and relatively sweeter.[56]
56
+
57
+ Before the advent of electrically powered refrigeration, salting was one of the main methods of food preservation. Thus, herring contains 67 mg sodium per 100 g, while kipper, its preserved form, contains 990 mg. Similarly, pork typically contains 63 mg while bacon contains 1,480 mg, and potatoes contain 7 mg but potato crisps 800 mg per 100 g.[11] Salt is also used in cooking, such as with salt crusts. The main sources of salt in the Western diet, apart from direct use of sodium chloride, are bread and cereal products, meat products and milk and dairy products.[11]
58
+
59
+ In many East Asian cultures, salt is not traditionally used as a condiment.[57] In its place, condiments such as soy sauce, fish sauce and oyster sauce tend to have a high sodium content and fill a similar role to table salt in western cultures. They are most often used for cooking rather than as table condiments.[58]
60
+
61
+ Table salt is made up of just under 40% sodium by weight, so a 6 g serving (1 teaspoon) contains about 2,400 mg of sodium.[59] Sodium serves a vital purpose in the human body: via its role as an electrolyte, it helps nerves and muscles to function correctly, and it is one factor involved in the osmotic regulation of water content in body organs (fluid balance).[60] Most of the sodium in the Western diet comes from salt.[3] The habitual salt intake in many Western countries is about 10 g per day, and it is higher than that in many countries in Eastern Europe and Asia.[61] The high level of sodium in many processed foods has a major impact on the total amount consumed.[62] In the United States, 75% of the sodium eaten comes from processed and restaurant foods, 11% from cooking and table use and the rest from what is found naturally in foodstuffs.[63]
62
+
63
+ Because consuming too much sodium increases risk of cardiovascular diseases,[3] health organizations generally recommend that people reduce their dietary intake of salt.[3][64][65][66] High sodium intake is associated with a greater risk of stroke, total cardiovascular disease and kidney disease.[2][61] A reduction in sodium intake by 1,000 mg per day may reduce cardiovascular disease by about 30 percent.[1][3] In adults and children with no acute illness, a decrease in the intake of sodium from the typical high levels reduces blood pressure.[65][67] A low sodium diet results in a greater improvement in blood pressure in people with hypertension.[68][69]
64
+
65
+ The World Health Organization recommends that adults consume less than 2,000 mg of sodium (the amount contained in 5 g of salt) per day.[64] United States dietary guidelines recommend that people with hypertension, African Americans, and middle-aged and older adults limit consumption to no more than 1,500 mg of sodium per day and meet the potassium recommendation of 4,700 mg/day with a healthy diet of fruits and vegetables.[3][70]
66
+
67
+ While reduction of sodium intake to less than 2,300 mg per day is recommended in developed countries,[3] one review recommended that sodium intake be reduced to 1,200 mg (contained in 3 g of salt) per day or less, since the greater the reduction in salt intake, the greater the fall in systolic blood pressure across all age groups and ethnicities.[65] Another review indicated that there is inconsistent/insufficient evidence to conclude that reducing sodium intake to lower than 2,300 mg per day is either beneficial or harmful.[71]
68
+
69
+ More recent evidence suggests a more complicated relationship between salt and cardiovascular disease. According to a systematic review of multiple large studies, "the association between sodium consumption and cardiovascular disease or mortality is U-shaped, with increased risk at both high and low sodium intake".[72] The findings showed that increased mortality from excessive salt intake was primarily associated with individuals with hypertension. The levels of increased mortality among those with restricted salt intake appeared to be similar regardless of blood pressure. This evidence suggests that while those with hypertension should primarily focus on reducing sodium to recommended levels, all groups should seek to maintain a healthy level of sodium intake of between 4 and 5 grams a day.[72]
70
+
71
+ Eating too much sodium is one of the two most prominent dietary risk factors for disability in the world.[73]
72
+
73
+ Only about 6% of the salt manufactured in the world is used in food. Of the remainder, 12% is used in water conditioning processes, 8% goes for de-icing highways and 6% is used in agriculture. The rest (68%) is used for manufacturing and other industrial processes,[74] and sodium chloride is, by volume, one of the most widely used inorganic raw materials. Its major chemical products are caustic soda and chlorine, which are separated by the electrolysis of a pure brine solution. These are used in the manufacture of PVC, plastics, paper pulp and many other inorganic and organic compounds. Salt is also used as a flux in the production of aluminium. For this purpose, a layer of melted salt floats on top of the molten metal and removes iron and other metal contaminants. It is also used in the manufacture of soaps and glycerine, where it is added to the vat to precipitate out the saponified products. As an emulsifier, salt is used in the manufacture of synthetic rubber, and another use is in the firing of pottery, when salt added to the furnace vaporises before condensing onto the surface of the ceramic material, forming a strong glaze.[75]
74
+
75
+ When drilling through loose materials such as sand or gravel, salt may be added to the drilling fluid to provide a stable "wall" to prevent the hole from collapsing. There are many other processes in which salt is involved. These include its use as a mordant in textile dyeing, to regenerate resins in water softening, for the tanning of hides, the preservation of meat and fish and the canning of meat and vegetables.[75][76][77]
76
+
77
+ Food-grade salt accounts for only a small part of salt production in industrialized countries (7% in Europe),[78] although worldwide, food uses account for 17.5% of total production.[79]
78
+
79
+ In 2018, total world production of salt was 300 million tonnes, the top six producers being China (68 million), the United States (42 million), India (29 million), Germany (13 million), Canada (13 million) and Australia (12 million).[80]
80
+
81
+ The manufacture of salt is one of the oldest chemical industries.[81] A major source of salt is seawater, which has a salinity of approximately 3.5%. This means that there are about 35 grams (1.2 oz) of dissolved salts, predominantly sodium (Na+) and chloride (Cl−) ions, per kilogram (2.2 lbs) of water.[82] The world's oceans are a virtually inexhaustible source of salt, and this abundance of supply means that reserves have not been calculated.[76] The evaporation of seawater is the production method of choice in marine countries with high evaporation and low precipitation rates. Salt evaporation ponds are filled from the ocean and salt crystals can be harvested as the water dries up. Sometimes these ponds have vivid colours, as some species of algae and other micro-organisms thrive in conditions of high salinity.[83]
82
+
83
+ Elsewhere, salt is extracted from the vast sedimentary deposits which have been laid down over the millennia from the evaporation of seas and lakes. These are either mined directly, producing rock salt, or are extracted in solution by pumping water into the deposit. In either case, the salt may be purified by mechanical evaporation of brine. Traditionally, this was done in shallow open pans which were heated to increase the rate of evaporation. More recently, the process is performed in pans under vacuum.[77] The raw salt is refined to purify it and improve its storage and handling characteristics. This usually involves recrystallization during which a brine solution is treated with chemicals that precipitate most impurities (largely magnesium and calcium salts). Multiple stages of evaporation are then used to collect pure sodium chloride crystals, which are kiln-dried.[84] Some salt is produced using the Alberger process, which involves vacuum pan evaporation combined with the seeding of the solution with cubic crystals, and produces a grainy-type flake.[85] The Ayoreo, an indigenous group from the Paraguayan Chaco, obtain their salt from the ash produced by burning the timber of the Indian salt tree (Maytenus vitis-idaea) and other trees.[86]
84
+
85
+ One of the largest salt mining operations in the world is at the Khewra Salt Mine in Pakistan. The mine has nineteen storeys, eleven of which are underground, and 400 km (250 mi) of passages. The salt is dug out by the room and pillar method, where about half the material is left in place to support the upper levels. Extraction of Himalayan salt is expected to last 350 years at the present rate of extraction of around 385,000 tons per annum.[87]
86
+
87
+ Salt has long held an important place in religion and culture. At the time of Brahmanic sacrifices, in Hittite rituals and during festivals held by Semites and Greeks at the time of the new moon, salt was thrown into a fire where it produced crackling noises.[88] The ancient Egyptians, Greeks and Romans invoked their gods with offerings of salt and water and some people think this to be the origin of Holy Water in the Christian faith.[89] In Aztec mythology, Huixtocihuatl was a fertility goddess who presided over salt and salt water.[90]
88
+
89
+ Salt is considered to be a very auspicious substance in Hinduism and is used in particular religious ceremonies like house-warmings and weddings.[91] In Jainism, devotees lay an offering of raw rice with a pinch of salt before a deity to signify their devotion and salt is sprinkled on a person's cremated remains before the ashes are buried.[92] Salt is believed to ward off evil spirits in Mahayana Buddhist tradition, and when returning home from a funeral, a pinch of salt is thrown over the left shoulder as this prevents evil spirits from entering the house.[93] In Shinto, Shio (塩, lit. "salt") is used for ritual purification of locations and people (harae, specifically shubatsu), and small piles of salt are placed in dishes by the entrance of establishments for the two-fold purposes of warding off evil and attracting patrons.[94]
90
+
91
+ In the Hebrew Bible, there are thirty-five verses which mention salt.[95] One of these mentions Lot's wife, who was turned into a pillar of salt when she looked back at the cities of Sodom and Gomorrah (Genesis 19:26) as they were destroyed. When the judge Abimelech destroyed the city of Shechem, he is said to have "sown salt on it," probably as a curse on anyone who would re-inhabit it (Judges 9:45). The Book of Job contains the first mention of salt as a condiment. "Can that which is unsavoury be eaten without salt? or is there any taste in the white of an egg?" (Job 6:6).[95] In the New Testament, six verses mention salt. In the Sermon on the Mount, Jesus referred to his followers as the "salt of the earth". The apostle Paul also encouraged Christians to "let your conversation be always full of grace, seasoned with salt" (Colossians 4:6).[95] Salt is mandatory in the rite of the Tridentine Mass.[96] Salt is used in the third item (which includes an Exorcism) of the Celtic Consecration (cf. Gallican Rite) that is employed in the consecration of a church. Salt may be added to the water "where it is customary" in the Roman Catholic rite of Holy water.[96]
92
+
93
+ In Judaism, it is recommended to have either salty bread or to add salt to the bread if the bread is unsalted when doing Kiddush for Shabbat. It is customary to spread some salt over the bread or to dip the bread in a little salt when passing the bread around the table after the Kiddush.[97] To preserve the covenant between their people and God, Jews dip the Sabbath bread in salt.[89]
94
+
95
+ In Wicca, salt is symbolic of the element Earth. It is also believed to cleanse an area of harmful or negative energies. A dish of salt and a dish of water are almost always present on an altar, and salt is used in a wide variety of rituals and ceremonies.[98]
96
+
en/5343.html.txt ADDED
@@ -0,0 +1,96 @@
1
+
2
+
3
+
4
+
5
+ Salt is a mineral composed primarily of sodium chloride (NaCl), a chemical compound belonging to the larger class of salts; salt in its natural form as a crystalline mineral is known as rock salt or halite. Salt is present in vast quantities in seawater, where it is the main mineral constituent. The open ocean has about 35 grams (1.2 oz) of solids per liter of sea water, a salinity of 3.5%.
6
+
7
+ Salt is essential for life in general, and saltiness is one of the basic human tastes. Salt is one of the oldest and most ubiquitous food seasonings, and salting is an important method of food preservation.
8
+
9
+ Some of the earliest evidence of salt processing dates to around 6,000 BC, when people living in the area of present-day Romania boiled spring water to extract salts; a salt-works in China dates to approximately the same period. Salt was also prized by the ancient Hebrews, the Greeks, the Romans, the Byzantines, the Hittites, Egyptians, and the Indians. Salt became an important article of trade and was transported by boat across the Mediterranean Sea, along specially built salt roads, and across the Sahara on camel caravans. The scarcity and universal need for salt have led nations to go to war over it and use it to raise tax revenues. Salt is used in religious ceremonies and has other cultural and traditional significance.
10
+
11
+ Salt is processed from salt mines, and by the evaporation of seawater (sea salt) and mineral-rich spring water in shallow pools. Its major industrial products are caustic soda and chlorine; salt is used in many industrial processes including the manufacture of polyvinyl chloride, plastics, paper pulp and many other products. Of the annual global production of around two hundred million tonnes of salt, about 6% is used for human consumption. Other uses include water conditioning processes, de-icing highways, and agricultural use. Edible salt is sold in forms such as sea salt and table salt, which usually contains an anti-caking agent and may be iodised to prevent iodine deficiency. As well as its use in cooking and at the table, salt is present in many processed foods.
12
+
13
+ Sodium is an essential nutrient for human health via its role as an electrolyte and osmotic solute.[1][2][3] Excessive salt consumption may increase the risk of cardiovascular diseases, such as hypertension, in children and adults. Such health effects of salt have long been studied. Accordingly, numerous world health associations and experts in developed countries recommend reducing consumption of popular salty foods.[3][4] The World Health Organization recommends that adults should consume less than 2,000 mg of sodium, equivalent to 5 grams of salt per day.[5]
14
+
15
+ All through history, the availability of salt has been pivotal to civilization. What is now thought to have been the first city in Europe is Solnitsata, in Bulgaria, which was a salt mine, providing the area now known as the Balkans with salt since 5400 BC.[6] Even the name Solnitsata means "salt works".
16
+
17
+ While people have used canning and artificial refrigeration to preserve food for the last hundred years or so, salt has been the best-known food preservative, especially for meat, for many thousands of years.[7] A very ancient salt-works operation has been discovered at the Poiana Slatinei archaeological site next to a salt spring in Lunca, Neamț County, Romania. Evidence indicates that Neolithic people of the Precucuteni Culture were boiling the salt-laden spring water through the process of briquetage to extract the salt as far back as 6050 BC.[8] The salt extracted from this operation may have had a direct correlation to the rapid growth of this society's population soon after its initial production began.[9] The harvest of salt from the surface of Xiechi Lake near Yuncheng in Shanxi, China, dates back to at least 6000 BC, making it one of the oldest verifiable saltworks.[10]
18
+
19
+ There is more salt in animal tissues, such as meat, blood, and milk, than in plant tissues.[11] Nomads who subsist on their flocks and herds do not eat salt with their food, but agriculturalists, feeding mainly on cereals and vegetable matter, need to supplement their diet with salt.[12] With the spread of civilization, salt became one of the world's main trading commodities. It was of high value to the ancient Hebrews, the Greeks, the Romans, the Byzantines, the Hittites and other peoples of antiquity. In the Middle East, salt was used to ceremonially seal an agreement, and the ancient Hebrews made a "covenant of salt" with God and sprinkled salt on their offerings to show their trust in him.[13][better source needed] An ancient practice in time of war was salting the earth: scattering salt around in a defeated city to prevent plant growth. The Bible tells the story of King Abimelech who was ordered by God to do this at Shechem,[14] and various texts claim that the Roman general Scipio Aemilianus Africanus ploughed over and sowed the city of Carthage with salt after it was defeated in the Third Punic War (146 BC).[15]
20
+
21
+ Salt may have been used for barter in connection with the obsidian trade in Anatolia in the Neolithic Era.[16] Salt was included among funeral offerings found in ancient Egyptian tombs from the third millennium BC, as were salted birds, and salt fish.[17] From about 2800 BC, the Egyptians began exporting salt fish to the Phoenicians in return for Lebanon cedar, glass, and the dye Tyrian purple; the Phoenicians traded Egyptian salted fish and salt from North Africa throughout their Mediterranean trade empire.[18] Herodotus described salt trading routes across Libya back in the 5th century BC. In the early years of the Roman Empire, roads were built for the transportation of salt imported at Ostia to the capital.[19]
22
+
23
+ In Africa, salt was used as currency south of the Sahara, and slabs of rock salt were used as coins in Abyssinia.[12] Moorish merchants in the 6th century traded salt for gold, weight for weight.[dubious – discuss] The Tuareg have traditionally maintained routes across the Sahara especially for the transportation of salt by Azalai (salt caravans). The caravans still cross the desert from southern Niger to Bilma, although much of the trade now takes place by truck. Each camel takes two bales of fodder and two of trade goods northwards and returns laden with salt pillars and dates.[20] In Gabon, before the arrival of Europeans, the coast people carried on a remunerative trade with those of the interior by the medium of sea salt. This was gradually displaced by the salt that Europeans brought in sacks, so that the coast natives lost their previous profits; as of the author's writing in 1958, sea salt was still the currency best appreciated in the interior.[21]
24
+
25
+ Salzburg, Hallstatt, and Hallein lie within 17 km (11 mi) of each other on the river Salzach in central Austria in an area with extensive salt deposits. Salzach literally means "salt river" and Salzburg "salt castle", both taking their names from the German word Salz, meaning salt. Hallstatt was the site of the world's first salt mine.[22] The town gave its name to the Hallstatt culture that began mining for salt in the area in about 800 BC. Around 400 BC, the townsfolk, who had previously used pickaxes and shovels, began open pan salt making. During the first millennium BC, Celtic communities grew rich trading salt and salted meat to Ancient Greece and Ancient Rome in exchange for wine and other luxuries.[7]
26
+
27
+ The word salary comes from the Latin word for salt. The reason for this is unknown; a persistent modern claim that the Roman Legions were sometimes paid in salt is baseless.[23][24][25] The word salad literally means "salted", and comes from the ancient Roman practice of salting leaf vegetables.[26]
28
+
29
+ Wars have been fought over salt. Venice fought and won a war with Genoa over the product, and salt played an important part in the American Revolution. Cities on overland trade routes grew rich by levying duties,[27] and towns like Liverpool flourished on the export of salt extracted from the salt mines of Cheshire.[28] Various governments have at different times imposed salt taxes on their peoples. The voyages of Christopher Columbus are said to have been financed from salt production in southern Spain, and the oppressive salt tax in France was one of the causes of the French Revolution. After being repealed, this tax was reimposed by Napoleon when he became emperor to pay for his foreign wars, and was not finally abolished until 1946.[27] In 1930, Mahatma Gandhi led at least 100,000 people on the "Dandi March" or "Salt Satyagraha", in which protesters made their own salt from the sea, thus defying British rule and avoiding paying the salt tax. This civil disobedience inspired millions of common people and elevated the Indian independence movement from an elitist movement to a national struggle.[29]
30
+
31
+ Salt is mostly sodium chloride, the ionic compound with the formula NaCl, representing equal proportions of sodium and chlorine. Sea salt and freshly mined salt (much of which is sea salt from prehistoric seas) also contain small amounts of trace elements (which in these small amounts are generally good for plant and animal health[citation needed]). Mined salt is often refined in the production of table salt; it is dissolved in water, purified via precipitation of other minerals out of solution, and re-evaporated. During this same refining process it is often also iodized. Salt crystals are translucent and cubic in shape; they normally appear white but impurities may give them a blue or purple tinge. The molar mass of salt is 58.443 g/mol, its melting point is 801 °C (1,474 °F) and its boiling point 1,465 °C (2,669 °F). Its density is 2.17 grams per cubic centimetre and it is readily soluble in water. When dissolved in water it separates into Na+ and Cl− ions, and the solubility is 359 grams per litre.[30] From cold solutions, salt crystallises as the dihydrate NaCl·2H2O. Solutions of sodium chloride have very different properties from those of pure water; the freezing point is −21.12 °C (−6.02 °F) for 23.31 wt% of salt, and the boiling point of saturated salt solution is around 108.7 °C (227.7 °F).[31]
32
+
33
+ Salt is essential to the health of humans and other animals, and it is one of the five basic taste sensations.[32] Salt is used in many cuisines around the world, and it is often found in salt shakers on diners' eating tables for their personal use on food. Salt is also an ingredient in many manufactured foodstuffs. Table salt is a refined salt containing about 97 to 99 percent sodium chloride.[33][34][35] Usually, anticaking agents such as sodium aluminosilicate or magnesium carbonate are added to make it free-flowing. Iodized salt, containing potassium iodide, is widely available. Some people put a desiccant, such as a few grains of uncooked rice[36] or a saltine cracker, in their salt shakers to absorb extra moisture and help break up salt clumps that may otherwise form.[37]
34
+
35
+ Some table salt sold for consumption contains additives which address a variety of health concerns, especially in the developing world. The identities and amounts of additives vary widely from country to country. Iodine is an important micronutrient for humans, and a deficiency of the element can cause lowered production of thyroxine (hypothyroidism) and enlargement of the thyroid gland (endemic goitre) in adults or cretinism in children.[38] Iodized salt has been used to correct these conditions since 1924[39] and consists of table salt mixed with a minute amount of potassium iodide, sodium iodide or sodium iodate. A small amount of dextrose may also be added to stabilize the iodine.[40] Iodine deficiency affects about two billion people around the world and is the leading preventable cause of mental retardation.[41] Iodized table salt has significantly reduced disorders of iodine deficiency in countries where it is used.[42]
36
+
37
+ The amount of iodine and the specific iodine compound added to salt varies from country to country. In the United States, the Food and Drug Administration (FDA) recommends [21 CFR 101.9 (c)(8)(iv)] 150 micrograms of iodine per day for both men and women. US iodized salt contains 46–77 ppm (parts per million), whereas in the UK the iodine content of iodized salt is recommended to be 10–22 ppm.[43]
38
+
39
+ Sodium ferrocyanide, also known as yellow prussiate of soda, is sometimes added to salt as an anticaking agent. The additive is considered safe for human consumption.[44][45] Such anticaking agents have been added since at least 1911 when magnesium carbonate was first added to salt to make it flow more freely.[46] The safety of sodium ferrocyanide as a food additive was found to be provisionally acceptable by the Committee on Toxicity in 1988.[44] Other anticaking agents sometimes used include tricalcium phosphate, calcium or magnesium carbonates, fatty acid salts (acid salts), magnesium oxide, silicon dioxide, calcium silicate, sodium aluminosilicate and calcium aluminosilicate. Both the European Union and the United States Food and Drug Administration permitted the use of aluminium in the latter two compounds.[47]
40
+
41
+ In "doubly fortified salt", both iodide and iron salts are added. The latter alleviates iron deficiency anaemia, which interferes with the mental development of an estimated 40% of infants in the developing world. A typical iron source is ferrous fumarate.[48] Another additive, especially important for pregnant women, is folic acid (vitamin B9), which gives the table salt a yellow color. Folic acid helps prevent neural tube defects and anaemia, which affect young mothers, especially in developing countries.[48]
42
+
43
+ A lack of fluorine in the diet is the cause of a greatly increased incidence of dental caries.[49] Fluoride salts can be added to table salt with the goal of reducing tooth decay, especially in countries that have not benefited from fluoridated toothpastes and fluoridated water. The practice is more common in some European countries where water fluoridation is not carried out. In France, 35% of the table salt sold contains added sodium fluoride.[48]
44
+
45
+ Unrefined sea salt contains small amounts of magnesium and calcium halides and sulfates, traces of algal products, salt-resistant bacteria and sediment particles. The calcium and magnesium salts confer a faintly bitter overtone, and they make unrefined sea salt hygroscopic (i.e., it gradually absorbs moisture from air if stored uncovered). Algal products contribute a mildly "fishy" or "sea-air" odour, the latter from organobromine compounds. Sediments, the proportion of which varies with the source, give the salt a dull grey appearance. Since taste and aroma compounds are often detectable by humans in minute concentrations, sea salt may have a more complex flavor than pure sodium chloride when sprinkled on top of food. When salt is added during cooking, however, these flavors would likely be overwhelmed by those of the food ingredients.[50] The refined salt industry cites scientific studies saying that raw sea and rock salts do not contain enough iodine salts to prevent iodine deficiency diseases.[51]
46
+
47
+ Himalayan salt is known for its distinct pink hue. It is used in cooking as a substitute for table salt, and also as cookware, in salt lamps and in spas. It is mined from the Salt Range mountains in Pakistan.
48
+
49
+ Different natural salts have different mineralities depending on their source, giving each one a unique flavour. Fleur de sel, a natural sea salt from the surface of evaporating brine in salt pans, has a unique flavour varying with the region from which it is produced. In traditional Korean cuisine, so-called "bamboo salt" is prepared by roasting salt[52] in a bamboo container plugged with mud at both ends. This product absorbs minerals from the bamboo and the mud, and has been claimed to increase the anticlastogenic and antimutagenic properties of doenjang (a fermented bean paste).[53]
50
+
51
+ Kosher or kitchen salt has a larger grain size than table salt and is used in cooking. It can be useful for brining, bread or pretzel making and as a scrubbing agent when combined with oil.[54]
52
+
53
+ Pickling salt is made of ultra-fine grains that dissolve quickly when making brine.
54
+
55
+ Salt is present in most foods, but in naturally occurring foodstuffs such as meats, vegetables and fruit, it is present in very small quantities. It is often added to processed foods (such as canned foods and especially salted foods, pickled foods, and snack foods or other convenience foods), where it functions as both a preservative and a flavoring. Dairy salt is used in the preparation of butter and cheese products.[55] As a flavoring, salt enhances the taste of other foods by suppressing their bitterness, making them more palatable and relatively sweeter.[56]
56
+
57
+ Before the advent of electrically powered refrigeration, salting was one of the main methods of food preservation. Thus, herring contains 67 mg sodium per 100 g, while kipper, its preserved form, contains 990 mg. Similarly, pork typically contains 63 mg while bacon contains 1,480 mg, and potatoes contain 7 mg but potato crisps 800 mg per 100 g.[11] Salt is also used in cooking, such as with salt crusts. The main sources of salt in the Western diet, apart from direct use of sodium chloride, are bread and cereal products, meat products and milk and dairy products.[11]
58
+
59
+ In many East Asian cultures, salt is not traditionally used as a condiment.[57] In its place, condiments such as soy sauce, fish sauce and oyster sauce tend to have a high sodium content and fill a similar role to table salt in western cultures. They are most often used for cooking rather than as table condiments.[58]
60
+
61
+ Table salt is made up of just under 40% sodium by weight, so a 6 g serving (1 teaspoon) contains about 2,400 mg of sodium.[59] Sodium serves a vital purpose in the human body: via its role as an electrolyte, it helps nerves and muscles to function correctly, and it is one factor involved in the osmotic regulation of water content in body organs (fluid balance).[60] Most of the sodium in the Western diet comes from salt.[3] The habitual salt intake in many Western countries is about 10 g per day, and it is higher than that in many countries in Eastern Europe and Asia.[61] The high level of sodium in many processed foods has a major impact on the total amount consumed.[62] In the United States, 75% of the sodium eaten comes from processed and restaurant foods, 11% from cooking and table use and the rest from what is found naturally in foodstuffs.[63]
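The "just under 40%" figure follows from the atomic masses of sodium (22.99) and chlorine (35.45), which are not stated above; a quick check of the arithmetic:

$$ \frac{22.99}{22.99 + 35.45} \approx 0.393, \qquad 6\ \mathrm{g} \times 0.393 \approx 2.4\ \mathrm{g} = 2400\ \mathrm{mg}. $$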
62
+
63
+ Because consuming too much sodium increases risk of cardiovascular diseases,[3] health organizations generally recommend that people reduce their dietary intake of salt.[3][64][65][66] High sodium intake is associated with a greater risk of stroke, total cardiovascular disease and kidney disease.[2][61] A reduction in sodium intake by 1,000 mg per day may reduce cardiovascular disease by about 30 percent.[1][3] In adults and children with no acute illness, a decrease in the intake of sodium from the typical high levels reduces blood pressure.[65][67] A low sodium diet results in a greater improvement in blood pressure in people with hypertension.[68][69]
64
+
65
+ The World Health Organization recommends that adults should consume less than 2,000 mg of sodium (which is contained in 5 g of salt) per day.[64] Guidelines by the United States recommend that people with hypertension, African Americans, and middle-aged and older adults should limit consumption to no more than 1,500 mg of sodium per day and meet the potassium recommendation of 4,700 mg/day with a healthy diet of fruits and vegetables.[3][70]
66
+
67
+ While reduction of sodium intake to less than 2,300 mg per day is recommended by developed countries,[3] one review recommended that sodium intake be reduced to at least 1,200 mg (contained in 3 g of salt) per day, since the further salt intake is reduced, the greater the fall in systolic blood pressure across all age groups and ethnicities.[65] Another review indicated that there is inconsistent/insufficient evidence to conclude that reducing sodium intake to lower than 2,300 mg per day is either beneficial or harmful.[71]
68
+
69
+ More recent evidence shows a much more complicated relationship between salt and cardiovascular disease. According to a systematic review of multiple large studies, "the association between sodium consumption and cardiovascular disease or mortality is U-shaped, with increased risk at both high and low sodium intake".[72] The findings showed that increased mortality from excessive salt intake was primarily associated with individuals with hypertension. The levels of increased mortality among those with restricted salt intake appeared to be similar regardless of blood pressure. This evidence shows that while those with hypertension should primarily focus on reducing sodium to recommended levels, all groups should seek to maintain a healthy level of sodium intake of between 4 and 5 grams a day.[72]
70
+
71
+ One of the two most prominent dietary risks for disability in the world is eating too much sodium.[73]
72
+
73
+ Only about 6% of the salt manufactured in the world is used in food. Of the remainder, 12% is used in water conditioning processes, 8% goes for de-icing highways and 6% is used in agriculture. The rest (68%) is used for manufacturing and other industrial processes,[74] and sodium chloride is one of the largest inorganic raw materials used by volume. Its major chemical products are caustic soda and chlorine, which are separated by the electrolysis of a pure brine solution. These are used in the manufacture of PVC, plastics, paper pulp and many other inorganic and organic compounds. Salt is also used as a flux in the production of aluminium. For this purpose, a layer of melted salt floats on top of the molten metal and removes iron and other metal contaminants. It is also used in the manufacture of soaps and glycerine, where it is added to the vat to precipitate out the saponified products. As an emulsifier, salt is used in the manufacture of synthetic rubber, and another use is in the firing of pottery, when salt added to the furnace vaporises before condensing onto the surface of the ceramic material, forming a strong glaze.[75]
74
+
75
+ When drilling through loose materials such as sand or gravel, salt may be added to the drilling fluid to provide a stable "wall" to prevent the hole collapsing. There are many other processes in which salt is involved. These include its use as a mordant in textile dyeing, to regenerate resins in water softening, for the tanning of hides, the preservation of meat and fish and the canning of meat and vegetables.[75][76][77]
76
+
77
+ Food-grade salt accounts for only a small part of salt production in industrialized countries (7% in Europe),[78] although worldwide, food uses account for 17.5% of total production.[79]
78
+
79
+ In 2018, total world production of salt was 300 million tonnes, the top six producers being China (68 million), the United States (42 million), India (29 million), Germany (13 million), Canada (13 million) and Australia (12 million).[80]
80
+
81
+ The manufacture of salt is one of the oldest chemical industries.[81] A major source of salt is seawater, which has a salinity of approximately 3.5%. This means that there are about 35 grams (1.2 oz) of dissolved salts, predominantly sodium (Na+) and chloride (Cl−) ions, per kilogram (2.2 lbs) of water.[82] The world's oceans are a virtually inexhaustible source of salt, and this abundance of supply means that reserves have not been calculated.[76] The evaporation of seawater is the production method of choice in marine countries with high evaporation and low precipitation rates. Salt evaporation ponds are filled from the ocean and salt crystals can be harvested as the water dries up. Sometimes these ponds have vivid colours, as some species of algae and other micro-organisms thrive in conditions of high salinity.[83]
82
+
83
+ Elsewhere, salt is extracted from the vast sedimentary deposits which have been laid down over the millennia from the evaporation of seas and lakes. These are either mined directly, producing rock salt, or are extracted in solution by pumping water into the deposit. In either case, the salt may be purified by mechanical evaporation of brine. Traditionally, this was done in shallow open pans which were heated to increase the rate of evaporation. More recently, the process is performed in pans under vacuum.[77] The raw salt is refined to purify it and improve its storage and handling characteristics. This usually involves recrystallization during which a brine solution is treated with chemicals that precipitate most impurities (largely magnesium and calcium salts). Multiple stages of evaporation are then used to collect pure sodium chloride crystals, which are kiln-dried.[84] Some salt is produced using the Alberger process, which involves vacuum pan evaporation combined with the seeding of the solution with cubic crystals, and produces a grainy-type flake.[85] The Ayoreo, an indigenous group from the Paraguayan Chaco, obtain their salt from the ash produced by burning the timber of the Indian salt tree (Maytenus vitis-idaea) and other trees.[86]
84
+
85
+ One of the largest salt mining operations in the world is at the Khewra Salt Mine in Pakistan. The mine has nineteen storeys, eleven of which are underground, and 400 km (250 mi) of passages. The salt is dug out by the room and pillar method, where about half the material is left in place to support the upper levels. Extraction of Himalayan salt is expected to last 350 years at the present rate of extraction of around 385,000 tons per annum.[87]
86
+
87
+ Salt has long held an important place in religion and culture. At the time of Brahmanic sacrifices, in Hittite rituals and during festivals held by Semites and Greeks at the time of the new moon, salt was thrown into a fire where it produced crackling noises.[88] The ancient Egyptians, Greeks and Romans invoked their gods with offerings of salt and water and some people think this to be the origin of Holy Water in the Christian faith.[89] In Aztec mythology, Huixtocihuatl was a fertility goddess who presided over salt and salt water.[90]
88
+
89
+ Salt is considered to be a very auspicious substance in Hinduism and is used in particular religious ceremonies like house-warmings and weddings.[91] In Jainism, devotees lay an offering of raw rice with a pinch of salt before a deity to signify their devotion and salt is sprinkled on a person's cremated remains before the ashes are buried.[92] Salt is believed to ward off evil spirits in Mahayana Buddhist tradition, and when returning home from a funeral, a pinch of salt is thrown over the left shoulder as this prevents evil spirits from entering the house.[93] In Shinto, Shio (塩, lit. "salt") is used for ritual purification of locations and people (harae, specifically shubatsu), and small piles of salt are placed in dishes by the entrance of establishments for the two-fold purposes of warding off evil and attracting patrons.[94]
90
+
91
+ In the Hebrew Bible, there are thirty-five verses which mention salt.[95] One of these mentions Lot's wife, who was turned into a pillar of salt when she looked back at the cities of Sodom and Gomorrah (Genesis 19:26) as they were destroyed. When the judge Abimelech destroyed the city of Shechem, he is said to have "sown salt on it," probably as a curse on anyone who would re-inhabit it (Judges 9:45). The Book of Job contains the first mention of salt as a condiment. "Can that which is unsavoury be eaten without salt? or is there any taste in the white of an egg?" (Job 6:6).[95] In the New Testament, six verses mention salt. In the Sermon on the Mount, Jesus referred to his followers as the "salt of the earth". The apostle Paul also encouraged Christians to "let your conversation be always full of grace, seasoned with salt" (Colossians 4:6).[95] Salt is mandatory in the rite of the Tridentine Mass.[96] Salt is used in the third item (which includes an Exorcism) of the Celtic Consecration (cf. Gallican Rite) that is employed in the consecration of a church. Salt may be added to the water "where it is customary" in the Roman Catholic rite of Holy water.[96]
92
+
93
+ In Judaism, it is recommended to have either a salty bread or to add salt to the bread if this bread is unsalted when doing Kiddush for Shabbat. It is customary to spread some salt over the bread or to dip the bread in a little salt when passing the bread around the table after the Kiddush.[97] To preserve the covenant between their people and God, Jews dip the Sabbath bread in salt.[89]
94
+
95
+ In Wicca, salt is symbolic of the element Earth. It is also believed to cleanse an area of harmful or negative energies. A dish of salt and a dish of water are almost always present on an altar, and salt is used in a wide variety of rituals and ceremonies.[98]
96
+
en/5344.html.txt ADDED
@@ -0,0 +1,134 @@
1
+
2
+
3
+ A week is a time unit equal to seven days. It is the standard time period used for cycles of rest days in most parts of the world, mostly alongside—although not strictly part of—the Gregorian calendar.
4
+
5
+ In many languages, the days of the week are named after classical planets or gods of a pantheon. In English, the names are Sunday, Monday, Tuesday, Wednesday, Thursday, Friday, and Saturday. Such a week may be called a planetary week.[citation needed] This arrangement is similar to a week in the Bible in which the seven days are simply numbered with the first day being a Christian day of worship and the seventh day being a sabbath day. The traditional Biblical sabbath is aligned with Saturday.
6
+
7
+ While, for example, the United States, Canada, Brazil, Japan and other countries consider Sunday as the first day of the week, and while the week begins with Saturday in much of the Middle East, the international ISO 8601 standard[a] has Monday as the first day of the week. The ISO standard includes the ISO week date system, a numbering system for weeks within a given year, where each week starting on a Monday is associated with the year that contains that week's Thursday (so that if a year starts in a long weekend Friday–Sunday, week number one of the year will start after that). ISO 8601 assigns numbers to the days of the week, running from 1 to 7 for Monday through to Sunday.
8
+
9
+ The term "week" is sometimes expanded to refer to other time units comprising a few days, such as the nundinal cycle of the ancient Roman calendar, the "work week", or "school week" referring only to the days spent on those activities.
10
+
11
+
12
+
13
+ The English word week comes from the Old English wice, ultimately from a Common Germanic *wikōn-, from a root *wik- "turn, move, change". The Germanic word probably had a wider meaning prior to the adoption of the Roman calendar, perhaps "succession series", as suggested by Gothic wikō translating taxis "order" in Luke 1:8.
14
+
15
+ The seven-day week is named in many languages by a word derived from "seven". The archaism sennight ("seven-night") preserves the old Germanic practice of reckoning time by nights, as in the more common fortnight ("fourteen-night").[1] Hebdomad and hebdomadal week both derive from the Greek hebdomás (ἑβδομάς, "a seven"). Septimana is cognate with the Romance terms derived from Latin septimana ("a seven").
16
+
17
+ Slavic has a formation *tъ(žь)dьnь (Serbian тједан, Croatian tjedan, Ukrainian тиждень, Czech týden, Polish tydzień), from *tъ "this" + *dьnь "day". Chinese has 星期, as it were "planetary time unit".
18
+
19
+ A week is defined as an interval of exactly seven days,[b] so that, except at daylight saving time transitions or leap seconds, a week comprises exactly 168 hours, 10,080 minutes, or 604,800 seconds.
20
+
21
+ With respect to the Gregorian calendar, a common year contains 52 weeks and one day, and a leap year contains 52 weeks and two days.
22
+
23
+ In a Gregorian mean year, there are 365.2425 days, and thus exactly 52 71/400 or 52.1775 weeks (unlike the Julian year of 365.25 days, or 52 5/28 ≈ 52.1786 weeks, which cannot be represented by a finite decimal expansion). There are exactly 20,871 weeks in 400 Gregorian years, so 27 July 1620 was a Monday, just as 27 July 2020 was.
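This 400-year repetition is easy to verify with Python's standard library, whose date type uses the proleptic Gregorian calendar (a minimal sketch, not part of the original text):

```python
from datetime import date

# A 400-year Gregorian cycle contains 97 leap years and 303 common years:
days = 303 * 365 + 97 * 366
assert days == 146097 and days % 7 == 0   # 146,097 days = exactly 20,871 weeks
assert days // 7 == 20871

# Weekdays therefore repeat every 400 years: 27 July 1620 and 27 July 2020
# both fall on a Monday (weekday() == 0).
assert date(1620, 7, 27).weekday() == date(2020, 7, 27).weekday() == 0
```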
24
+
25
+ Relative to the path of the Moon, a week is 23.659% of an average lunation or 94.637% of an average quarter lunation.
26
+
27
+ Historically, the system of dominical letters (letters A to G identifying the weekday of the first day of a given year) has been used to facilitate calculation of the day of week.
28
+ The day of the week can be easily calculated given a date's Julian day number (JD, i.e. the integer value at noon UT):
29
+ Adding one to the remainder after dividing the Julian day number by seven (JD modulo 7 + 1) yields that date's ISO 8601 day of the week. For example, the Julian day number of 27 July 2020 is 2459058. Calculating 2459058 mod 7 + 1 yields 1, corresponding to Monday.[2]
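This rule is straightforward to express in code. In the sketch below, the conversion from a Gregorian date to a Julian day number uses the well-known Fliegel–Van Flandern formula; the function names are chosen for the example and are not from the original text:

```python
def jdn_from_gregorian(year: int, month: int, day: int) -> int:
    # Fliegel–Van Flandern formula: Gregorian date -> Julian day number.
    a = (14 - month) // 12
    y = year + 4800 - a
    m = month + 12 * a - 3
    return day + (153 * m + 2) // 5 + 365 * y + y // 4 - y // 100 + y // 400 - 32045

def iso_weekday(jdn: int) -> int:
    # ISO 8601 day of the week: 1 = Monday, ..., 7 = Sunday.
    return jdn % 7 + 1

jdn = jdn_from_gregorian(2020, 7, 27)
print(jdn, iso_weekday(jdn))  # 2459058 1 -> Monday, matching the example above
```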
30
+
31
+ The days of the week were originally named for the classical planets. This naming system persisted alongside an "ecclesiastical" tradition of numbering the days, in ecclesiastical Latin beginning with dominica (the Lord's Day) as the first day. The Greco-Roman gods associated with the classical planets were rendered in their interpretatio germanica at some point during the late Roman Empire, yielding the Germanic tradition of names based on indigenous deities.
32
+
33
+ The ordering of the weekday names is not the classical order of the planets (by distance in the planetary spheres model, nor, equivalently, by their apparent speed of movement in the night sky). Instead, the planetary hours system resulted in succeeding days being named for planets that are three places apart in their traditional listing. This characteristic was apparently discussed by Plutarch in a treatise written in c. AD 100, which is reported to have addressed the question of why the days named after the planets are reckoned in a different order from the actual order (the text of Plutarch's treatise has been lost).[3]
34
+
35
+ An ecclesiastical, non-astrological, system of numbering the days of the week was adopted in Late Antiquity. This model also seems to have influenced (presumably via Gothic) the designation of Wednesday as "mid-week" in Old High German (mittawehha) and Old Church Slavonic (срѣда). Old Church Slavonic may have also modeled the name of Monday, понєдѣльникъ, after the Latin feria secunda.[5] The ecclesiastical system became prevalent in Eastern Christianity, but in the Latin West it remains extant only in modern Icelandic, Galician, and Portuguese.[6]
36
+
37
+ A continuous seven-day cycle that runs throughout history without reference to the phases of the moon was first practised in Judaism, dated to the 6th century BC at the latest.[8][9]
38
+
39
+ There are several hypotheses concerning the origin of the biblical seven-day cycle.
40
+
41
+ Friedrich Delitzsch and others suggested that the seven-day week being approximately a quarter of a lunation is the implicit astronomical origin of the seven-day week,[10] and indeed the Babylonian calendar used intercalary days to synchronize the last week of a month with the new moon.[11] According to this theory, the Jewish week was adopted from the Babylonians while removing the moon-dependency.
42
+
43
+ However, Niels-Erik Andreasen, Jeffrey H. Tigay, and others claimed that the Biblical Sabbath is mentioned as a day of rest in some of the earliest layers of the Pentateuch dated to the 9th century BC at the latest, centuries before Judea's Babylonian exile. They also find the resemblance between the Biblical Sabbath and the Babylonian system to be weak. Therefore, they suggested that the seven-day week may reflect an independent Israelite tradition.[12][13][14][15] Tigay writes:
44
+
45
+ It is clear that among neighboring nations that were in position to have an influence over Israel – and in fact which did influence it in various matters – there is no precise parallel to the Israelite Sabbatical week. This leads to the conclusion that the Sabbatical week, which is as unique to Israel as the Sabbath from which it flows, is an independent Israelite creation.[14][16]
46
+
47
+ The seven-day week seems to have been adopted, at different stages, by the Persian Empire, in Hellenistic astrology, and (via Greek transmission) in Gupta India and Tang China.[c][citation needed]
48
+ The Babylonian system was received by the Greeks in the 4th century BC (notably via Eudoxus of Cnidus). However, the designation of the seven days of the week to the seven planets is an innovation introduced in the time of Augustus.[18] The astrological concept of planetary hours is rather an original innovation of Hellenistic astrology, probably first conceived in the 2nd century BC.[19]
49
+
50
+ The seven-day week was widely known throughout the Roman Empire by the 1st century AD,[18] along with references to the Jewish Sabbath by Roman authors such as Seneca and Ovid.[20] When the seven-day week came into use in Rome during the early imperial period, it did not immediately replace the older eight-day nundinal system.[21] The nundinal system had probably fallen out of use by the time Emperor Constantine adopted the seven-day week for official use in AD 321, making the Day of the Sun (dies Solis) a legal holiday.[22]
51
+
52
+ The earliest evidence of an astrological significance of a seven-day period is connected to Gudea, priest-king of Lagash in Sumer during the Gutian dynasty, who built a seven-room temple, which he dedicated with a seven-day festival. In the flood story of the Assyro-Babylonian epic of Gilgamesh, the storm lasts for seven days, the dove is sent out after seven days, and the Noah-like character of Utnapishtim leaves the ark seven days after it reaches firm ground.[d]
53
+
54
+ It is possible that the Hebrew seven-day week is based on the Babylonian tradition, although going through certain adaptations.[contradictory] George Aaron Barton speculated that the seven-day creation account of Genesis is connected to the Babylonian creation epic, Enûma Eliš, which is recorded on seven tablets.[26]
55
+
56
+ Counting from the new moon, the Babylonians celebrated the 7th, 14th, 21st and 28th as "holy-days", also called "evil days" (meaning "unsuitable" for prohibited activities). On these days, officials were prohibited from various activities and common men were forbidden to "make a wish", and at least the 28th was known as a "rest-day".[27]
57
+ On each of them, offerings were made to a different god and goddess.
58
+
59
+ In a frequently-quoted suggestion going back to the early 20th century,[28] the Hebrew Sabbath is compared to the Sumerian sa-bat "mid-rest", a term for the full moon. The Sumerian term has been reconstructed as Sapattum or Sabattum in Babylonian, possibly present in the lost fifth tablet of the Enûma Eliš, tentatively reconstructed[according to whom?] "[Sa]bbath shalt thou then encounter, mid[month]ly".[27]
60
+
61
+ The Zoroastrian calendar follows the Babylonian in relating the 7th, 14th, 21st, and 28th of the month to Ahura Mazda.[29]
62
+ The forerunner of all modern Zoroastrian calendars is the system used to determine dates in the Persian Empire, adopted from the Babylonian calendar by the 4th century BC.
63
+
64
+ Frank C. Senn in his book Christian Liturgy: Catholic and Evangelical points to data suggesting evidence of an early continuous use of a seven-day week, referring to the Jews during the Babylonian captivity in the 6th century BC,[9] after the destruction of the Temple of Solomon.
65
+ While the seven-day week in Judaism is tied to the Creation account in the Book of Genesis in the Hebrew Bible (where God creates the heavens and the earth in six days and rests on the seventh; Genesis 1:1–2:3) and to the fourth of the Ten Commandments in the Book of Exodus (to rest on the seventh day, Shabbat, which can be seen as implying a socially instituted seven-day week), it is not clear whether the Genesis narrative predates the Babylonian captivity of the Jews in the 6th century BC. At least since the Second Temple period under Persian rule, Judaism relied on the seven-day cycle of recurring Sabbaths.[30]
66
+
67
+ Tablets[citation needed] from the Achaemenid period indicate that the lunation of 29 or 30 days basically contained three seven-day weeks, and a final week of eight or nine days inclusive, breaking the continuous seven-day cycle.[27]
68
+ The Babylonians additionally celebrated the 19th as a special "evil day", the "day of anger", because it was roughly the 49th day of the (preceding) month, completing a "week of weeks", also with sacrifices and prohibitions.[27]
69
+
70
+ Difficulties with Friedrich Delitzsch's origin theory connecting Hebrew Shabbat with the Babylonian lunar cycle[31] include reconciling the differences between an unbroken week and a lunar week, and explaining the absence of texts naming the lunar week as Shabbat in any language.[32]
71
+
72
+ In Jewish sources by the time of the Septuagint, the term "Sabbath" (Greek Sabbaton) by synecdoche also came to refer to an entire seven-day week,[33] the interval between two weekly Sabbaths.
73
+ Jesus's parable of the Pharisee and the Publican (Luke 18:12) describes the Pharisee as fasting "twice in the week" (Greek δὶς τοῦ σαββάτου dis tou sabbatou).
74
+
75
+ The ancient Romans traditionally used the eight-day nundinum but, after the Julian calendar had come into effect in 45 BC, the seven-day week came into increasing use. For a while, the week and the nundinal cycle coexisted, but by the time the week was officially adopted by Constantine in AD 321, the nundinal cycle had fallen out of use. The association of the days of the week with the Sun, the Moon and the five planets visible to the naked eye dates to the Roman era (2nd century).[34][30]
76
+
77
+ The continuous seven-day cycle of the days of the week can be traced back to the reign of Augustus; the first identifiable date cited complete with day of the week is 6 February AD 60, identified as a "Sunday" (as viii idus Februarius dies solis "eighth day before the ides of February, day of the Sun") in a Pompeiian graffito. According to the (contemporary) Julian calendar, 6 February 60 was, however, a Wednesday. This is explained by the existence of two conventions of naming days of the weeks based on the planetary hours system: 6 February was a "Sunday" based on the sunset naming convention, and a "Wednesday" based on the sunrise naming convention.[35]
78
+
79
+ The earliest known reference in Chinese writings to a seven-day week is attributed to Fan Ning, who lived in the late 4th century in the Jin Dynasty, while diffusions from the Manichaeans are documented with the writings of the Chinese Buddhist monk Yi Jing and the Ceylonese or Central Asian Buddhist monk Bu Kong of the 7th century (Tang Dynasty).
80
+
81
+ The Chinese variant of the planetary system was brought to Japan by the Japanese monk Kūkai (9th century). Surviving diaries of the Japanese statesman Fujiwara Michinaga show the seven-day system in use in Heian Period Japan as early as 1007. In Japan, the seven-day system was kept in use for astrological purposes until its promotion to a full-fledged Western-style calendrical basis during the Meiji Period.
82
+
83
+ The seven-day week was known in India by the 6th century, referenced in the Pañcasiddhāntikā.[citation needed] Shashi (2000) mentions the Garga Samhita, which he places in the 1st century BC or AD, as a possible earlier reference to a seven-day week in India. He concludes "the above references furnish a terminus ad quem (viz. 1st century). The terminus a quo cannot be stated with certainty".[36][37]
84
+
85
+ In Arabia, a similar seven-day week system was adopted, which may have been influenced by the Hebrew week (via Christianity).[citation needed]
86
+
87
+ The seven-day weekly cycle has remained unbroken in Christendom, and hence in Western history, for almost two millennia, despite changes to the Coptic, Julian, and Gregorian calendars, demonstrated by the date of Easter Sunday having been traced back through numerous computistic tables to an Ethiopic copy of an early Alexandrian table beginning with the Easter of AD 311.[38][39]
88
+
89
+ A tradition of divinations arranged for the days of the week on which certain feast days occur develops in the Early Medieval period. There are many later variants of this, including the German Bauern-Praktik and the versions of Erra Pater published in 16th to 17th century England, mocked in Samuel Butler's Hudibras. South and East Slavic versions are known as koliadniki (from koliada, a loan of Latin calendae), with Bulgarian copies dating from the 13th century, and Serbian versions from the 14th century.[40]
90
+
91
+ Medieval Christian traditions associated with the lucky or unlucky nature of certain days of the week survived into the modern period. This concerns primarily Friday, associated with the crucifixion of Jesus. Sunday, sometimes personified as Saint Anastasia, was itself an object of worship in Russia, a practice denounced in a sermon extant in copies going back to the 14th century.[41]
92
+
93
+ Sunday, in the ecclesiastical numbering system, is also counted as the feria prima or the first day of the week; yet, at the same time, it figures as the "eighth day", and has occasionally been so called in Christian liturgy.
94
+ [e]
95
+
96
+ Justin Martyr wrote: "the first day after the Sabbath, remaining the first of all the days, is called, however, the eighth, according to the number of all the days of the cycle, and [yet] remains the first."[42]
97
+
98
+ A period of eight days, usually (but not always, mainly because of Christmas Day) starting and ending on a Sunday, is called an octave, particularly in Roman Catholic liturgy. In German, the phrase heute in acht Tagen (literally "today in eight days") means one week from today (i.e. on the same weekday). The same is true of the Italian phrase oggi otto (literally "today eight").
99
+
100
+ Weeks in a Gregorian calendar year can be numbered for each year. This style of numbering is often used in European and Asian countries. It is less common in the U.S. and elsewhere.
101
+
102
+ The system for numbering weeks is the ISO week date system, which is included in ISO 8601. This system dictates that each week begins on a Monday and is associated with the year that contains that week's Thursday.
103
+
104
+ In practice, week 1 (W01 in ISO notation) of any year can be determined as the week containing the first Thursday of the year or, equivalently, the week containing 4 January.
105
+
106
+ Examples:
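Python's standard library implements exactly this ISO 8601 convention, which makes the boundary cases easy to check (a small illustration, not part of the original text):

```python
from datetime import date

# date.isocalendar() returns (ISO year, ISO week number, ISO weekday).
# 1 January 2021 was a Friday, so it belongs to the last week (W53)
# of ISO year 2020 rather than to 2021:
print(tuple(date(2021, 1, 1).isocalendar()))   # (2020, 53, 5)

# 4 January always falls in week 1, because W01 is the week
# containing the first Thursday of the year:
print(tuple(date(2021, 1, 4).isocalendar()))   # (2021, 1, 1)
```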
107
+
108
+ It is also possible to determine whether the last week of the previous year was week 52 or week 53: the previous year has a week 53 if it began on a Thursday (or on a Wednesday in a leap year); otherwise its last week is week 52.
109
+
110
+ In some countries, though, the numbering system is different from the ISO standard. At least six numberings are in use:[43][44][dubious – discuss]
111
+
112
+ The semiconductor package date code is often a 4 digit date code YYWW where the first two digits YY are the last 2 digits of the calendar year and the last two digits WW are the two-digit week number.[45][46]
113
+
114
+ The tire date code mandated by the US DOT is a 4 digit date code WWYY with two digits of the week number WW followed by the last two digits of the calendar year YY.[47]
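A toy decoder illustrates the two layouts; note that neither code carries the century, which is assumed to be the 2000s here, and the function names are invented for the example:

```python
def decode_semiconductor_code(code: str) -> tuple[int, int]:
    # 'YYWW': two year digits first, then the two-digit week number.
    return 2000 + int(code[:2]), int(code[2:])

def decode_tire_dot_code(code: str) -> tuple[int, int]:
    # 'WWYY' (US DOT): two-digit week number first, then two year digits.
    return 2000 + int(code[2:]), int(code[:2])

print(decode_semiconductor_code("2031"))  # (2020, 31): week 31 of 2020
print(decode_tire_dot_code("3120"))       # (2020, 31): week 31 of 2020
```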
115
+
116
+ The term "week" is sometimes expanded to refer to other time units comprising a few days. Such "weeks" of between four and ten days have been used historically in various places.[48] Intervals longer than 10 days are not usually termed "weeks" as they are closer in length to the fortnight or the month than to the seven-day week.
117
+
118
+ Calendars unrelated to the Chaldean, Hellenistic, Christian, or Jewish traditions often have time cycles between the day and the month of varying lengths, sometimes also called "weeks".
119
+
120
+ An eight-day week was used in Ancient Rome and possibly in the pre-Christian Celtic calendar. Traces of a nine-day week are found in Baltic languages and in Welsh. The ancient Chinese calendar had a ten-day week, as did the ancient Egyptian calendar (and, incidentally, the French Republican Calendar, dividing its 30-day months into thirds).
121
+
122
+ A six-day week is found in the Akan Calendar. Several cultures used a five-day week, including the 10th century Icelandic calendar, the Javanese calendar, and the traditional cycle of market days in Korea.[citation needed] The Igbo have a "market week" of four days. Evidence of a "three-day week" has been derived from the names of the days of the week in Guipuscoan Basque.[49]
123
+
124
+ The Aztecs and Mayas used the Mesoamerican calendars. The most important of these calendars divided a ritual cycle of 260 days (known as Tonalpohualli in Nahuatl and Tzolk'in in Yucatec Maya) into 20 weeks of 13 days (known in Spanish as trecenas). They also divided the solar year into 18 periods (winal) of 20 days and five nameless days (wayebʼ), creating a 20-day month divided into four five-day weeks. The end of each five-day week was a market day.[50][51]
125
+
126
+ The Balinese Pawukon is a 210-day calendar consisting of 10 different simultaneously running weeks of 1, 2, 3, 4, 5, 6, 7, 8, 9, and 10 days, of which the weeks of 4, 8, and 9 days are interrupted to fit into the 210-day cycle.
127
+
128
+ A 10-day week, called décade, was used in France for eight and a half years from October 1793 to April 1802; furthermore, the Paris Commune adopted the Revolutionary Calendar for 18 days in 1871.
129
+
130
+ The Bahá'í calendar features a 19-day period which some classify as a month and others classify as a week.[52]
131
+
132
+ The International Fixed Calendar (also known as the "Eastman plan") fixed every date to the same weekday in every year. This plan kept a seven-day week while defining a year of 13 months with 28 days each. It was the official calendar of the Eastman Kodak Company for decades.
133
+
134
+ In the Soviet Union between 1929 and 1940, most factory and enterprise workers, but not collective farm workers, used five- and six-day work weeks while the country as a whole continued to use the traditional seven-day week.[53][54][55] From 1929 to 1951, five national holidays were days of rest (22 January, 1–2 May, 7–8 November). From autumn 1929 to summer 1931, the remaining 360 days of the year were subdivided into 72 five-day work weeks beginning on 1 January. Workers were assigned any one of the five days as their day off, even if their spouse or friends might be assigned a different day off. Peak usage of the five-day work week occurred on 1 October 1930 at 72% of industrial workers. From summer 1931 until 26 June 1940, each Gregorian month was subdivided into five six-day work weeks, more-or-less, beginning with the first day of each month. The sixth day of each six-day work week was a uniform day of rest. On 1 July 1935 74.2% of industrial workers were on non-continuous schedules, mostly six-day work weeks, while 25.8% were still on continuous schedules, mostly five-day work weeks. The Gregorian calendar with its irregular month lengths and the traditional seven-day week were used in the Soviet Union during its entire existence, including 1929–1940; for example, in the masthead of Pravda, the official Communist newspaper, and in both Soviet calendars displayed here. The traditional names of the seven-day week continued to be used, including "Resurrection" (Воскресенье) for Sunday and "Sabbath" (Суббота) for Saturday, despite the government's official atheism.
en/5345.html.txt ADDED
@@ -0,0 +1,204 @@
1
+
2
+
3
+
4
+
5
+ Location of Senegal (dark blue) in the African Union (light blue)
6
+
7
+ Senegal (/ˌsɛnɪˈɡɔːl, -ˈɡɑːl/;[9][10] French: Sénégal; Wolof: Senegaal), officially the Republic of Senegal (French: République du Sénégal; Wolof: Réewum Senegaal), is a country in West Africa. Senegal is bordered by Mauritania in the north, Mali to the east, Guinea to the southeast, and Guinea-Bissau to the southwest. Senegal nearly surrounds The Gambia, a country occupying a narrow sliver of land along the banks of the Gambia River, which separates Senegal's southern region of Casamance from the rest of the country. Senegal also shares a maritime border with Cape Verde. Senegal's economic and political capital is Dakar.
8
+
9
+ It is a unitary presidential republic and is the westernmost country in the mainland of the Old World, or Afro-Eurasia.[11] It owes its name to the Senegal River, which borders it to the east and north. Senegal covers a land area of almost 197,000 square kilometres (76,000 sq mi) and has a population of around 16 million.[2][3] The state was formed as part of the independence of French West Africa from French colonial rule. Because of this history, the official language is French. Like other post-colonial African states, the country includes a wide mix of ethnic and linguistic communities, with the largest being the Wolof people, and the Wolof language acting as a lingua franca.
10
+
11
+ Senegal is classified as a heavily indebted poor country, with a relatively low Human Development Index. Most of the population is on the coast and works in agriculture or other food industries. Other major industries include mining, tourism and services.[12]
12
+
13
+ The climate is typically Sahelian, though there is a rainy season. However, climate change has led to a consistent decline in rainfall and a projected increase in temperatures. Climate change and other environmental concerns are expected to significantly affect the country's economy and population.
14
+
15
+ "Senegal" probably derives from a Portuguese transliteration of the name of the Zenaga, also known as the Sanhaja,[13] or else a combination of the supreme deity in Serer religion (Rog Sene) and o gal meaning body of water in the Serer language. Alternatively, the name could derive from the Wolof phrase "Sunuu Gaal," which means "our boat."
16
+
17
+ Archaeological findings throughout the area indicate that Senegal was inhabited in prehistoric times and has been continuously occupied by various ethnic groups. Several kingdoms were created from around the 7th century onward: Takrur in the 9th century, and Namandiru and the Jolof Empire during the 13th and 14th centuries. Eastern Senegal was once part of the Ghana Empire.
18
+
19
+ Islam was introduced through Toucouleur and Soninke contact with the Almoravid dynasty of the Maghreb, and was propagated with the help of Almoravid and Toucouleur allies. This movement faced resistance from ethnicities of traditional religions, the Serers in particular.[14][15]
20
+
21
+ In the 13th and 14th centuries, the area came under the influence of the empires to the east; the Jolof Empire of Senegal was also founded during this time. In the Senegambia region, between 1300 and 1900, close to one-third of the population was enslaved, typically as a result of being taken captive in warfare.[16]
22
+
23
+ In the 14th century the Jolof Empire grew more powerful, having united Cayor and the kingdoms of Baol, Siné, Saloum, Waalo, Futa Tooro and Bambouk, or much of present-day West Africa. The empire was a voluntary confederacy of various states rather than being built on military conquest.[17][18] It was founded by Ndiadiane Ndiaye, a part Serer[19][20] and part Toucouleur, who was able to form a coalition with many ethnicities; it collapsed around 1549 with the defeat and killing of Lele Fouli Fak by Amari Ngone Sobel Fall.
24
+
25
+ In the mid-15th century, the Portuguese landed on the Senegal coastline, followed by traders representing other countries, including the French.[21] Various European powers — Portugal, the Netherlands, and Great Britain — competed for trade in the area from the 15th century onward.
26
+
27
+ In 1677, France gained control of what had become a minor departure point in the Atlantic slave trade: the island of Gorée next to modern Dakar, used as a base to purchase slaves from the warring chiefdoms on the mainland.[22][23]
28
+
29
+ European missionaries introduced Christianity to Senegal and the Casamance in the 19th century. It was only in the 1850s that the French began to expand onto the Senegalese mainland, after they abolished slavery and began promoting an abolitionist doctrine,[24] annexing native kingdoms such as the Waalo, Cayor, Baol, and the Jolof Empire. French colonists progressively invaded and took over all the kingdoms, except Siné and Saloum, under Governor Louis Faidherbe.[17][25]
30
+
31
+ Yoro Dyao was in command of the canton of Foss-Galodjina and was set over Wâlo (Ouâlo) by Louis Faidherbe,[26] where he served as a chief from 1861 to 1914.[27] Senegalese resistance to the French expansion and curtailing of their lucrative slave trade was led in part by Lat-Dior, Damel of Cayor, and Maad a Sinig Kumba Ndoffene Famak Joof, the Maad a Sinig of Siné, resulting in the Battle of Logandème.
32
+ In 1918, over 300 Senegalese came under Australian command, ahead of the taking of Damascus by Australian forces and before the expected arrival of the famed Lawrence of Arabia. French and British diplomacy in the area was thrown into disarray.
33
+
34
+ On 4 April 1959 Senegal and the French Sudan merged to form the Mali Federation, which became fully independent on 20 June 1960, as a result of a transfer of power agreement signed with France on 4 April 1960. Due to internal political difficulties, the Federation broke up on 20 August, when Senegal and French Sudan (renamed the Republic of Mali) each proclaimed independence.
35
+
36
+ Léopold Sédar Senghor became Senegal's first president in September 1960. Senghor was a very well-read man, educated in France. He was a poet and philosopher who personally drafted the Senegalese national anthem, "Pincez tous vos koras, frappez les balafons". Pro-African, he advocated a brand of African socialism.[28]
37
+
38
+ In 1980, President Senghor decided to retire from politics, and in 1981 he transferred power to his hand-picked successor, Abdou Diouf. Former prime minister Mamadou Dia, who was Senghor's rival, ran for election in 1983 against Diouf, but lost. Senghor moved to France, where he died at the age of 96.
39
+
40
+ In the 1980s, Boubacar Lam discovered Senegalese oral history that was initially compiled by the Tuculor noble, Yoro Dyâo, not long after World War I, which documented migrations into West Africa from the Nile Valley; ethnic groups, from the Senegal River to the Niger Delta, retained traditions of having an eastern origin.[29]
41
+
42
+ Senegal joined with The Gambia to form the nominal Senegambia Confederation on 1 February 1982. However, the union was dissolved in 1989. Despite peace talks, a southern separatist group (Movement of Democratic Forces of Casamance or MFDC) in the Casamance region has clashed sporadically with government forces since 1982 in the Casamance conflict. In the early 21st century, violence has subsided and President Macky Sall held talks with rebels in Rome in December 2012.[30]
43
+
44
+ Abdou Diouf was president between 1981 and 2000. He encouraged broader political participation, reduced government involvement in the economy, and widened Senegal's diplomatic engagements, particularly with other developing nations. Domestic politics on occasion spilled over into street violence, border tensions, and a violent separatist movement in the southern region of the Casamance. Nevertheless, Senegal's commitment to democracy and human rights strengthened. Abdou Diouf served four terms as president.
45
+
46
+ In the presidential election of 2000, opposition leader Abdoulaye Wade defeated Diouf in an election deemed free and fair by international observers. Senegal experienced its second peaceful transition of power, and its first from one political party to another. On 30 December 2004 President Wade announced that he would sign a peace treaty with the separatist group in the Casamance region. This, however, has yet to be implemented. There was a round of talks in 2005, but the results have not yet yielded a resolution.
47
+
48
+ Senegal is a republic with a presidency; the president is elected by adult voters every five years as of 2016, the term having been seven years from independence to 2001, five years from 2001 to 2008, and seven years again from 2008 to 2016. The first president, Léopold Sédar Senghor, was a poet and writer, and was the first African elected to the Académie française. Senegal's second president, Abdou Diouf, later served as general secretary of the Organisation de la Francophonie. The third president was Abdoulaye Wade, a lawyer. The current president is Macky Sall, elected in March 2012 and reelected in February 2019.[31]
49
+
50
+ Senegal has more than 80 political parties. The unicameral parliament consists of the National Assembly, which has 150 seats (a Senate was in place from 1999 to 2001 and 2007 to 2012).[1] An independent judiciary also exists in Senegal. The nation's highest courts that deal with business issues are the constitutional council and the court of justice, members of which are named by the president.
51
+
52
+ Currently, Senegal has a quasi-democratic political culture, one of the more successful post-colonial democratic transitions in Africa. Local administrators are appointed and held accountable by the president. Marabouts, religious leaders of the various Muslim brotherhoods of Senegal, have also exercised a strong political influence in the country especially during Wade's presidency. In 2009, Freedom House downgraded Senegal's status from "Free" to "Partially Free", based on increased centralisation of power in the executive. By 2014, it had recovered its Free status.[32]
53
+
54
+ In 2008, Senegal finished in 12th position on the Ibrahim Index of African Governance.[33] The Ibrahim Index is a comprehensive measure of African governance (limited to sub-Saharan Africa until 2008), based on a number of different variables which reflect the success with which governments deliver essential political goods to their citizens. When the Northern African countries were added to the index in 2009, Senegal's 2008 position was retroactively downgraded to 15th place (with Tunisia, Egypt and Morocco placing themselves ahead of Senegal). As of 2012, Senegal's rank in the Ibrahim Index has decreased another point to 16 out of 52 African countries.
55
+
56
+ On 22 February 2011, Senegal reportedly severed diplomatic ties with Iran, saying it supplied rebels with weapons which killed Senegalese troops in the Casamance conflict.[34]
57
+
58
+ The 2012 presidential election was controversial due to President Wade's candidacy, as the opposition argued he should not be considered eligible to run again. Several youth opposition movements, including M23 and Y'en a Marre, emerged in June 2011. In the end, Macky Sall of the Alliance for the Republic won, and Wade conceded the election to Sall. This peaceful and democratic transition was hailed by many foreign observers, such as the EU[35] as a show of "maturity".
59
+
60
+ On 19 September 2012, lawmakers voted to do away with the Senate to save an estimated $15 million.[36]
61
+
62
+ Senegal is subdivided into 14 regions,[37] each administered by a Conseil Régional (Regional Council) elected by population weight at the Arrondissement level. The country is further subdivided by 45 Départements, 113 Arrondissements (neither of which have administrative function) and by Collectivités Locales, which elect administrative officers.[38]
63
+
64
+ Regional capitals have the same name as their respective regions:
65
+
66
+ Senegal has a high profile in many international organizations and was a member of the UN Security Council in 1988–89 and 2015–2016. It was elected to the UN Commission on Human Rights in 1997. Friendly to the West, especially to France and to the United States, Senegal also is a vigorous proponent of more assistance from developed countries to the Third World.
67
+
68
+ Senegal enjoys mostly cordial relations with its neighbors. In spite of clear progress on other fronts with Mauritania (border security, resource management, economic integration, etc.), an estimated 35,000 Mauritanian refugees (of the estimated 40,000 who were expelled from their home country in 1989) remain in Senegal.[39]
69
+
70
+ Senegal is part of the Economic Community of West African States (ECOWAS). Integrated with the main bodies of the international community, Senegal is also a member of the African Union (AU) and the Community of Sahel-Saharan States.
71
+
72
+ The Armed Forces of Senegal consist of about 17,000 personnel in the army, air force, navy, and gendarmerie. The Senegalese military force receives most of its training, equipment, and support from France and the United States. Germany also provides support but on a smaller scale.
73
+
74
+ Military noninterference in political affairs has contributed to Senegal's stability since independence. Senegal has participated in many international and regional peacekeeping missions. Most recently, in 2000, Senegal sent a battalion to the Democratic Republic of Congo to participate in MONUC, the United Nations peacekeeping mission, and agreed to deploy a United States-trained battalion to Sierra Leone to participate in UNAMSIL, another UN peacekeeping mission.
75
+
76
+ In 2015, Senegal participated in the Saudi Arabian-led military intervention in Yemen against the Shia Houthis.[40]
77
+
78
+ Senegal is a secular state, as defined in its Constitution.[41]
79
+
80
+ To fight corruption, the government has created the National Anti-Corruption Office (OFNAC) and the Commission of Restitution and Recovery of Illegally Acquired Assets. According to Business Anti-Corruption Portal, President Sall created the OFNAC to replace the Commission Nationale de Lutte Contre la non Transparence, la Corruption et la Concussion (CNLCC). It is said that the OFNAC represents a more effective tool for fighting corruption than the CNLCC established under former President Wade.[42] The mission of OFNAC is to fight corruption, embezzlement of public funds and fraud. OFNAC has the power of self-referral (own initiative investigation). OFNAC is composed of twelve members appointed by decree.
81
+
82
+ Homosexuality is illegal in Senegal.[43] According to 2013 survey by the Pew Research Center, 96% of Senegalese believe that homosexuality should not be accepted by society.[44] LGBTQ community members in Senegal report a strong feeling of being unsafe.[45]
83
+
84
+ Senegal is located on the west of the African continent. It lies between latitudes 12° and 17°N, and longitudes 11° and 18°W.
85
+
86
+ Senegal is externally bounded by the Atlantic Ocean to the west, Mauritania to the north, Mali to the east, and Guinea and Guinea-Bissau to the south; internally it almost completely surrounds The Gambia, namely on the north, east and south, except for Gambia's short Atlantic coastline.
87
+
88
+ The Senegalese landscape consists mainly of the rolling sandy plains of the western Sahel which rise to foothills in the southeast. Here is also found Senegal's highest point, an otherwise unnamed feature 2.7 km southeast of Nepen Diakha at 648 m (2,126 ft).[46] The northern border is formed by the Senegal River; other rivers include the Gambia and Casamance Rivers. The capital Dakar lies on the Cap-Vert peninsula, the westernmost point of continental Africa.
89
+
90
+ The Cape Verde islands lie some 560 kilometres (350 mi) off the Senegalese coast, but Cap-Vert ("Cape Green") is a maritime placemark, set at the foot of "Les Mammelles", a 105-metre (344 ft) cliff resting at one end of the Cap-Vert peninsula onto which is settled Senegal's capital Dakar, and 1 kilometre (0.6 mi) south of the "Pointe des Almadies", the westernmost point in Africa.
91
+
92
+ Senegal has a tropical climate with pleasant heat throughout the year with well-defined dry and humid seasons that result from northeast winter winds and southwest summer winds. The dry season (December to April) is dominated by hot, dry, harmattan wind.[1]
93
+ Dakar's annual rainfall of about 600 mm (24 in) occurs between June and October when maximum temperatures average 30 °C (86.0 °F) and minimums 24.2 °C (75.6 °F); December to February maximum temperatures average 25.7 °C (78.3 °F) and minimums 18 °C (64.4 °F).[47]
94
+
95
+ Interior temperatures are higher than along the coast (for example, average daily temperatures in Kaolack and Tambacounda for May are 30 °C (86.0 °F) and 32.7 °C (90.9 °F) respectively, compared to Dakar's 23.2 °C (73.8 °F)),[48] and rainfall increases substantially farther south, exceeding 1,500 mm (59.1 in) annually in some areas.
96
+
97
+ In Tambacounda in the far interior, particularly on the border of Mali where desert begins, temperatures can reach as high as 54 °C (129.2 °F). The northernmost part of the country has a near hot desert climate, the central part has a hot semi-arid climate and the southernmost part has a tropical wet and dry climate. Senegal is mainly a sunny and dry country.
98
+
99
+ Climate change in Senegal will have wide-reaching impacts on the country. Senegal is not a major contributor to global greenhouse gas emissions, emitting only six-tenths of a ton of CO2 per capita and ranking 150th among countries by emissions.[50]
100
+
101
+ Predominantly rural, and with limited natural resources, the economy of Senegal gains most of its foreign exchange from fish, phosphates, groundnuts, tourism, and services. As one of the dominant parts of the economy, the agricultural sector of Senegal is highly vulnerable to environmental conditions, such as variations in rainfall and climate change, and to changes in world commodity prices.
102
+
103
+ Dakar, the former capital of French West Africa, is also home to banks and other institutions which serve all of Francophone West Africa, and is a hub for shipping and transport in the region.
104
+
105
+ The main industries include food processing, mining, cement, artificial fertilizer, chemicals, textiles, refining imported petroleum, and tourism. Exports include fish, chemicals, cotton, fabrics, groundnuts, and calcium phosphate. The principal foreign market is India with 26.7% of exports (as of 1998). Other foreign markets include the United States, Italy and the United Kingdom.
106
+
107
+ As a member of the West African Economic and Monetary Union (WAEMU), Senegal is working toward greater regional integration with a unified external tariff. Senegal is also a member of the Organization for the Harmonization of Business Law in Africa.[51]
108
+
109
+ Senegal achieved full Internet connectivity in 1996, creating a mini-boom in information technology-based services. Private activity now accounts for 82 percent of its GDP. On the negative side, Senegal faces deep-seated urban problems of chronic high unemployment, socioeconomic disparity, juvenile delinquency, and drug addiction.[52]
110
+
111
+ Senegal is a major recipient of international development assistance. Donors include the United States Agency for International Development (USAID), Japan, France and China. Over 3,000 Peace Corps Volunteers have served in Senegal since 1963.[53]
112
+
113
+ Agriculture is one of the dominant parts of Senegal's economy. Even though Senegal lies within the drought-prone Sahel region, only about 5 percent of the land is irrigated, and Senegal thus continues to rely on rain-fed agriculture. Agriculture occupies about 75 percent of the workforce. Despite a relatively wide variety of agricultural production, the majority of farmers produce for subsistence needs. Millet, rice, corn, and sorghum are the primary food crops grown in Senegal. Production is subject to drought and threats of pests such as locusts, birds, fruit flies, and white flies.[54] Moreover, the effects of climate change in Senegal are expected to severely harm the agricultural economy due to extreme weather such as drought, as well as increased temperatures.[55]
114
+
115
+ Senegal is a net food importer, particularly for rice, which represents almost 75 percent of cereal imports. Peanuts, sugarcane, and cotton are important cash crops, and a wide variety of fruits and vegetables are grown for local and export markets. In 2006 gum arabic exports soared to $280 million, making it by far the leading agricultural export. Green beans, industrial tomato, cherry tomato, melon, and mango are Senegal's main vegetable cash crops. The Casamance region, isolated from the rest of Senegal by Gambia, is an important agriculture producing area, but without the infrastructure or transportation links to improve its capacity.[54]
116
+
117
+ Senegal has a 12-nautical-mile (22 km; 14 mi) exclusive fishing zone that has been regularly breached in recent years (as of 2014[update]). It has been estimated that the country's fishermen lose 300,000 tonnes of fish each year to illegal fishing. The Senegalese government has tried to control this illegal fishing, which is conducted by fishing trawlers, some of which are registered in Russia, Mauritania, Belize and Ukraine. In January 2014, a Russian trawler, Oleg Naydenov, was seized by Senegalese authorities close to the maritime border with Guinea-Bissau.[56]
118
+
119
+ As of April 2020[update], the energy sector in Senegal has an installed capacity of 864 megawatts (MW).[57] Energy is produced by private operators and sold to the Senelec energy corporation. According to a 2020 report by the International Energy Agency, nearly 70% of Senegal was connected to the national grid.[58] Current government strategies for electrification include investments in off-grid solar and connection to the grid.[57][58]
120
+
121
+ Most of the energy production is from fossil fuels, mostly diesel and gas (733 of 864 MW).[59] An increasing share of production comes from sustainable sources, such as Manantali Dam in Mali and a new wind farm in Thiès opened in 2020; however, this is still a small portion of total production. Despite increases in production in the 2010s, the economy is frequently hindered by energy supply falling short of demand.
122
+
123
+
124
+
125
+ Senegal has a population of around 15.9 million[2][3], about 42 percent of whom live in rural areas. Density in these areas varies from about 77 inhabitants per square kilometre (200/sq mi) in the west-central region to 2 per square kilometre (5.2/sq mi) in the arid eastern section.
126
+
127
+ Senegal has a wide variety of ethnic groups and, as in most West African countries, several languages are widely spoken. The Wolof are the largest single ethnic group in Senegal at 43%; the Fula[61] and Toucouleur (also known as Halpulaar'en, literally "Pulaar-speakers") (24%) are the second biggest group, followed by the Serer (14.7%),[62] then others such as Jola (4%), Mandinka (3%), Maures or (Naarkajors), Soninke, Bassari and many smaller communities (9%). (See also the Bedick ethnic group.)
128
+
129
+ About 50,000 Europeans (mostly French) and Lebanese[63] as well as smaller numbers of Mauritanians and Moroccans[citation needed] reside in Senegal, mainly in the cities, along with some retirees who live in the resort towns around Mbour. The majority of Lebanese work in commerce.[64] Most of the Lebanese originate from the Lebanese city of Tyre, which is known as "Little West Africa" and has a main promenade called "Avenue du Senegal".[65]
130
+
131
+ The country experienced a wave of immigration from France in the decades between World War II and Senegalese independence; most of these French people purchased homes in Dakar or other major urban centers.[66] Also located primarily in urban settings are small Vietnamese communities as well as a growing number of Chinese immigrant traders, each numbering perhaps a few hundred people.[67][68] There are also tens of thousands of Mauritanian refugees in Senegal, primarily in the country's north.[69]
132
+
133
+ According to the World Refugee Survey 2008, published by the U.S. Committee for Refugees and Immigrants, Senegal has a population of refugees and asylum seekers numbering approximately 23,800 in 2007. The majority of this population (20,200) is from Mauritania. Refugees live in N'dioum, Dodel, and small settlements along the Senegal River valley.[70]
134
+
135
+ French is the official language, spoken at least by all those who spent several years in the educational system, which is of French origin (Koranic schools are even more popular, but Arabic is not widely spoken outside of the context of recitation). In the 15th century, several European powers began to engage in trade in Senegal. By the 19th century, France had strengthened its roots in Senegal, and the number of French speakers multiplied continuously. French was ratified as the official language of Senegal in 1960, when the country achieved independence.
136
+
137
+ Around 15 to 20% of men and 2% of women speak and understand French.[citation needed] In addition, 21% of the population is partially fluent in the French language.[citation needed]
138
+
139
+ Most people also speak their own ethnic language while, especially in Dakar, Wolof is the lingua franca.[71] Pulaar is spoken by the Fulas and Toucouleur. The Serer language is widely spoken by both Serers and non-Serers (including President Sall, whose wife is Serer); so are the Cangin languages, whose speakers are ethnically Serers. Jola languages are widely spoken in the Casamance. Overall Senegal is home to around 39 distinct languages. Several have the legal status of "national languages": Balanta-Ganja, Hassaniya Arabic, Jola-Fonyi, Mandinka, Mandjak, Mankanya, Noon (Serer-Noon), Pulaar, Serer, Soninke, and Wolof.
140
+
141
+ English is taught as a foreign language in secondary schools and many graduate school programs, and it is the only subject matter that has a special office in the Ministry of Education.[72] Dakar hosts a couple of bilingual schools which offer 50% of their syllabus in English. The Senegalese American Bilingual School (SABS), Yavuz Selim, and The West African College of the Atlantic (WACA) train thousands of fluent English speakers in four-year programs. English is widely used by the scientific community and in business, including by the Modou-Modou (illiterate, self-taught businessmen).[72]
142
+
143
+ Portuguese Creole, locally known as Portuguese, is a prominent minority language in Ziguinchor, regional capital of the Casamance, spoken by local Portuguese creoles and immigrants from Guinea-Bissau. The local Cape Verdean community speak a similar Portuguese creole, Cape Verdean Creole, and standard Portuguese. Portuguese was introduced in Senegal's secondary education in 1961 in Dakar by the country's first president, Léopold Sédar Senghor. It is currently available in most of Senegal and in higher education. It is especially prevalent in Casamance as it relates to the local cultural identity.[73]
144
+
145
+ A variety of immigrant languages are spoken, such as Bambara (70,000), Kabuverdiano (34,000), Krio (6,100), Mooré (937,000), Portuguese (1,700) and Vietnamese (2,500), mostly in Dakar.[72]
146
+
147
+ While French is the sole official language, a rising Senegalese linguistic nationalist movement supports the integration of Wolof, the common vernacular language of the country, into the national constitution.[74]
148
+
149
+ Senegalese regions of Dakar, Diourbel, Fatick, Kaffrine, Kaolack, Kedougou, Kolda, Louga, Matam, Saint-Louis, Sedhiou, Tambacounda, Thies and Ziguinchor are members of the International Association of Francophone regions.
150
+
151
+ Dakar, the capital, is by far the largest city in Senegal, with over two million residents.[75] The second most populous city is Touba, a de jure communaute rurale (rural community), with half a million residents.[75]
152
+
153
+
154
+
155
+ Religion in Senegal (2013)[77]
156
+
157
+ Senegal is a secular state,[41] although Islam is the predominant religion in the country, practiced by approximately 95.9% of the country's population; the Christian community, at 4.1% of the population, is mostly Catholic, but there are also diverse Protestant denominations. One percent has animist beliefs, particularly in the southeastern region of the country.[1] Some Serer people follow the Serer religion.[78][79]
158
+
159
+ According to Pew, 55% of the Muslims in Senegal are Sunni of the Maliki madhhab with Sufi influences, whilst 27% are non-denominational Muslim.[80] Islamic communities in Senegal are generally organized around one of several Islamic Sufi orders or brotherhoods, headed by a khalif (xaliifa in Wolof, from Arabic khalīfa), who is usually a direct descendant of the group's founder. The two largest and most prominent Sufi orders in Senegal are the Tijaniyya, whose largest sub-groups are based in the cities of Tivaouane and Kaolack, and the Murīdiyya (Murid), based in the city of Touba.
160
+
161
+ The Halpulaar (Pulaar-speakers), composed of Fula people, a widespread group found along the Sahel from Chad to Senegal, and Toucouleurs, represent 23.8 percent of the population.[1] Historically, they were the first to become Muslim. Many of the Toucouleurs, or sedentary Halpulaar of the Senegal River Valley in the north, converted to Islam around a millennium ago and later contributed to Islam's propagation throughout Senegal. Islam gained converts among the Wolof but was resisted by the Serer.
162
+
163
+ Most communities south of the Senegal River Valley, however, were not thoroughly Islamized. The Serer people stood out as one such group, having spent over one thousand years resisting Islamization (see Serer history). Although many Serers are now Christians or Muslims, their conversion to Islam is very recent, and they converted of their own free will rather than by force, although force had been tried unsuccessfully centuries earlier (see the Battle of Fandane-Thiouthioune).[81]
164
+
165
+ The spread of formal Quranic school (called daara in Wolof) during the colonial period increased largely through the effort of the Tidjâniyya. In Murid communities, which place more emphasis on the work ethic than on literary Quranic studies, the term daara often applies to work groups devoted to working for a religious leader. Other Islamic groups include the much older Qādiriyya order and the Senegalese Laayeen order, which is prominent among the coastal Lebu. Today, most Senegalese children study at daaras for several years, memorizing as much of the Qur'an as they can. Some of them continue their religious studies at councils (majlis) or at the growing number of private Arabic schools and publicly funded Franco-Arabic schools.
166
+
167
+ Small Catholic communities are mainly found in coastal Serer, Jola, Mankanya and Balant populations, and in eastern Senegal among the Bassari and Coniagui. The Protestant churches are mainly attended by immigrants but during the second half of the 20th century Protestant churches led by Senegalese leaders from different ethnic groups have evolved. In Dakar Catholic and Protestant rites are practiced by the Lebanese, Cape Verdean, European, and American immigrant populations, and among certain Africans of other countries as well as by the Senegalese themselves. Although Islam is Senegal's majority religion, Senegal's first president, Léopold Sédar Senghor, was a Catholic Serer.
168
+
169
+ Serer religion encompasses a belief in a supreme deity called Roog (Koox among the Cangin), Serer cosmogony, cosmology and divination ceremonies such as the annual Xooy (or Khoy) ceremony presided over by the Serer Saltigues (high priests and priestesses). Senegambian (both Senegal and the Gambia) Muslim festivals such as Tobaski, Gamo, Koriteh, Weri Kor, etc., are all borrowed words from the Serer religion.[82] They were ancient Serer festivals rooted in Serer religion, not Islam.[82]
170
+
171
+ The Boukout is one of the Jola's religious ceremonies.
172
+
173
+ There are a small number of members of the Bani Israel tribe in the Senegalese bush that claim Jewish ancestry, though this is disputed.[83] The Mahayana branch of Buddhism in Senegal is followed by a very tiny portion of the ex-pat Vietnamese community. The Bahá'í Faith in Senegal was established after 'Abdu'l-Bahá, the son of the founder of the religion, mentioned Africa as a place that should be more broadly visited by Bahá'ís.[84] The first Bahá'ís to set foot in the territory of French West Africa that would become Senegal arrived in 1953.[85] The first Bahá'í Local Spiritual Assembly of Senegal was elected in 1966 in Dakar.[86] In 1975 the Bahá'í community elected the first National Spiritual Assembly of Senegal. The most recent estimate, from a 2005 report by the Association of Religion Data Archives, puts the population of Senegalese Bahá'ís at 22,000.[87]
174
+
175
+ Life expectancy at birth was estimated to be 66.8 years in 2016 (64.7 years male, 68.7 years female).[88] Public expenditure on health was at 2.4 percent of the GDP in 2004, whereas private expenditure was at 3.5 percent.[89] Health expenditure was at US$72 (PPP) per capita in 2004.[89] The fertility rate ranged from 5 to 5.3 between 2005 and 2013, with 4.1 in urban areas and 6.3 in rural areas, as official surveys (6.4 in 1986 and 5.7 in 1997) indicate.[90] There were six physicians per 100,000 persons in the early 2000s (decade).[89] Infant mortality in Senegal was 157 per 1,000 live births in 1950, but since then it has declined five-fold to 32 per 1,000 in 2018.[91] In the past five years, infant mortality due to malaria has dropped. According to a 2013 UNICEF report,[92] 26% of women in Senegal have undergone female genital mutilation.
176
+
177
+ Articles 21 and 22 of the Constitution adopted in January 2001 guarantee access to education for all children.[93] Education is compulsory and free up to the age of 16.[93] The Ministry of Labor has indicated that the public school system is unable to cope with the number of children that must enroll each year.[93]
178
+
179
+ Illiteracy is high, particularly among women.[89] The net primary enrollment rate was 69 percent in 2005. Public expenditure on education was 5.4 percent of the 2002–2005 GDP.
180
+
181
+ Senegal is well known for the West African tradition of storytelling, which is done by griots, who have kept West African history alive for thousands of years through words and music. The griot profession is passed down generation to generation and requires years of training and apprenticeship in genealogy, history and music. Griots give voice to generations of West African society.[21]
182
+
183
+ The African Renaissance Monument built in 2010 in Dakar is the tallest statue in Africa. Dakar also hosts a film festival, Recidak.[94]
184
+
185
+ Because Senegal borders the Atlantic Ocean, fish is very important. Chicken, lamb, peas, eggs, and beef are also used in Senegalese cooking, but not pork, due to the nation's largely Muslim population. Peanuts, the primary crop of Senegal, as well as couscous, white rice, sweet potatoes, lentils, black-eyed peas and various vegetables, are also incorporated into many recipes. Meats and vegetables are typically stewed or marinated in herbs and spices, and then poured over rice or couscous, or eaten with bread.
186
+
187
+ Popular fresh juices are made from bissap, ginger, buy (pronounced 'buoy'; the fruit of the baobab tree, also known as "monkey bread fruit"), mango, or other fruit or wild trees (most famously soursop, which is called corossol in French). Desserts are very rich and sweet, combining native ingredients with the extravagance and style characteristic of the French impact on Senegal's culinary methods. They are often served with fresh fruit and are traditionally followed by coffee or tea.
188
+
189
+ Senegal is known across Africa for its musical heritage, due to the popularity of mbalax, which originated from the Serer percussive tradition, especially the Njuup; it has been popularized by Youssou N'Dour, Omar Pene and others. Sabar drumming is especially popular. The sabar is mostly used in special celebrations like weddings. Another instrument, the tama, is used by several ethnic groups. Other internationally renowned Senegalese musicians are Ismael Lô, Cheikh Lô, Orchestra Baobab, Baaba Maal, Akon, Thione Seck, Viviane, Fallou Dieng, Titi and Pape Diouf.
190
+
191
+ Hospitality, in theory, is given such importance in Senegalese culture that it is widely considered to be part of the national identity. The Wolof[95] word for hospitality is "teranga" and it is so identified with the pride of Senegal that the national football team is known as the Lions of Teranga.[21][original research?]
192
+
193
+ Senegalese play many sports. Wrestling and football are the most popular sports in the country. Senegal will host the 2022 Summer Youth Olympics in Dakar, making it the first African country to host an Olympic event.[96][97]
194
+
195
+ Wrestling is Senegal's most popular sport[98] and has become a national obsession.[99] It traditionally offers many young men a way to escape poverty, and it is the only sport recognized as having developed independently of Western culture.
196
+
197
+ Football is the most popular sport in Senegal. In 2002 and 2019, the national team were runners-up at the Africa Cup of Nations and became one of only three African teams to ever reach the quarter-finals of the FIFA World Cup, defeating holders France in their first game. Popular players for Senegal include El Hadji Diouf, Khalilou Fadiga, Henri Camara, Papa Bouba Diop, Salif Diao, Kalidou Koulibaly, Ferdinand Coly, and Sadio Mané, all of whom have played in Europe.
198
+ Senegal qualified for the 2018 FIFA World Cup in Russia, in Group H alongside Japan, Colombia, and Poland.
199
+
200
+ Basketball is also a popular sport in Senegal. The country has traditionally been one of Africa's dominant basketball powers. The men's team performed better than that of any other African nation at the 2014 FIBA World Cup, where they reached the playoffs for the first time. The women's team won 19 medals at 20 African Championships, more than twice as many medals as any competitor.
201
+
202
+ In 2016, the NBA announced the launch of an elite academy in Africa, more precisely in Senegal.[100]
203
+
204
+ The country hosted the Paris–Dakar rally from 1979 until 2007. The Dakar Rally was an off-road endurance motorsport race which followed a course from Paris, France, to Dakar, Senegal. The competitors used off-road vehicles to cross the difficult terrain. The last race was held in 2007, before the 2008 rally was canceled a day before the event due to security concerns in Mauritania.[101]
en/5346.html.txt ADDED
@@ -0,0 +1,195 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Sensation is the physical process during which sensory systems respond to stimuli and provide data for perception.[1] A sense is any of the systems involved in sensation. During sensation, sense organs engage in stimulus collection and transduction.[2] Sensation is often differentiated from the related and dependent concept of perception, which processes and integrates sensory information in order to give meaning to and understand detected stimuli, giving rise to subjective perceptual experience, or qualia.[3] Sensation and perception are central to and precede almost all aspects of cognition, behavior and thought.[1]
2
+
3
+ In organisms, a sensory organ consists of a group of related sensory cells that respond to a specific type of physical stimulus. Via cranial and spinal nerves, the different types of sensory receptor cells (mechanoreceptors, photoreceptors, chemoreceptors, thermoreceptors) in sensory organs transduce sensory information towards the central nervous system, to the sensory cortices in the brain, where sensory signals are further processed and interpreted (perceived).[1][4][5] Sensory systems, or senses, are often divided into external (exteroception) and internal (interoception) sensory systems.[6][7] Sensory modalities or submodalities refer to the way sensory information is encoded or transduced.[4] Multimodality integrates different senses into one unified perceptual experience. For example, information from one sense has the potential to influence how information from another is perceived.[2] Sensation and perception are studied by a variety of related fields, most notably psychophysics, neurobiology, cognitive psychology, and cognitive science.[1]
4
+
5
+ Humans have a multitude of sensory systems. Human external sensation is based on the sensory organs of the eyes, ears, skin, inner ear, nose, and mouth. The corresponding sensory systems of the visual system (sense of vision), auditory system (sense of hearing), somatosensory system (sense of touch), vestibular system (sense of balance), olfactory system (sense of smell), and gustatory system (sense of taste) contribute, respectively, to the perceptions of vision, hearing, touch, spatial orientation, smell, and taste (flavor).[2][1] Internal sensation, or interoception, detects stimuli from internal organs and tissues. Many internal sensory and perceptual systems exist in humans, including proprioception (body position) and nociception (pain). Further internal chemoreception and osmoreception based sensory systems lead to various perceptions, such as hunger, thirst, suffocation, and nausea, or different involuntary behaviors, such as vomiting.[6][7][8]
6
+
7
+ Nonhuman animals experience sensation and perception, with varying levels of similarity to and difference from humans and other animal species. For example, mammals, in general, have a stronger sense of smell than humans. Some animal species lack one or more human sensory system analogues, some have sensory systems that are not found in humans, while others process and interpret the same sensory information in very different ways. For example, some animals are able to detect electrical[9] and magnetic fields,[10] air moisture,[11] or polarized light,[12] while others sense and perceive through alternative systems, such as echolocation.[13][14] Recently, it has been suggested that plants and artificial agents may be able to detect and interpret environmental information in an analogous manner to animals.[15][16][17]
8
+
9
+ Sensory modality refers to the way that information is encoded, which is similar to the idea of transduction. The main sensory modalities can be described on the basis of how each is transduced. Listing all the different sensory modalities, which can number as many as 17, involves separating the major senses into more specific categories, or submodalities, of the larger sense. An individual sensory modality represents the sensation of a specific type of stimulus. For example, the general sensation and perception of touch, which is known as somatosensation, can be separated into light pressure, deep pressure, vibration, itch, pain, temperature, or hair movement, while the general sensation and perception of taste can be separated into submodalities of sweet, salty, sour, bitter, spicy, and umami, all of which are based on different chemicals binding to sensory neurons.[4]
10
+
11
+ Sensory receptors are the cells or structures that detect sensations. Stimuli in the environment activate specialized receptor cells in the peripheral nervous system. During transduction, physical stimulus is converted into action potential by receptors and transmitted towards the central nervous system for processing.[5] Different types of stimuli are sensed by different types of receptor cells. Receptor cells can be classified into types on the basis of three different criteria: cell type, position, and function. Receptors can be classified structurally on the basis of cell type and their position in relation to stimuli they sense. Receptors can further be classified functionally on the basis of the transduction of stimuli, or how the mechanical stimulus, light, or chemical changed the cell membrane potential.[4]
12
+
13
+ One way to classify receptors is based on their location relative to the stimuli. An exteroceptor is a receptor that is located near a stimulus of the external environment, such as the somatosensory receptors that are located in the skin. An interoceptor is one that interprets stimuli from internal organs and tissues, such as the receptors that sense the increase in blood pressure in the aorta or carotid sinus.[4]
14
+
15
+ The cells that interpret information about the environment can be either (1) a neuron that has a free nerve ending, with dendrites embedded in tissue that would receive a sensation; (2) a neuron that has an encapsulated ending in which the sensory nerve endings are encapsulated in connective tissue that enhances their sensitivity; or (3) a specialized receptor cell, which has distinct structural components that interpret a specific type of stimulus. The pain and temperature receptors in the dermis of the skin are examples of neurons that have free nerve endings (1). Also located in the dermis of the skin are lamellated corpuscles, neurons with encapsulated nerve endings that respond to pressure and touch (2). The cells in the retina that respond to light stimuli are an example of a specialized receptor (3), a photoreceptor.[4]
16
+
17
+ A transmembrane protein receptor is a protein in the cell membrane that mediates a physiological change in a neuron, most often through the opening of ion channels or changes in the cell signaling processes. Transmembrane receptors are activated by chemicals called ligands. For example, a molecule in food can serve as a ligand for taste receptors. Other transmembrane proteins, which are not accurately called receptors, are sensitive to mechanical or thermal changes. Physical changes in these proteins increase ion flow across the membrane, and can generate an action potential or a graded potential in the sensory neurons.[4]
18
+
19
+ A third classification of receptors is by how the receptor transduces stimuli into membrane potential changes. Stimuli are of three general types. Some stimuli are ions and macromolecules that affect transmembrane receptor proteins when these chemicals diffuse across the cell membrane. Some stimuli are physical variations in the environment that affect receptor cell membrane potentials. Other stimuli include the electromagnetic radiation from visible light. For humans, the only electromagnetic energy that is perceived by our eyes is visible light. Some other organisms have receptors that humans lack, such as the heat sensors of snakes, the ultraviolet light sensors of bees, or magnetic receptors in migratory birds.[4]
20
+
21
+ Receptor cells can be further categorized on the basis of the type of stimuli they transduce. The different types of functional receptor cell types are mechanoreceptors, photoreceptors, chemoreceptors (osmoreceptor), thermoreceptors, and nociceptors. Physical stimuli, such as pressure and vibration, as well as the sensation of sound and body position (balance), are interpreted through a mechanoreceptor. Photoreceptors convert light (visible electromagnetic radiation) into signals. Chemical stimuli can be interpreted by a chemoreceptor that interprets chemical stimuli, such as an object's taste or smell, while osmoreceptors respond to the solute concentrations of body fluids. Nociception (pain) interprets the presence of tissue damage, from sensory information from mechano-, chemo-, and thermoreceptors.[18] Another physical stimulus that has its own type of receptor is temperature, which is sensed through a thermoreceptor that is either sensitive to temperatures above (heat) or below (cold) normal body temperature.[4]
22
+
23
+ Each sense organ (eyes or nose, for instance) requires a minimal amount of stimulation in order to detect a stimulus. This minimum amount of stimulus is called the absolute threshold.[2] The absolute threshold is defined as the minimum amount of stimulation necessary for the detection of a stimulus 50% of the time.[1] Absolute threshold is measured by using a method called signal detection. This process involves presenting stimuli of varying intensities to a subject in order to determine the level at which the subject can reliably detect stimulation in a given sense.[2]
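+
+ As a hedged illustration (a standard psychophysics formulation, not taken from this source), the detection data gathered in such an experiment are often summarized by a sigmoid psychometric function, with the absolute threshold read off as the 50% point:
+
+ P(\text{detect} \mid I) = \frac{1}{1 + e^{-(I - \theta)/\sigma}}
+
+ Here I is the stimulus intensity, \theta is the absolute threshold (the intensity detected 50% of the time), and \sigma controls how steeply detection improves as intensity rises.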
24
+
25
+ Differential threshold or just noticeable difference (JND) is the smallest detectable difference between two stimuli, or the smallest difference in stimuli that can be judged to be different from each other.[1] Weber's Law is an empirical law that states that the difference threshold is a constant fraction of the comparison stimulus.[1] According to Weber's Law, bigger stimuli require larger differences to be noticed.[2]
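+
+ In symbols (a standard formulation, not quoted from this source), Weber's Law can be written as
+
+ \frac{\Delta I}{I} = k
+
+ where I is the intensity of the comparison stimulus, \Delta I is the just noticeable difference, and k is the Weber fraction for the modality. For example, if k were 0.1, a 100 g weight would need roughly 10 g added before the change is noticed, while a 1,000 g weight would need roughly 100 g.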
26
+
27
+ Magnitude estimation is a psychophysical method in which subjects assign perceived values of given stimuli. The relationship between stimulus intensity and perceptive intensity is described by Stevens's power law.[1]
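+
+ The power law (standard notation, not quoted from this source) takes the form
+
+ \psi(I) = k I^{a}
+
+ where \psi(I) is the perceived magnitude, I is the physical intensity, k is a scaling constant, and the exponent a depends on the modality: it is below 1 for stimuli such as brightness (perceived magnitude grows more slowly than intensity) and above 1 for stimuli such as electric shock (perceived magnitude grows faster than intensity).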
28
+
29
+ Signal detection theory quantifies the experience of the subject to the presentation of a stimulus in the presence of noise. There is internal noise and there is external noise when it comes to signal detection. The internal noise originates from static in the nervous system. For example, an individual with closed eyes in a dark room still sees something (a blotchy pattern of grey with intermittent brighter flashes); this is internal noise. External noise is the result of noise in the environment that can interfere with the detection of the stimulus of interest. Noise is only a problem if the magnitude of the noise is large enough to interfere with signal collection. The nervous system calculates a criterion, or an internal threshold, for the detection of a signal in the presence of noise. If a signal is judged to be above the criterion, and thus differentiated from the noise, the signal is sensed and perceived. Errors in signal detection can potentially lead to false positives and false negatives. The sensory criterion might be shifted based on the importance of detecting the signal. Shifting of the criterion may influence the likelihood of false positives and false negatives.[1]
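+
+ In the standard formulation of signal detection theory (textbook notation, not quoted from this source), sensitivity and criterion are computed from the hit rate H and the false-alarm rate F:
+
+ d' = z(H) - z(F), \qquad c = -\tfrac{1}{2}\,[\,z(H) + z(F)\,]
+
+ where z is the inverse of the standard normal cumulative distribution function. A larger d' means the signal distribution is better separated from the noise distribution, while shifting the criterion c trades false positives against false negatives, as described above.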
30
+
31
+ Subjective visual and auditory experiences appear to be similar across humans subjects. The same cannot be said about taste. For example, there is a molecule called propylthiouracil (PROP) that some humans experience as bitter, some as almost tasteless, while others experience it as somewhere between tasteless and bitter. There is a genetic basis for this difference between perception given the same sensory stimulus. This subjective difference in taste perception has implications for individuals' food preferences, and consequently, health.[1]
32
+
33
+ When a stimulus is constant and unchanging, perceptual sensory adaptation occurs. During this process, the subject becomes less sensitive to the stimulus.[2]
34
+
35
+ Biological auditory (hearing), vestibular and spatial, and visual systems (vision) appear to break down real-world complex stimuli into sine wave components, through the mathematical process called Fourier analysis. Many neurons have a strong preference for certain sine frequency components in contrast to others. The way that simpler sounds and images are encoded during sensation can provide insight into how perception of real-world objects happens.[1]
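+
+ As a brief mathematical aside (standard Fourier analysis, not quoted from this source), a periodic stimulus with period T can be decomposed into sine-wave components:
+
+ f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left[ a_n \cos\left( \frac{2\pi n t}{T} \right) + b_n \sin\left( \frac{2\pi n t}{T} \right) \right]
+
+ A frequency-tuned neuron can then be pictured as responding preferentially to particular terms in this sum.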
36
+
37
+ Perception occurs when nerves that lead from the sensory organs (e.g. the eye) to the brain are stimulated, even if that stimulation is unrelated to the target signal of the sensory organ. For example, in the case of the eye, it does not matter whether light or something else stimulates the optic nerve; that stimulation will result in visual perception, even if there was no visual stimulus to begin with. (To prove this point to yourself (and if you are a human), close your eyes (preferably in a dark room) and press gently on the outside corner of one eye through the eyelid. You will see a visual spot toward the inside of your visual field, near your nose.)[1]
38
+
39
+ All stimuli received by the receptors are transduced to an action potential, which is carried along one or more afferent neurons towards a specific area (cortex) of the brain. Just as different nerves are dedicated to sensory and motor tasks, different areas of the brain (cortices) are similarly dedicated to different sensory and perceptual tasks. More complex processing is accomplished across primary cortical regions that spread beyond the primary cortices. Every nerve, sensory or motor, has its own signal transmission speed. For example, nerves in the frog's legs have a 90 ft/s (99 km/h) signal transmission speed, while sensory nerves in humans transmit sensory information at speeds between 165 ft/s (181 km/h) and 330 ft/s (362 km/h).[1]
40
+
41
+ Perceptual experience is often multimodal. Multimodality integrates different senses into one unified perceptual experience. Information from one sense has the potential to influence how information from another is perceived.[2] Multimodal perception is qualitatively different from unimodal perception. There has been a growing body of evidence since the mid-1990s on the neural correlates of multimodal perception.[20]
42
+
43
+ Historical inquiries into the underlying mechanisms of sensation and perception led early researchers to subscribe to various philosophical interpretations of perception and the mind, including panpsychism, dualism, and materialism. The majority of modern scientists who study sensation and perception take on a materialistic view of the mind.[1]
44
+
45
+ [Table: some examples of human absolute thresholds for the 9 to 21 external senses.[21]]
46
+
47
+ Humans respond more strongly to multimodal stimuli compared to the sum of each single modality together, an effect called the superadditive effect of multisensory integration.[2] Neurons that respond to both visual and auditory stimuli have been identified in the superior temporal sulcus.[20] Additionally, multimodal "what" and "where" pathways have been proposed for auditory and tactile stimuli.[22]
48
+
49
+ External receptors that respond to stimuli from outside the body are called exteroceptors.[23] Human external sensation is based on the sensory organs of the eyes, ears, skin, vestibular system, nose, and mouth, which contribute, respectively, to the sensory perceptions of vision, hearing, touch, spatial orientation, smell, and taste. Smell and taste are both responsible for identifying molecules and thus both are types of chemoreceptors. Both olfaction (smell) and gustation (taste) require the transduction of chemical stimuli into electrical potentials.[2][1]
50
+
51
+ The visual system, or sense of sight, is based on the transduction of light stimuli received through the eyes and contributes to visual perception. The visual system detects light on photoreceptors in the retina of each eye that generates electrical nerve impulses for the perception of varying colors and brightness. There are two types of photoreceptors: rods and cones. Rods are very sensitive to light but do not distinguish colors. Cones distinguish colors but are less sensitive to dim light.[4]
52
+
53
+ At the molecular level, visual stimuli cause changes in the photopigment molecule that lead to changes in membrane potential of the photoreceptor cell. A single unit of light is called a photon, which is described in physics as a packet of energy with properties of both a particle and a wave. The energy of a photon is represented by its wavelength, with each wavelength of visible light corresponding to a particular color. Visible light is electromagnetic radiation with a wavelength between 380 and 720 nm. Wavelengths of electromagnetic radiation longer than 720 nm fall into the infrared range, whereas wavelengths shorter than 380 nm fall into the ultraviolet range. Light with a wavelength of 380 nm is blue whereas light with a wavelength of 720 nm is dark red. All other colors fall between red and blue at various points along the wavelength scale.[4]
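+
+ The relationship between a photon's energy and its wavelength mentioned here can be stated explicitly (standard physics, not quoted from this source):
+
+ E = \frac{hc}{\lambda}
+
+ where E is the photon energy, h is Planck's constant, c is the speed of light, and \lambda is the wavelength; light near 380 nm therefore carries more energy per photon than light near 720 nm.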
54
+
55
+ The three types of cone opsins, being sensitive to different wavelengths of light, provide us with color vision. By comparing the activity of the three different cones, the brain can extract color information from visual stimuli. For example, a bright blue light that has a wavelength of approximately 450 nm would activate the “red” cones minimally, the “green” cones marginally, and the “blue” cones predominantly. The relative activation of the three different cones is calculated by the brain, which perceives the color as blue. However, cones cannot react to low-intensity light, and rods do not sense the color of light. Therefore, our low-light vision is—in essence—in grayscale. In other words, in a dark room, everything appears as a shade of gray. If you think that you can see colors in the dark, it is most likely because your brain knows what color something is and is relying on that memory.[4]
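+
+ As a hedged numerical illustration of the comparison described above (the activation values are invented for the example, not measurements): a 450 nm stimulus might yield normalized cone responses of roughly (S, M, L) = (0.9, 0.2, 0.05) for the short-, medium-, and long-wavelength cones; because the short-wavelength response dominates the pattern, the brain perceives the stimulus as blue.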
56
+
57
+ There is some disagreement as to whether the visual system consists of one, two, or three submodalities. Neuroanatomists generally regard it as two submodalities, given that different receptors are responsible for the perception of color and brightness. Some argue[citation needed] that stereopsis, the perception of depth using both eyes, also constitutes a sense, but it is generally regarded as a cognitive (that is, post-sensory) function of the visual cortex of the brain where patterns and objects in images are recognized and interpreted based on previously learned information. This is called visual memory.
58
+
59
+ The inability to see is called blindness. Blindness may result from damage to the eyeball, especially to the retina, damage to the optic nerve that connects each eye to the brain, and/or from stroke (infarcts in the brain). Temporary or permanent blindness can be caused by poisons or medications. People who are blind from degradation or damage to the visual cortex, but still have functional eyes, are actually capable of some level of vision and reaction to visual stimuli but not a conscious perception; this is known as blindsight. People with blindsight are usually not aware that they are reacting to visual sources, and instead just unconsciously adapt their behavior to the stimulus.
60
+
61
+ On February 14, 2013, researchers developed a neural implant that gives rats the ability to sense infrared light, which for the first time provides living creatures with new abilities, instead of simply replacing or augmenting existing abilities.[24]
62
+
63
+ Visual Perception in Psychology
64
+
65
+ According to Gestalt psychology, people perceive the whole of something even if it is not there. The Gestalt laws of organization state that people use seven factors to group what is seen into patterns or groups: Common Fate, Similarity, Proximity, Closure, Symmetry, Continuity, and Past Experience.[25]
66
+
67
+ The Law of Common Fate says that objects are grouped when they move along the smoothest path. People follow the trend of motion as the lines/dots flow.[26]
68
+
69
+ The Law of Similarity refers to the grouping of images or objects that are similar to each other in some aspect. This could be due to shade, colour, size, shape, or other qualities you could distinguish.[27]
70
+
71
+ The Law of Proximity states that our minds like to group based on how close objects are to each other. We may see 42 objects in a group, but we can also perceive three groups of two lines with seven objects in each line.[26]
72
+
73
+ The Law of Closure is the idea that we as humans still see a full picture even if there are gaps within that picture. There could be gaps or parts missing from a section of a shape, but we would still perceive the shape as whole.[27]
74
+
75
+ The Law of Symmetry refers to a person's preference to see symmetry around a central point. An example would be when we use parentheses in writing. We tend to perceive all of the words in the parentheses as one section instead of individual words within the parentheses.[27]
76
+
77
+ The Law of Continuity tells us that objects are grouped together by their elements and then perceived as a whole. This usually happens when we see overlapping objects. We will see the overlapping objects with no interruptions.[27]
78
+
79
+ The Law of Past Experience refers to the tendency humans have to categorize objects according to past experiences under certain circumstances. If two objects are usually perceived together or within close proximity of each other, the Law of Past Experience is usually seen.[26]
80
+
81
+ Hearing, or audition, is the transduction of sound waves into a neural signal that is made possible by the structures of the ear. The large, fleshy structure on the lateral aspect of the head is known as the auricle. At the end of the auditory canal is the tympanic membrane, or ear drum, which vibrates after it is struck by sound waves. The auricle, ear canal, and tympanic membrane are often referred to as the external ear. The middle ear consists of a space spanned by three small bones called the ossicles. The three ossicles are the malleus, incus, and stapes, which are Latin names that roughly translate to hammer, anvil, and stirrup. The malleus is attached to the tympanic membrane and articulates with the incus. The incus, in turn, articulates with the stapes. The stapes is then attached to the inner ear, where the sound waves will be transduced into a neural signal. The middle ear is connected to the pharynx through the Eustachian tube, which helps equilibrate air pressure across the tympanic membrane. The tube is normally closed but will pop open when the muscles of the pharynx contract during swallowing or yawning.[4]
82
+
83
+ Mechanoreceptors, located in the inner ear, turn motion into electrical nerve pulses. Since sound is vibration propagating through a medium such as air, the detection of these vibrations, that is, the sense of hearing, is a mechanical sense, because these vibrations are mechanically conducted from the eardrum through a series of tiny bones to hair-like fibers in the inner ear, which detect mechanical motion of the fibers within a range of about 20 to 20,000 hertz,[28] with substantial variation between individuals. Hearing at high frequencies declines with an increase in age. Inability to hear is called deafness or hearing impairment. Sound can also be detected as vibrations conducted through the body by tactition. Lower frequencies that can be heard are detected this way. Some deaf people are able to determine the direction and location of vibrations picked up through the feet.[29]
84
+
85
+ Studies pertaining to audition started to increase in number towards the latter end of the nineteenth century. During this time, many laboratories in the United States began to create new models, diagrams, and instruments that all pertained to the ear.[30]
86
+
87
+ There is a branch of cognitive psychology dedicated strictly to audition, called auditory cognitive psychology. Its main aim is to understand why humans are able to use sound in thinking without actually saying it aloud.[31]
88
+
89
+ Related to auditory cognitive psychology is psychoacoustics, which is oriented more toward those interested in music.[32] Haptics, a word used to refer to both taction and kinesthesia, has many parallels with psychoacoustics.[32] Most research around these two is focused on the instrument, the listener, and the player of the instrument.[32]
90
+
91
+ Somatosensation is considered a general sense, as opposed to the special senses discussed in this section. Somatosensation is the group of sensory modalities that are associated with touch and interoception. The modalities of somatosensation include pressure, vibration, light touch, tickle, itch, temperature, pain, kinesthesia.[4] Somatosensation, also called tactition (adjectival form: tactile) is a perception resulting from activation of neural receptors, generally in the skin including hair follicles, but also in the tongue, throat, and mucosa. A variety of pressure receptors respond to variations in pressure (firm, brushing, sustained, etc.). The touch sense of itching caused by insect bites or allergies involves special itch-specific neurons in the skin and spinal cord.[33] The loss or impairment of the ability to feel anything touched is called tactile anesthesia. Paresthesia is a sensation of tingling, pricking, or numbness of the skin that may result from nerve damage and may be permanent or temporary.
92
+
93
+ Two types of somatosensory signals that are transduced by free nerve endings are pain and temperature. These two modalities use thermoreceptors and nociceptors to transduce temperature and pain stimuli, respectively. Temperature receptors are stimulated when local temperatures differ from body temperature. Some thermoreceptors are sensitive to just cold and others to just heat. Nociception is the sensation of potentially damaging stimuli. Mechanical, chemical, or thermal stimuli beyond a set threshold will elicit painful sensations. Stressed or damaged tissues release chemicals that activate receptor proteins in the nociceptors. For example, the sensation of heat associated with spicy foods involves capsaicin, the active molecule in hot peppers.[4]
94
+
95
+ Low frequency vibrations are sensed by mechanoreceptors called Merkel cells, also known as type I cutaneous mechanoreceptors. Merkel cells are located in the stratum basale of the epidermis. Deep pressure and vibration is transduced by lamellated (Pacinian) corpuscles, which are receptors with encapsulated endings found deep in the dermis, or subcutaneous tissue. Light touch is transduced by the encapsulated endings known as tactile (Meissner) corpuscles. Follicles are also wrapped in a plexus of nerve endings known as the hair follicle plexus. These nerve endings detect the movement of hair at the surface of the skin, such as when an insect may be walking along the skin. Stretching of the skin is transduced by stretch receptors known as bulbous corpuscles. Bulbous corpuscles are also known as Ruffini corpuscles, or type II cutaneous mechanoreceptors.[4]
96
+
97
+ The heat receptors are sensitive to infrared radiation and can occur in specialized organs, for instance in pit vipers. The thermoceptors in the skin are quite different from the homeostatic thermoceptors in the brain (hypothalamus), which provide feedback on internal body temperature.
98
+
99
+ The vestibular sense, or sense of balance (equilibrium), is the sense that contributes to the perception of balance (equilibrium), spatial orientation, direction, or acceleration (equilibrioception). Along with audition, the inner ear is responsible for encoding information about equilibrium. A similar mechanoreceptor—a hair cell with stereocilia—senses head position, head movement, and whether our bodies are in motion. These cells are located within the vestibule of the inner ear. Head position is sensed by the utricle and saccule, whereas head movement is sensed by the semicircular canals. The neural signals generated in the vestibular ganglion are transmitted through the vestibulocochlear nerve to the brain stem and cerebellum.[4]
100
+
101
+ The semicircular canals are three ring-like extensions of the vestibule. One is oriented in the horizontal plane, whereas the other two are oriented in the vertical plane. The anterior and posterior vertical canals are oriented at approximately 45 degrees relative to the sagittal plane. The base of each semicircular canal, where it meets with the vestibule, connects to an enlarged region known as the ampulla. The ampulla contains the hair cells that respond to rotational movement, such as turning the head while saying “no.” The stereocilia of these hair cells extend into the cupula, a membrane that attaches to the top of the ampulla. As the head rotates in a plane parallel to the semicircular canal, the fluid lags, deflecting the cupula in the direction opposite to the head movement. The semicircular canals contain several ampullae, with some oriented horizontally and others oriented vertically. By comparing the relative movements of both the horizontal and vertical ampullae, the vestibular system can detect the direction of most head movements within three-dimensional (3D) space.[4]
102
+
103
+ The vestibular nerve conducts information from sensory receptors in three ampulla that sense motion of fluid in three semicircular canals caused by three-dimensional rotation of the head. The vestibular nerve also conducts information from the utricle and the saccule, which contain hair-like sensory receptors that bend under the weight of otoliths (which are small crystals of calcium carbonate) that provide the inertia needed to detect head rotation, linear acceleration, and the direction of gravitational force.
104
+
105
+ The gustatory system or the sense of taste is the sensory system that is partially responsible for the perception of taste (flavor).[34] A few recognized submodalities exist within taste: sweet, salty, sour, bitter, and umami. Very recent research has suggested that there may also be a sixth taste submodality for fats, or lipids.[4] The sense of taste is often confused with the perception of flavor, which is the result of the multimodal integration of gustatory (taste) and olfactory (smell) sensations.[35]
106
+
107
+ Within the structure of the lingual papillae are taste buds that contain specialized gustatory receptor cells for the transduction of taste stimuli. These receptor cells are sensitive to the chemicals contained within foods that are ingested, and they release neurotransmitters based on the amount of the chemical in the food. Neurotransmitters from the gustatory cells can activate sensory neurons in the facial, glossopharyngeal, and vagus cranial nerves.[4]
108
+
109
+ Salty and sour taste submodalities are triggered by the cations Na+ and H+, respectively. The other taste modalities result from food molecules binding to a G protein–coupled receptor. A G protein signal transduction system ultimately leads to depolarization of the gustatory cell. The sweet taste is the sensitivity of gustatory cells to the presence of glucose (or sugar substitutes) dissolved in the saliva. Bitter taste is similar to sweet in that food molecules bind to G protein–coupled receptors. The taste known as umami is often referred to as the savory taste. Like sweet and bitter, it is based on the activation of G protein–coupled receptors by a specific molecule.[4]
110
+
111
+ Once the gustatory cells are activated by the taste molecules, they release neurotransmitters onto the dendrites of sensory neurons. These neurons are part of the facial and glossopharyngeal cranial nerves, as well as a component within the vagus nerve dedicated to the gag reflex. The facial nerve connects to taste buds in the anterior third of the tongue. The glossopharyngeal nerve connects to taste buds in the posterior two thirds of the tongue. The vagus nerve connects to taste buds in the extreme posterior of the tongue, verging on the pharynx, which are more sensitive to noxious stimuli such as bitterness.[4]
112
+
113
+ Flavor depends on odor, texture, and temperature as well as on taste. Humans receive tastes through sensory organs called taste buds, or gustatory calyculi, concentrated on the upper surface of the tongue. Other tastes such as calcium[36][37] and free fatty acids[38] may also be basic tastes but have yet to receive widespread acceptance. The inability to taste is called ageusia.
114
+
115
+ There is a rare phenomenon associated with the gustatory sense, called lexical-gustatory synesthesia, in which people can "taste" words.[39] They report flavor sensations from words they read, hear, or even imagine, despite not actually eating anything. They report not only simple flavors, but textures, complex flavors, and temperatures as well.[40]
116
+
117
+ Like the sense of taste, the sense of smell, or the olfactory system, is also responsive to chemical stimuli.[4] Unlike taste, there are hundreds of olfactory receptors (388 according to one source), each binding to a particular molecular feature. Odor molecules possess a variety of features and, thus, excite specific receptors more or less strongly. This combination of excitatory signals from different receptors makes up what humans perceive as the molecule's smell.[41]
118
+
119
+ The olfactory receptor neurons are located in a small region within the superior nasal cavity. This region is referred to as the olfactory epithelium and contains bipolar sensory neurons. Each olfactory sensory neuron has dendrites that extend from the apical surface of the epithelium into the mucus lining the cavity. As airborne molecules are inhaled through the nose, they pass over the olfactory epithelial region and dissolve into the mucus. These odorant molecules bind to proteins that keep them dissolved in the mucus and help transport them to the olfactory dendrites. The odorant–protein complex binds to a receptor protein within the cell membrane of an olfactory dendrite. These receptors are G protein–coupled, and will produce a graded membrane potential in the olfactory neurons.[4]
120
+
121
+ In the brain, olfaction is processed by the olfactory cortex. Olfactory receptor neurons in the nose differ from most other neurons in that they die and regenerate on a regular basis. The inability to smell is called anosmia. Some neurons in the nose are specialized to detect pheromones.[42] Loss of the sense of smell can result in food tasting bland. A person with an impaired sense of smell may require additional spice and seasoning levels for food to be tasted. Anosmia may also be related to some presentations of mild depression, because the loss of enjoyment of food may lead to a general sense of despair. The ability of olfactory neurons to replace themselves decreases with age, leading to age-related anosmia. This explains why some elderly people salt their food more than younger people do.[4]
122
+
123
+ Olfactory dysfunction can be caused by age, exposure to toxic chemicals, viral infections, epilepsy, neurodegenerative disease, head trauma, or another disorder.[5]
124
+
125
+ As studies of olfaction have continued, a positive correlation has been found between its dysfunction or degeneration and early signs of Alzheimer's disease and sporadic Parkinson's disease. Many patients do not notice the decline in smell before being tested. In Parkinson's disease and Alzheimer's disease, an olfactory deficit is present in 85 to 90% of early onset cases.[5] There is evidence that the decline of this sense can precede Alzheimer's or Parkinson's disease by a couple of years. Although the deficit is present in these two diseases, as well as others, it is important to note that the severity or magnitude varies with every disease. This has prompted suggestions that olfactory testing could be used in some cases to aid in differentiating among the neurodegenerative diseases.[5]
126
+
127
+ Those who were born without a sense of smell, or who have a damaged sense of smell, usually complain about one or more of three things. The olfactory sense serves as a warning against spoiled food; if the sense of smell is damaged or absent, a person may contract food poisoning more often. Lacking a sense of smell can also lead to damaged relationships, or insecurities within relationships, because of the person's inability to smell body odor. Lastly, smell influences how food and drink taste; when the olfactory sense is damaged, the satisfaction from eating and drinking is not as prominent.
128
+
129
+ Proprioception, the kinesthetic sense, provides the parietal cortex of the brain with information on the movement and relative positions of the parts of the body. Neurologists test this sense by telling patients to close their eyes and touch their own nose with the tip of a finger. Assuming proper proprioceptive function, at no time will the person lose awareness of where the hand actually is, even though it is not being detected by any of the other senses. Proprioception and touch are related in subtle ways, and their impairment results in surprising and deep deficits in perception and action.[43]
130
+
131
+ Nociception (physiological pain) signals nerve-damage or damage to tissue. The three types of pain receptors are cutaneous (skin), somatic (joints and bones), and visceral (body organs). It was previously believed that pain was simply the overloading of pressure receptors, but research in the first half of the 20th century indicated that pain is a distinct phenomenon that intertwines with all of the other senses, including touch. Pain was once considered an entirely subjective experience, but recent studies show that pain is registered in the anterior cingulate gyrus of the brain.[44] The main function of pain is to attract our attention to dangers and motivate us to avoid them. For example, humans avoid touching a sharp needle, or hot object, or extending an arm beyond a safe limit because it is dangerous, and thus hurts. Without pain, people could do many dangerous things without being aware of the dangers.
132
+
133
+ Internal sensation and perception, also known as interoception,[45] is "any sense that is normally stimulated from within the body".[46] These senses involve numerous sensory receptors in internal organs. Interoception is thought to be atypical in clinical conditions such as alexithymia.[47]
134
+ Some examples of specific receptors are:
135
+
136
+ Other living organisms have receptors to sense the world around them, including many of the senses listed above for humans. However, the mechanisms and capabilities vary widely.
137
+
138
+ An example of smell in non-mammals is that of sharks, which combine their keen sense of smell with timing to determine the direction of a smell; they follow the nostril that first detected the smell.[54] Insects have olfactory receptors on their antennae. However, the degree and magnitude to which non-human animals can smell better than humans is unknown.[55]
139
+
140
+ Many animals (salamanders, reptiles, mammals) have a vomeronasal organ[56] that is connected with the mouth cavity. In mammals it is mainly used to detect pheromones of marked territory, trails, and sexual state. Reptiles like snakes and monitor lizards make extensive use of it as a smelling organ by transferring scent molecules to the vomeronasal organ with the tips of the forked tongue. In reptiles the vomeronasal organ is commonly referred to as Jacobson's organ. In mammals, it is often associated with a special behavior called flehmen, characterized by uplifting of the lips. The organ is vestigial in humans: no associated neurons providing sensory input have been found in humans.[57]
141
+
142
+ Flies and butterflies have taste organs on their feet, allowing them to taste anything they land on. Catfish have taste organs across their entire bodies, and can taste anything they touch, including chemicals in the water.[58]
143
+
144
+ Cats have the ability to see in low light, which is due to muscles surrounding their irides, which contract and expand their pupils, as well as to the tapetum lucidum, a reflective membrane that optimizes the image.
145
+ Pit vipers, pythons and some boas have organs that allow them to detect infrared light, such that these snakes are able to sense the body heat of their prey. The common vampire bat may also have an infrared sensor on its nose.[59] It has been found that birds and some other animals are tetrachromats and have the ability to see in the ultraviolet down to 300 nanometers. Bees and dragonflies[60] are also able to see in the ultraviolet. Mantis shrimps can perceive both polarized light and multispectral images and have twelve distinct kinds of color receptors, unlike humans which have three kinds and most mammals which have two kinds.[61]
146
+
147
+ Cephalopods have the ability to change color using chromatophores in their skin. Researchers believe that opsins in the skin can sense different wavelengths of light and help the creatures choose a coloration that camouflages them, in addition to light input from the eyes.[62] Other researchers hypothesize that cephalopod eyes in species which only have a single photoreceptor protein may use chromatic aberration to turn monochromatic vision into color vision,[63] explaining pupils shaped like the letter U, the letter W, or a dumbbell, as well as explaining the need for colorful mating displays.[64] Some cephalopods can distinguish the polarization of light.
148
+
149
+ Many invertebrates have a statocyst, which is a sensor for acceleration and orientation that works very differently from the mammalian semicircular canals.
150
+
151
+ In addition, some animals have senses that humans do not, including the following:
152
+
153
+ Magnetoception (or magnetoreception) is the ability to detect the direction one is facing based on the Earth's magnetic field. Directional awareness is most commonly observed in birds, which rely on their magnetic sense to navigate during migration.[65][66][67][68] It has also been observed in insects such as bees. Cattle make use of magnetoception to align themselves in a north–south direction.[69] Magnetotactic bacteria build miniature magnets inside themselves and use them to determine their orientation relative to the Earth's magnetic field.[70][71] There has been some recent (tentative) research suggesting that the rhodopsin in the human eye, which responds particularly well to blue light, can facilitate magnetoception in humans.[72]
154
+
155
+ Certain animals, including bats and cetaceans, have the ability to determine orientation to other objects through interpretation of reflected sound (like sonar). They most often use this to navigate through poor lighting conditions or to identify and track prey. It is currently uncertain whether this is simply an extremely developed post-sensory interpretation of auditory perceptions or whether it actually constitutes a separate sense. Resolution of the issue will require brain scans of animals while they actually perform echolocation, a task that has proven difficult in practice.
156
+
157
+ Blind people report they are able to navigate and in some cases identify an object by interpreting reflected sounds (especially their own footsteps), a phenomenon known as human echolocation.
158
+
159
+ Electroreception (or electroception) is the ability to detect electric fields. Several species of fish, sharks, and rays have the capacity to sense changes in electric fields in their immediate vicinity. For cartilaginous fish this occurs through a specialized organ called the Ampullae of Lorenzini. Some fish passively sense changing nearby electric fields; some generate their own weak electric fields, and sense the pattern of field potentials over their body surface; and some use these electric field generating and sensing capacities for social communication. The mechanisms by which electroceptive fish construct a spatial representation from very small differences in field potentials involve comparisons of spike latencies from different parts of the fish's body.
160
+
161
+ The only mammals known to demonstrate electroception are the dolphins and the monotremes. Among these mammals, the platypus[73] has the most acute sense of electroception.
162
+
163
+ A dolphin can detect electric fields in water using electroreceptors in vibrissal crypts arrayed in pairs on its snout and which evolved from whisker motion sensors.[74] These electroreceptors can detect electric fields as weak as 4.6 microvolts per centimeter, such as those generated by contracting muscles and pumping gills of potential prey. This permits the dolphin to locate prey from the seafloor where sediment limits visibility and echolocation.
164
+
165
+ Spiders have been shown to detect electric fields to determine a suitable time to extend web for 'ballooning'.[75]
166
+
167
+ Body modification enthusiasts have experimented with magnetic implants to attempt to replicate this sense.[76] However, in general humans (and it is presumed other mammals) can detect electric fields only indirectly by detecting the effect they have on hairs. An electrically charged balloon, for instance, will exert a force on human arm hairs, which can be felt through tactition and identified as coming from a static charge (and not from wind or the like). This is not electroreception, as it is a post-sensory cognitive action.
168
+
169
+ Hygroreception is the ability to detect changes in the moisture content of the environment.[11][77]
170
+
171
+ The ability to sense infrared thermal radiation evolved independently in various families of snakes. Essentially, it allows these reptiles to "see" radiant heat at wavelengths between 5 and 30 μm to a degree of accuracy such that a blind rattlesnake can target vulnerable body parts of the prey at which it strikes.[78] The facial pit underwent parallel evolution in pitvipers and some boas and pythons, having evolved once in pitvipers and multiple times in boas and pythons.[80] The electrophysiology of the structure is similar between the two lineages, but they differ in gross structural anatomy. Most superficially, pitvipers possess one large pit organ on either side of the head, between the eye and the nostril (the loreal pit), while boas and pythons have three or more comparatively smaller pits lining the upper and sometimes the lower lip, in or between the scales. Those of the pitvipers are the more advanced, having a suspended sensory membrane as opposed to a simple pit structure. Within the family Viperidae, the pit organ is seen only in the subfamily Crotalinae: the pitvipers. The organ is used extensively to detect and target endothermic prey such as rodents and birds, and it was previously assumed that it evolved specifically for that purpose. However, recent evidence shows that the pit organ may also be used for thermoregulation: according to Krochmal et al., pitvipers can use their pits for thermoregulatory decision-making, while true vipers (vipers that lack heat-sensing pits) cannot.[79]
172
+
173
+ In spite of its detection of IR light, the pits' IR detection mechanism is not similar to photoreceptors – while photoreceptors detect light via photochemical reactions, the protein in the pits of snakes is in fact a temperature-sensitive ion channel. It senses infrared signals through a mechanism involving warming of the pit organ, rather than a chemical reaction to light.[81] This is consistent with the thin pit membrane, which allows incoming IR radiation to quickly and precisely warm a given ion channel and trigger a nerve impulse, and with the vascularization of the pit membrane, which rapidly cools the ion channel back to its original "resting" or "inactive" temperature.[81]
174
+
175
+ Pressure detection uses the organ of Weber, a system consisting of three appendages of vertebrae transferring changes in shape of the gas bladder to the middle ear. It can be used to regulate the buoyancy of the fish. Fish like the weatherfish and other loaches are also known to respond to low-pressure areas, but they lack a swim bladder.
176
+
177
+ Current detection is a detection system of water currents, consisting mostly of vortices, found in the lateral line of fish and aquatic forms of amphibians. The lateral line is also sensitive to low-frequency vibrations. The mechanoreceptors are hair cells, the same mechanoreceptors for vestibular sense and hearing. It is used primarily for navigation, hunting, and schooling. The receptors of the electrical sense are modified hair cells of the lateral line system.
178
+
179
+ Polarized light direction/detection is used by bees to orient themselves, especially on cloudy days. Cuttlefish, some beetles, and mantis shrimp can also perceive the polarization of light. Most sighted humans can in fact learn to roughly detect large areas of polarization by an effect called Haidinger's brush; however, this is considered an entoptic phenomenon rather than a separate sense.
180
+
181
+ Slit sensillae of spiders detect mechanical strain in the exoskeleton, providing information on force and vibrations.
182
+
183
+ By using a variety of sense receptors, plants sense light, temperature, humidity, chemical substances, chemical gradients, reorientation, magnetic fields, infections, tissue damage and mechanical pressure. The absence of a nervous system notwithstanding, plants interpret and respond to these stimuli by a variety of hormonal and cell-to-cell communication pathways that result in movement, morphological changes and physiological state alterations at the organism level, that is, result in plant behavior. Such physiological and cognitive functions are generally not believed to give rise to mental phenomena or qualia, however, as these are typically considered the product of nervous system activity. The emergence of mental phenomena from the activity of systems functionally or computationally analogous to that of nervous systems is, however, a hypothetical possibility explored by some schools of thought in the philosophy of mind field, such as functionalism and computationalism.
184
+
185
+ However, plants may perceive the world around them,[15] and might be able to emit airborne sounds similar to "screaming" when stressed. These noises are not detectable by human ears, but organisms with a hearing range that extends to ultrasonic frequencies, such as mice, bats, or perhaps other plants, could hear the plants' cries from as far as 15 feet (4.6 m) away.[82]
186
+
187
+ Machine perception is the capability of a computer system to interpret data in a manner that is similar to the way humans use their senses to relate to the world around them.[16][17][83] Computers take in and respond to their environment through attached hardware. Until recently, input was limited to a keyboard, joystick or a mouse, but advances in technology, both in hardware and software, have allowed computers to take in sensory input in a way similar to humans.[16][17]
188
+
189
+ In the time of William Shakespeare, there were commonly reckoned to be five wits or five senses.[85] At that time, the words "sense" and "wit" were synonyms,[85] so the senses were known as the five outward wits.[86][87] This traditional concept of five senses is common today.
190
+
191
+ The traditional five senses are enumerated as the "five material faculties" (pañcannaṃ indriyānaṃ avakanti) in Hindu literature. They appear in allegorical representation as early as in the Katha Upanishad (roughly 6th century BC), as five horses drawing the "chariot" of the body, guided by the mind as "chariot driver".
192
+
193
+ Depictions of the five traditional senses as allegory became a popular subject for seventeenth-century artists, especially among Dutch and Flemish Baroque painters. A typical example is Gérard de Lairesse's Allegory of the Five Senses (1668), in which each of the figures in the main group alludes to a sense: Sight is the reclining boy with a convex mirror, hearing is the cupid-like boy with a triangle, smell is represented by the girl with flowers, taste is represented by the woman with the fruit, and touch is represented by the woman holding the bird.
194
+
195
+ In Buddhist philosophy, Ayatana or "sense-base" includes the mind as a sense organ, in addition to the traditional five. This addition to the commonly acknowledged senses may arise from the psychological orientation involved in Buddhist thought and practice. The mind considered by itself is seen as the principal gateway to a different spectrum of phenomena that differ from the physical sense data. This way of viewing the human sense system indicates the importance of internal sources of sensation and perception that complement our experience of the external world.[citation needed]
en/5347.html.txt ADDED
@@ -0,0 +1,73 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Existence is the ability of an entity to interact with physical or mental reality. In philosophy, it refers to the ontological property[1] of being.[2]
2
+
3
+ The term existence comes from Old French existence, from Medieval Latin existentia/exsistentia.[3]
4
+
5
+ Materialism holds that the only things that exist are matter and energy, that all things are composed of material, that all actions require energy, and that all phenomena (including consciousness) are the result of the interaction of matter. Dialectical materialism does not make a distinction between being and existence, and defines it as the objective reality of various forms of matter.[2]
6
+
7
+ Idealism holds that the only things that exist are thoughts and ideas, while the material world is secondary.[4][5] In idealism, existence is sometimes contrasted with transcendence, the ability to go beyond the limits of existence.[2] As a form of epistemological idealism, rationalism interprets existence as cognizable and rational: all things are composed of strings of reasoning, each requiring an associated idea of the thing, and all phenomena (including consciousness) are the result of an understanding of the imprint from the noumenal world, which lies beyond the thing-in-itself.
8
+
9
+ In scholasticism, the existence of a thing is not derived from its essence but is determined by the creative volition of God; the dichotomy of existence and essence demonstrates that the dualism of the created universe is only resolvable through God.[2]
10
+ Empiricism recognizes the existence of singular facts, which are not derivable and which are observable through empirical experience.
11
+
12
+ The exact definition of existence is one of the most important and fundamental topics of ontology, the philosophical study of the nature of being, existence, or reality in general, as well as of the basic categories of being and their relations. Traditionally listed as a part of the major branch of philosophy known as metaphysics, ontology deals with questions concerning what things or entities exist or can be said to exist, and how such things or entities can be grouped, related within a hierarchy, and subdivided according to similarities and differences.
13
+
14
+ In the Western tradition of philosophy, the earliest known comprehensive treatments of the subject are from Plato's Phaedo, Republic, and Statesman and Aristotle's Metaphysics, though earlier fragmentary writing exists. Aristotle developed a comprehensive theory of being, according to which only individual things, called substances, fully have being, while other things such as relations, quantity, time, and place (called the categories) have a derivative kind of being, dependent on individual things. In Aristotle's Metaphysics, there are four causes of existence or change in nature: the material cause, the formal cause, the efficient cause and the final cause.
15
+
16
+ The Neo-Platonists and some early Christian philosophers argued about whether existence had any reality except in the mind of God.[citation needed] Some taught that existence was a snare and a delusion, that the world, the flesh, and the devil existed only to tempt weak humankind away from God.
17
+
18
+ In Hindu philosophy, the term Advaita refers to its idea that the true self, Atman, is the same as the highest metaphysical Reality (Brahman). The Upanishads describe the universe, and the human experience, as an interplay of Purusha (the eternal, unchanging principles, consciousness) and Prakṛti (the temporary, changing material world, nature). The former manifests itself as Ātman (Soul, Self), and the latter as Māyā. The Upanishads refer to the knowledge of Atman as "true knowledge" (Vidya), and the knowledge of Maya as "not true knowledge" (Avidya, Nescience, lack of awareness, lack of true knowledge).
19
+
20
+ The medieval philosopher Thomas Aquinas argued that God is pure being, and that in God essence and existence are the same. More specifically, what is identical in God, according to Aquinas, is God's essence and God's actus essendi.[6] At about the same time, the nominalist philosopher William of Ockham argued, in Book I of his Summa Totius Logicae (Treatise on all Logic, written some time before 1327), that Categories are not a form of Being in their own right, but derivative on the existence of individuals.
21
+
22
+ The Indian philosopher Nagarjuna (c. 150–250 CE) substantially advanced concepts of existence and founded the Madhyamaka school of Mahāyāna Buddhism.
23
+
24
+ In Eastern philosophy, Anicca (Sanskrit anitya) or "impermanence" describes existence. It refers to the fact that all conditioned things (sankhara) are in a constant state of flux. In reality there is no thing that ultimately ceases to exist; only the appearance of a thing ceases as it changes from one form to another. Imagine a leaf that falls to the ground and decomposes. While the appearance and relative existence of the leaf ceases, the components that formed the leaf become particulate material that goes on to form new plants. Buddhism teaches a middle way, avoiding the extreme views of eternalism and nihilism.[7] The middle way recognizes there are vast differences between the way things are perceived to exist and the way things really exist. The differences are reconciled in the concept of Shunyata by addressing the existing object's served purpose for the subject's identity in being. What exists is in non-existence, because the subject changes.
25
+
26
+ Trailokya elaborates on three kinds of existence, those of desire, form, and formlessness in which there are karmic rebirths. Taken further to the Trikaya doctrine, it describes how the Buddha exists. In this philosophy, it is accepted that Buddha exists in more than one absolute way.
27
+
28
+ The early modern treatment of the subject derives from Antoine Arnauld and Pierre Nicole's Logic, or The Art of Thinking, better known as the Port-Royal Logic, first published in 1662. Arnauld thought that a proposition or judgment consists of taking two different ideas and either putting them together or rejecting them:
29
+
30
+ After conceiving things by our ideas, we compare these ideas and, finding that some belong together and others do not, we unite or separate them. This is called affirming or denying, and in general judging.
31
+ This judgment is also called a proposition, and it is easy to see that it must have two terms. One term, of which one affirms or denies something, is called the subject; the other term, which is affirmed or denied, is called the attribute or Praedicatum.
32
+
33
+ The two terms are joined by the verb "is" (or "is not", if the predicate is denied of the subject). Thus every proposition has three components: the two terms, and the "copula" that connects or separates them. Even when the proposition has only two words, the three components are still there. For example, "God loves humanity" really means "God is a lover of humanity", and "God exists" means "God is a thing".
34
+
35
+ This theory of judgment dominated logic for centuries, but it has some obvious difficulties: it only considers propositions of the form "All A are B", a form logicians call universal. It does not allow propositions of the form "Some A are B", a form logicians call existential. If neither A nor B includes the idea of existence, then "some A are B" simply adjoins A to B. Conversely, if A or B do include the idea of existence in the way that "triangle" contains the idea "three angles equal to two right angles", then "A exists" is automatically true, and we have an ontological proof of A's existence. (Indeed, Arnauld's contemporary Descartes famously argued so, regarding the concept "God" (Discourse 4, Meditation 5).) Arnauld's theory was current until the middle of the nineteenth century.
36
+
37
+ David Hume argued that the claim that a thing exists, when added to our notion of a thing, does not add anything to the concept. For example, if we form a complete notion of Moses, and superadd to that notion the claim that Moses existed, we are not adding anything to the notion of Moses.
38
+
39
+ Kant also argued that existence is not a "real" predicate, but gave no explanation of how this is possible. Indeed, his famous discussion of the subject is merely a restatement of Arnauld's doctrine that in the proposition "God is omnipotent", the verb "is" signifies the joining or separating of two concepts such as "God" and "omnipotence".[original research?]
40
+
41
+ Schopenhauer claimed that “everything that exists for knowledge, and hence the whole of this world, is only object in relation to the subject, the perception of the perceiver, in a word, representation.”[8] According to him there can be "No object without subject" because "everything objective is already conditioned as such in manifold ways by the knowing subject with the forms of its knowing, and presupposes these forms..."[9]
42
+
43
+ John Stuart Mill (and also Kant's pupil Herbart) argued that the predicative nature of existence was proved by sentences like "A centaur is a poetic fiction"[10] or "A greatest number is impossible" (Herbart).[11] Franz Brentano challenged this; so also (as is better known) did Frege. Brentano argued that we can join the concept represented by a noun phrase "an A" to the concept represented by an adjective "B" to give the concept represented by the noun phrase "a B-A". For example, we can join "a man" to "wise" to give "a wise man". But the noun phrase "a wise man" is not a sentence, whereas "some man is wise" is a sentence. Hence the copula must do more than merely join or separate concepts. Furthermore, adding "exists" to "a wise man", to give the complete sentence "a wise man exists", has the same effect as joining "some man" to "wise" using the copula. So the copula has the same effect as "exists". Brentano argued that every categorical proposition can be translated into an existential one without change in meaning, with the "exists" and "does not exist" of the existential proposition taking the place of the copula, and he demonstrated this by translating example categorical propositions into existential ones.
44
+
45
+ Frege developed a similar view (though later) in his great work The Foundations of Arithmetic, as did Charles Sanders Peirce (but Peirce held that the possible and the real are not limited to the actual, individually existent). The Frege-Brentano view is the basis of the dominant position in modern Anglo-American philosophy: that existence is asserted by the existential quantifier (as expressed by Quine's slogan "To be is to be the value of a variable." — On What There Is, 1948).[12]
46
+
47
+ In mathematical logic, there are two quantifiers, "some" and "all", though as Brentano (1838–1917) pointed out, we can make do with just one quantifier and negation. The first of these quantifiers, "some", is also expressed as "there exists". Thus, in the sentence "There exists a man", the term "man" is asserted to be part of existence. But we can also assert, "There exists a triangle." Is a "triangle"—an abstract idea—part of existence in the same way that a "man"—a physical body—is part of existence? Do abstractions such as goodness, blindness, and virtue exist in the same sense that chairs, tables, and houses exist? What categories, or kinds of thing, can be the subject or the predicate of a proposition?
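+
+ As a brief formal sketch (the modern notation here is assumed for illustration, not drawn from the sources above), the interdefinability Brentano pointed out can be written as:
+
+     \exists x\, \varphi(x) \;\equiv\; \neg \forall x\, \neg \varphi(x)
+
+ On this reading, "There exists a man" is rendered as \exists x\, \mathrm{Man}(x), the form in which the existential quantifier expresses existence claims.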
48
+
49
+ Worse, does "existence" exist?[13]
50
+
51
+ In some statements, existence is implied without being mentioned. The statement "A bridge crosses the Thames at Hammersmith" cannot just be about a bridge, the Thames, and Hammersmith. It must be about "existence" as well. On the other hand, the statement "A bridge crosses the Styx at Limbo" has the same form, but while in the first case we understand a real bridge in the real world made of stone or brick, what "existence" would mean in the second case is less clear.
52
+
53
+ The nominalist approach is to argue that certain noun phrases can be "eliminated" by rewriting a sentence in a form that has the same meaning but does not contain the noun phrase. Thus Ockham argued that "Socrates has wisdom", which apparently asserts the existence of a reference for "wisdom", can be rewritten as "Socrates is wise", which contains only the referring phrase "Socrates".[14] This method became widely accepted in the twentieth century by the analytic school of philosophy.
54
+
55
+ However, this argument may be inverted by realists in arguing that since the sentence "Socrates is wise" can be rewritten as "Socrates has wisdom", this proves the existence of a hidden referent for "wise".
56
+
57
+ A further problem is that human beings seem to process information about fictional characters in much the same way that they process information about real people. For example, in the 2008 United States presidential election, a politician and actor named Fred Thompson ran for the Republican Party nomination. In polls, potential voters identified Fred Thompson as a "law and order" candidate. Thompson played a fictional character on the television series Law and Order. The people who made the comment were aware that Law and Order is fiction, but at some level they may process fiction as if it were fact, a process included in what is called the Paradox of Fiction.[dubious – discuss][15] Another example of this is the common experience of actresses who play the villain in a soap opera being accosted in public as if they are to blame for the actions of the characters they play.
58
+
59
+ A scientist might make a clear distinction between objects that exist, and assert that all objects that exist are made up of either matter or energy. But in the layperson's worldview, existence includes real, fictional, and even contradictory objects. Thus if we reason from the statement "Pegasus flies" to the statement "Pegasus exists", we are not asserting that Pegasus is made up of atoms, but rather that Pegasus exists in the worldview of classical myth. When a mathematician reasons from the statement "ABC is a triangle" to the statement "triangles exist", the mathematician is not asserting that triangles are made up of atoms but rather that triangles exist within a particular mathematical model.
60
+
61
+ According to Bertrand Russell's Theory of Descriptions, the negation operator in a singular sentence can take either wide or narrow scope: we distinguish between "some S is not P" (where negation takes "narrow scope") and "it is not the case that 'some S is P'" (where negation takes "wide scope"). The problem with this view is that there appears to be no such scope distinction in the case of proper names. The sentences "Socrates is not bald" and "it is not the case that Socrates is bald" both appear to have the same meaning, and they both appear to assert or presuppose the existence of someone (Socrates) who is not bald, so that negation takes a narrow scope. However, Russell's theory analyzes proper names into a logical structure which makes sense of this problem. According to Russell, Socrates can be analyzed into the form "the philosopher of Greece". In the wide scope, this would then read: it is not the case that there existed a philosopher of Greece who was bald. In the narrow scope, it would read: the philosopher of Greece was not bald.
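+
+ The two scope readings can be sketched in the notation of the Theory of Descriptions; the predicate letters are chosen here for illustration (P for "is a philosopher of Greece", B for "is bald"):
+
+     Narrow scope: \exists x\, (Px \land \forall y\, (Py \to y = x) \land \neg Bx)
+     Wide scope:   \neg \exists x\, (Px \land \forall y\, (Py \to y = x) \land Bx)
+
+ The narrow reading asserts that there is exactly one philosopher of Greece and that he is not bald; the wide reading merely denies that there is exactly one philosopher of Greece who is bald, and so carries no commitment to his existence.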
62
+
63
+ According to the direct-reference view, an early version of which was originally proposed by Bertrand Russell, and perhaps earlier by Gottlob Frege, a proper name strictly has no meaning when there is no object to which it refers. This view relies on the argument that the semantic function of a proper name is to tell us which object bears the name, and thus to identify some object. But no object can be identified if none exists. Thus, a proper name must have a bearer if it is to be meaningful.
64
+
65
+ According to the "two sense" view of existence, which derives from Alexius Meinong, existential statements fall into two classes.
66
+
67
+ The problem is then evaded as follows. "Pegasus flies" implies existence in the wide sense, for it implies that something flies. But it does not imply existence in the narrow sense, for we deny existence in this sense by saying that Pegasus does not exist. In effect, the world of all things divides, on this view, into those (like Socrates, the planet Venus, and New York City) that have existence in the narrow sense, and those (like Sherlock Holmes, the goddess Venus, and Minas Tirith) that do not.
68
+
69
+ However, common sense suggests the non-existence of such things as fictional characters or places.
70
+
71
+ Influenced by the views of Brentano's pupil Alexius Meinong, and by Edmund Husserl, Germanophone and Francophone philosophy took a different direction regarding the question of existence.
72
+
73
+ Anti-realism is the view of idealists who are skeptics about the physical world, maintaining either: (1) that nothing exists outside the mind, or (2) that we would have no access to a mind-independent reality even if it may exist. Realists, in contrast, hold that perceptions or sense data are caused by mind-independent objects. An "anti-realist" who denies that other minds exist (i.e., a solipsist) is different from an "anti-realist" who claims that there is no fact of the matter as to whether or not there are unobservable other minds (i.e., a logical behaviorist).
en/5348.html.txt ADDED
@@ -0,0 +1,195 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Sensation is the physical process during which sensory systems respond to stimuli and provide data for perception.[1] A sense is any of the systems involved in sensation. During sensation, sense organs engage in stimulus collection and transduction.[2] Sensation is often differentiated from the related and dependent concept of perception, which processes and integrates sensory information in order to give meaning to and understand detected stimuli, giving rise to subjective perceptual experience, or qualia.[3] Sensation and perception are central to and precede almost all aspects of cognition, behavior and thought.[1]
2
+
3
+ In organisms, a sensory organ consists of a group of related sensory cells that respond to a specific type of physical stimulus. Via cranial and spinal nerves, the different types of sensory receptor cells (mechanoreceptors, photoreceptors, chemoreceptors, thermoreceptors) in sensory organs transmit sensory information from sensory organs towards the central nervous system, to the sensory cortices in the brain, where sensory signals are further processed and interpreted (perceived).[1][4][5] Sensory systems, or senses, are often divided into external (exteroception) and internal (interoception) sensory systems.[6][7] Sensory modalities or submodalities refer to the way sensory information is encoded or transduced.[4] Multimodality integrates different senses into one unified perceptual experience. For example, information from one sense has the potential to influence how information from another is perceived.[2] Sensation and perception are studied by a variety of related fields, most notably psychophysics, neurobiology, cognitive psychology, and cognitive science.[1]
4
+
5
+ Humans have a multitude of sensory systems. Human external sensation is based on the sensory organs of the eyes, ears, skin, inner ear, nose, and mouth. The corresponding sensory systems of the visual system (sense of vision), auditory system (sense of hearing), somatosensory system (sense of touch), vestibular system (sense of balance), olfactory system (sense of smell), and gustatory system (sense of taste) contribute, respectively, to the perceptions of vision, hearing, touch, spatial orientation, smell, and taste (flavor).[2][1] Internal sensation, or interoception, detects stimuli from internal organs and tissues. Many internal sensory and perceptual systems exist in humans, including proprioception (body position) and nociception (pain). Further internal chemoreception and osmoreception based sensory systems lead to various perceptions, such as hunger, thirst, suffocation, and nausea, or different involuntary behaviors, such as vomiting.[6][7][8]
6
+
7
+ Nonhuman animals experience sensation and perception, with varying levels of similarity to and difference from humans and other animal species. For example, mammals, in general, have a stronger sense of smell than humans. Some animal species lack one or more human sensory system analogues, some have sensory systems that are not found in humans, while others process and interpret the same sensory information in very different ways. For example, some animals are able to detect electrical[9] and magnetic fields,[10] air moisture,[11] or polarized light,[12] while others sense and perceive through alternative systems, such as echolocation.[13][14] Recently, it has been suggested that plants and artificial agents may be able to detect and interpret environmental information in an analogous manner to animals.[15][16][17]
8
+
9
+ Sensory modality refers to the way that information is encoded, which is similar to the idea of transduction. The main sensory modalities can be described on the basis of how each is transduced. Listing all the different sensory modalities, which can number as many as 17, involves separating the major senses into more specific categories, or submodalities, of the larger sense. An individual sensory modality represents the sensation of a specific type of stimulus. For example, the general sensation and perception of touch, which is known as somatosensation, can be separated into light pressure, deep pressure, vibration, itch, pain, temperature, or hair movement, while the general sensation and perception of taste can be separated into submodalities of sweet, salty, sour, bitter, spicy, and umami, all of which are based on different chemicals binding to sensory neurons.[4]
10
+
11
+ Sensory receptors are the cells or structures that detect sensations. Stimuli in the environment activate specialized receptor cells in the peripheral nervous system. During transduction, physical stimulus is converted into action potential by receptors and transmitted towards the central nervous system for processing.[5] Different types of stimuli are sensed by different types of receptor cells. Receptor cells can be classified into types on the basis of three different criteria: cell type, position, and function. Receptors can be classified structurally on the basis of cell type and their position in relation to stimuli they sense. Receptors can further be classified functionally on the basis of the transduction of stimuli, or how the mechanical stimulus, light, or chemical changed the cell membrane potential.[4]
12
+
13
+ One way to classify receptors is based on their location relative to the stimuli. An exteroceptor is a receptor that is located near a stimulus of the external environment, such as the somatosensory receptors that are located in the skin. An interoceptor is one that interprets stimuli from internal organs and tissues, such as the receptors that sense the increase in blood pressure in the aorta or carotid sinus.[4]
14
+
15
+ The cells that interpret information about the environment can be either (1) a neuron that has a free nerve ending, with dendrites embedded in tissue that would receive a sensation; (2) a neuron that has an encapsulated ending in which the sensory nerve endings are encapsulated in connective tissue that enhances their sensitivity; or (3) a specialized receptor cell, which has distinct structural components that interpret a specific type of stimulus. The pain and temperature receptors in the dermis of the skin are examples of neurons that have free nerve endings (1). Also located in the dermis of the skin are lamellated corpuscles, neurons with encapsulated nerve endings that respond to pressure and touch (2). The cells in the retina that respond to light stimuli are an example of a specialized receptor (3), a photoreceptor.[4]
16
+
17
+ A transmembrane protein receptor is a protein in the cell membrane that mediates a physiological change in a neuron, most often through the opening of ion channels or changes in the cell signaling processes. Transmembrane receptors are activated by chemicals called ligands. For example, a molecule in food can serve as a ligand for taste receptors. Other transmembrane proteins, which are not accurately called receptors, are sensitive to mechanical or thermal changes. Physical changes in these proteins increase ion flow across the membrane, and can generate an action potential or a graded potential in the sensory neurons.[4]
18
+
19
+ A third classification of receptors is by how the receptor transduces stimuli into membrane potential changes. Stimuli are of three general types. Some stimuli are ions and macromolecules that affect transmembrane receptor proteins when these chemicals diffuse across the cell membrane. Some stimuli are physical variations in the environment that affect receptor cell membrane potentials. Other stimuli include the electromagnetic radiation from visible light. For humans, the only electromagnetic energy that is perceived by our eyes is visible light. Some other organisms have receptors that humans lack, such as the heat sensors of snakes, the ultraviolet light sensors of bees, or magnetic receptors in migratory birds.[4]
20
+
21
+ Receptor cells can be further categorized on the basis of the type of stimuli they transduce. The different types of functional receptor cell are mechanoreceptors, photoreceptors, chemoreceptors (including osmoreceptors), thermoreceptors, and nociceptors. Physical stimuli, such as pressure and vibration, as well as the sensation of sound and body position (balance), are interpreted through a mechanoreceptor. Photoreceptors convert light (visible electromagnetic radiation) into signals. Chemical stimuli, such as an object's taste or smell, are interpreted by chemoreceptors, while osmoreceptors respond to the solute concentrations of body fluids. Nociception (pain) interprets the presence of tissue damage, using sensory information from mechano-, chemo-, and thermoreceptors.[18] Another physical stimulus that has its own type of receptor is temperature, which is sensed through a thermoreceptor that is either sensitive to temperatures above (heat) or below (cold) normal body temperature.[4]
22
+
23
+ Each sense organ (eyes or nose, for instance) requires a minimal amount of stimulation in order to detect a stimulus. This minimum amount of stimulus is called the absolute threshold.[2] The absolute threshold is defined as the minimum amount of stimulation necessary for the detection of a stimulus 50% of the time.[1] Absolute threshold is measured by using a method called signal detection. This process involves presenting stimuli of varying intensities to a subject in order to determine the level at which the subject can reliably detect stimulation in a given sense.[2]
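+
+ The signal-detection procedure described above can be illustrated with a small simulation; this is a minimal sketch in which the logistic psychometric function, the threshold of 10, and the slope of 1.5 are illustrative assumptions, not measured values:
+
+     import math
+
+     def detection_probability(intensity, threshold=10.0, slope=1.5):
+         # Toy logistic psychometric function: detection becomes more
+         # likely as stimulus intensity rises past the threshold.
+         return 1.0 / (1.0 + math.exp(-slope * (intensity - threshold)))
+
+     # Present stimuli of varying intensities and find the absolute
+     # threshold: the intensity detected on 50% of presentations.
+     for intensity in range(5, 16):
+         p = detection_probability(intensity)
+         flag = "  <- absolute threshold (~50%)" if abs(p - 0.5) < 0.05 else ""
+         print(f"intensity {intensity:2d}: P(detect) = {p:.2f}{flag}")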
24
+
25
+ Differential threshold or just noticeable difference (JND) is the smallest detectable difference between two stimuli, or the smallest difference in stimuli that can be judged to be different from each other.[1] Weber's Law is an empirical law that states that the difference threshold is a constant fraction of the comparison stimulus.[1] According to Weber's Law, bigger stimuli require larger differences to be noticed.[2]
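+
+ In symbols, Weber's Law states that the just noticeable difference \Delta I is a constant fraction k of the comparison stimulus I:
+
+     \Delta I / I = k
+
+ As a worked example with an illustrative Weber fraction of k = 0.02 (a commonly cited classroom value for lifted weights, not a figure from this article), a 100 g standard weight would need to change by about 2 g to be noticed, while a 1,000 g standard would need to change by about 20 g.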
26
+
27
+ Magnitude estimation is a psychophysical method in which subjects assign perceived values to given stimuli. The relationship between stimulus intensity and perceived intensity is described by Stevens' power law.[1]
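+
+ In symbols, Stevens' power law relates perceived magnitude \psi to stimulus intensity I:
+
+     \psi(I) = k\, I^{a}
+
+ The exponent a depends on the sensory continuum; as an illustration using commonly cited textbook values (not figures from this article), a is roughly 0.33 for brightness, so perception grows more slowly than the stimulus, and roughly 3.5 for electric shock, so perception grows faster than the stimulus.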
28
+
29
+ Signal detection theory quantifies the experience of the subject to the presentation of a stimulus in the presence of noise. There is internal noise and there is external noise when it comes to signal detection. The internal noise originates from static in the nervous system. For example, an individual with closed eyes in a dark room still sees something (a blotchy pattern of grey with intermittent brighter flashes); this is internal noise. External noise is the result of noise in the environment that can interfere with the detection of the stimulus of interest. Noise is only a problem if the magnitude of the noise is large enough to interfere with signal collection. The nervous system calculates a criterion, or an internal threshold, for the detection of a signal in the presence of noise. If a signal is judged to be above the criterion, and thus differentiated from the noise, the signal is sensed and perceived. Errors in signal detection can potentially lead to false positives and false negatives. The sensory criterion might be shifted based on the importance of detecting the signal. Shifting of the criterion may influence the likelihood of false positives and false negatives.[1]
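+
+ The criterion and the separation of signal from noise can be made concrete with the standard equal-variance Gaussian model of signal detection; in this minimal sketch the hit and false-alarm rates are hypothetical values for an imagined observer:
+
+     from scipy.stats import norm
+
+     def dprime_and_criterion(hit_rate, false_alarm_rate):
+         # Equal-variance Gaussian signal detection measures.
+         z_hit = norm.ppf(hit_rate)         # z-transform of the hit rate
+         z_fa = norm.ppf(false_alarm_rate)  # z-transform of the false-alarm rate
+         d_prime = z_hit - z_fa             # sensitivity: signal/noise separation
+         criterion = -0.5 * (z_hit + z_fa)  # bias: positive = conservative observer
+         return d_prime, criterion
+
+     # Hypothetical observer: reports the signal on 80% of signal trials
+     # (hits) and on 20% of noise-only trials (false alarms).
+     d, c = dprime_and_criterion(0.80, 0.20)
+     print(f"d' = {d:.2f}, criterion c = {c:.2f}")  # d' ≈ 1.68, c ≈ 0.00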
30
+
31
+ Subjective visual and auditory experiences appear to be similar across humans subjects. The same cannot be said about taste. For example, there is a molecule called propylthiouracil (PROP) that some humans experience as bitter, some as almost tasteless, while others experience it as somewhere between tasteless and bitter. There is a genetic basis for this difference between perception given the same sensory stimulus. This subjective difference in taste perception has implications for individuals' food preferences, and consequently, health.[1]
32
+
33
+ When a stimulus is constant and unchanging, perceptual sensory adaptation occurs. During this process, the subject becomes less sensitive to the stimulus.[2]
34
+
35
+ Biological auditory (hearing), vestibular and spatial, and visual systems (vision) appear to break down real-world complex stimuli into sine wave components, through the mathematical process called Fourier analysis. Many neurons have a strong preference for certain sine frequency components in contrast to others. The way that simpler sounds and images are encoded during sensation can provide insight into how perception of real-world objects happens.[1]
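+
+ The decomposition described above can be demonstrated numerically; this sketch builds an artificial "stimulus" from two sine components and recovers them with a Fourier transform (the frequencies and amplitudes are arbitrary illustrative choices):
+
+     import numpy as np
+
+     # A compound tone: a 440 Hz sine plus a quieter 880 Hz overtone,
+     # sampled at 8 kHz for one second.
+     sample_rate = 8000
+     t = np.arange(sample_rate) / sample_rate
+     signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)
+
+     # Fourier analysis recovers the sine-wave components and their strengths.
+     spectrum = np.abs(np.fft.rfft(signal)) / (sample_rate / 2)
+     freqs = np.fft.rfftfreq(sample_rate, d=1 / sample_rate)
+     for f, amp in zip(freqs, spectrum):
+         if amp > 0.1:
+             print(f"{f:.0f} Hz, amplitude {amp:.2f}")  # 440 Hz (1.00), 880 Hz (0.50)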
36
+
37
+ Perception occurs when nerves that lead from the sensory organs (e.g. the eye) to the brain are stimulated, even if that stimulation is unrelated to the target signal of the sensory organ. For example, in the case of the eye, it does not matter whether light or something else stimulates the optic nerve; that stimulation will result in visual perception, even if there was no visual stimulus to begin with. (To prove this point to yourself (and if you are a human), close your eyes (preferably in a dark room) and press gently on the outside corner of one eye through the eyelid. You will see a visual spot toward the inside of your visual field, near your nose.)[1]
38
+
39
+ All stimuli received by the receptors are transduced to an action potential, which is carried along one or more afferent neurons towards a specific area (cortex) of the brain. Just as different nerves are dedicated to sensory and motor tasks, different areas of the brain (cortices) are similarly dedicated to different sensory and perceptual tasks. More complex processing is accomplished across primary cortical regions that spread beyond the primary cortices. Every nerve, sensory or motor, has its own signal transmission speed. For example, nerves in the frog's legs have a 90 ft/s (99 km/h) signal transmission speed, while sensory nerves in humans transmit sensory information at speeds between 165 ft/s (181 km/h) and 330 ft/s (362 km/h).[1]
40
+
41
+ Perceptual experience is often multimodal. Multimodality integrates different senses into one unified perceptual experience. Information from one sense has the potential to influence how information from another is perceived.[2] Multimodal perception is qualitatively different from unimodal perception. There has been a growing body of evidence since the mid-1990s on the neural correlates of multimodal perception.[20]
42
+
43
+ Historical inquiries into the underlying mechanisms of sensation and perception have led early researchers to subscribe to various philosophical interpretations of perception and the mind, including panpsychism, dualism, and materialism. The majority of modern scientists who study sensation and perception take on a materialistic view of the mind.[1]
44
+
45
+ Some examples of human absolute thresholds for the 9–21 external senses.[21]
46
+
47
+ Humans respond more strongly to multimodal stimuli compared to the sum of each single modality together, an effect called the superadditive effect of multisensory integration.[2] Neurons that respond to both visual and auditory stimuli have been identified in the superior temporal sulcus.[20] Additionally, multimodal “what” and “where” pathways have been proposed for auditory and tactile stimuli.[22]
48
+
49
+ External receptors that respond to stimuli from outside the body are called exteroceptors.[23] Human external sensation is based on the sensory organs of the eyes, ears, skin, vestibular system, nose, and mouth, which contribute, respectively, to the sensory perceptions of vision, hearing, touch, spatial orientation, smell, and taste. Smell and taste are both responsible for identifying molecules and thus both are types of chemoreceptors. Both olfaction (smell) and gustation (taste) require the transduction of chemical stimuli into electrical potentials.[2][1]
50
+
51
+ The visual system, or sense of sight, is based on the transduction of light stimuli received through the eyes and contributes to visual perception. The visual system detects light on photoreceptors in the retina of each eye that generates electrical nerve impulses for the perception of varying colors and brightness. There are two types of photoreceptors: rods and cones. Rods are very sensitive to light but do not distinguish colors. Cones distinguish colors but are less sensitive to dim light.[4]
52
+
53
+ At the molecular level, visual stimuli cause changes in the photopigment molecule that lead to changes in membrane potential of the photoreceptor cell. A single unit of light is called a photon, which is described in physics as a packet of energy with properties of both a particle and a wave. The energy of a photon is represented by its wavelength, with each wavelength of visible light corresponding to a particular color. Visible light is electromagnetic radiation with a wavelength between 380 and 720 nm. Wavelengths of electromagnetic radiation longer than 720 nm fall into the infrared range, whereas wavelengths shorter than 380 nm fall into the ultraviolet range. Light with a wavelength of 380 nm is blue whereas light with a wavelength of 720 nm is dark red. All other colors fall between red and blue at various points along the wavelength scale.[4]
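+
+ The inverse relationship between wavelength and photon energy can be checked with the standard formula E = hc/λ; a minimal sketch using standard physical constants:
+
+     # Photon energy at the two ends of the visible range.
+     h = 6.626e-34   # Planck's constant, J*s
+     c = 2.998e8     # speed of light, m/s
+     eV = 1.602e-19  # joules per electronvolt
+
+     for label, wavelength_nm in [("380 nm end of the range", 380),
+                                  ("720 nm end of the range", 720)]:
+         energy = h * c / (wavelength_nm * 1e-9)
+         print(f"{label}: {energy:.2e} J = {energy / eV:.2f} eV")
+     # Shorter wavelengths carry more energy per photon:
+     # 380 nm ≈ 3.26 eV, 720 nm ≈ 1.72 eV.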
54
+
55
+ The three types of cone opsins, being sensitive to different wavelengths of light, provide us with color vision. By comparing the activity of the three different cones, the brain can extract color information from visual stimuli. For example, a bright blue light that has a wavelength of approximately 450 nm would activate the “red” cones minimally, the “green” cones marginally, and the “blue” cones predominantly. The relative activation of the three different cones is calculated by the brain, which perceives the color as blue. However, cones cannot react to low-intensity light, and rods do not sense the color of light. Therefore, our low-light vision is—in essence—in grayscale. In other words, in a dark room, everything appears as a shade of gray. If you think that you can see colors in the dark, it is most likely because your brain knows what color something is and is relying on that memory.[4]
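+
+ The comparison of cone activations can be sketched with a toy model; the Gaussian sensitivity curves, peak wavelengths, and bandwidth below are rough illustrative assumptions, not measured human cone spectra:
+
+     import math
+
+     # Toy peak sensitivities (nm) for the three cone types.
+     CONES = {"blue (S) cones": 440, "green (M) cones": 535, "red (L) cones": 565}
+     WIDTH = 60  # shared toy bandwidth, nm
+
+     def cone_activations(wavelength_nm):
+         # Gaussian falloff in sensitivity away from each cone's peak.
+         return {name: math.exp(-((wavelength_nm - peak) / WIDTH) ** 2)
+                 for name, peak in CONES.items()}
+
+     # A bright ~450 nm light activates the S cones predominantly, the
+     # M cones marginally, and the L cones minimally, as described above.
+     for name, activation in cone_activations(450).items():
+         print(f"{name}: {activation:.2f}")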
56
+
57
+ There is some disagreement as to whether the visual system consists of one, two, or three submodalities. Neuroanatomists generally regard it as two submodalities, given that different receptors are responsible for the perception of color and brightness. Some argue[citation needed] that stereopsis, the perception of depth using both eyes, also constitutes a sense, but it is generally regarded as a cognitive (that is, post-sensory) function of the visual cortex of the brain where patterns and objects in images are recognized and interpreted based on previously learned information. This is called visual memory.
58
+
59
+ The inability to see is called blindness. Blindness may result from damage to the eyeball, especially to the retina, damage to the optic nerve that connects each eye to the brain, and/or from stroke (infarcts in the brain). Temporary or permanent blindness can be caused by poisons or medications. People who are blind from degradation or damage to the visual cortex, but still have functional eyes, are actually capable of some level of vision and reaction to visual stimuli but not a conscious perception; this is known as blindsight. People with blindsight are usually not aware that they are reacting to visual sources, and instead just unconsciously adapt their behavior to the stimulus.
60
+
61
+ On February 14, 2013, researchers developed a neural implant that gives rats the ability to sense infrared light, which for the first time provided living creatures with new abilities, instead of simply replacing or augmenting existing abilities.[24]
62
+
63
+ Visual Perception in Psychology
64
+
65
+ According to Gestalt psychology, people perceive the whole of something even if it is not there. The Gestalt laws of organization state that people group what is seen into patterns with the help of seven factors: Common Fate, Similarity, Proximity, Closure, Symmetry, Continuity, and Past Experience.[25]
66
+
67
+ The Law of Common Fate says that elements moving together are perceived as a group; people follow the trend of motion as the lines/dots flow.[26]
68
+
69
+ The Law of Similarity refers to the grouping of images or objects that are similar to each other in some aspect. This could be due to shade, colour, size, shape, or other qualities you could distinguish.[27]
70
+
71
+ The Law of Proximity states that our minds like to group based on how close objects are to each other. We may see 42 objects in a group, but we can also perceive three groups of two lines with seven objects in each line.[26]
72
+
73
+ The Law of Closure is the idea that we as humans still see a full picture even if there are gaps within that picture. There could be gaps or parts missing from a section of a shape, but we would still perceive the shape as whole.[27]
74
+
75
+ The Law of Symmetry refers to a person's preference to see symmetry around a central point. An example would be when we use parentheses in writing. We tend to perceive all of the words in the parentheses as one section instead of individual words within the parentheses.[27]
76
+
77
+ The Law of Continuity tells us that objects are grouped together by their elements and then perceived as a whole. This usually happens when we see overlapping objects; we will see the overlapping objects without interruptions.[27]
78
+
79
+ The Law of Past Experience refers to the tendency humans have to categorize objects according to past experiences under certain circumstances. If two objects are usually perceived together or within close proximity of each other, the Law of Past Experience usually applies.[26]
80
+
81
+ Hearing, or audition, is the transduction of sound waves into a neural signal that is made possible by the structures of the ear. The large, fleshy structure on the lateral aspect of the head is known as the auricle. At the end of the auditory canal is the tympanic membrane, or ear drum, which vibrates after it is struck by sound waves. The auricle, ear canal, and tympanic membrane are often referred to as the external ear. The middle ear consists of a space spanned by three small bones called the ossicles. The three ossicles are the malleus, incus, and stapes, which are Latin names that roughly translate to hammer, anvil, and stirrup. The malleus is attached to the tympanic membrane and articulates with the incus. The incus, in turn, articulates with the stapes. The stapes is then attached to the inner ear, where the sound waves will be transduced into a neural signal. The middle ear is connected to the pharynx through the Eustachian tube, which helps equilibrate air pressure across the tympanic membrane. The tube is normally closed but will pop open when the muscles of the pharynx contract during swallowing or yawning.[4]
82
+
83
+ Mechanoreceptors, located in the inner ear, turn motion into electrical nerve pulses. Since sound is vibration propagating through a medium such as air, the detection of these vibrations, that is, the sense of hearing, is a mechanical sense: the vibrations are mechanically conducted from the eardrum through a series of tiny bones to hair-like fibers in the inner ear, which detect mechanical motion of the fibers within a range of about 20 to 20,000 hertz,[28] with substantial variation between individuals. Hearing at high frequencies declines with an increase in age. Inability to hear is called deafness or hearing impairment. Sound can also be detected as vibrations conducted through the body by tactition. Lower frequencies that can be heard are detected this way. Some deaf people are able to determine the direction and location of vibrations picked up through the feet.[29]
84
+
85
+ Studies pertaining to audition began to increase in number toward the end of the nineteenth century. During this time, many laboratories in the United States began to create new models, diagrams, and instruments that all pertained to the ear.[30]
86
+
87
+ There is a branch of cognitive psychology dedicated strictly to audition, called auditory cognitive psychology. Its main aim is to understand how humans are able to use sound in thought without actually saying it aloud.[31]
88
+
89
+ Related to auditory cognitive psychology is psychoacoustics, which is oriented more toward people interested in music.[32] Haptics, a word used to refer to both taction and kinesthesia, has many parallels with psychoacoustics.[32] Most research in these two fields focuses on the instrument, the listener, and the player of the instrument.[32]
90
+
91
+ Somatosensation is considered a general sense, as opposed to the special senses discussed in this section. Somatosensation is the group of sensory modalities that are associated with touch and interoception. The modalities of somatosensation include pressure, vibration, light touch, tickle, itch, temperature, pain, and kinesthesia.[4] Somatosensation, also called tactition (adjectival form: tactile), is a perception resulting from activation of neural receptors, generally in the skin including hair follicles, but also in the tongue, throat, and mucosa. A variety of pressure receptors respond to variations in pressure (firm, brushing, sustained, etc.). The touch sense of itching caused by insect bites or allergies involves special itch-specific neurons in the skin and spinal cord.[33] The loss or impairment of the ability to feel anything touched is called tactile anesthesia. Paresthesia is a sensation of tingling, pricking, or numbness of the skin that may result from nerve damage and may be permanent or temporary.
92
+
93
+ Two types of somatosensory signals that are transduced by free nerve endings are pain and temperature. These two modalities use thermoreceptors and nociceptors to transduce temperature and pain stimuli, respectively. Temperature receptors are stimulated when local temperatures differ from body temperature. Some thermoreceptors are sensitive to just cold and others to just heat. Nociception is the sensation of potentially damaging stimuli. Mechanical, chemical, or thermal stimuli beyond a set threshold will elicit painful sensations. Stressed or damaged tissues release chemicals that activate receptor proteins in the nociceptors. For example, the sensation of heat associated with spicy foods involves capsaicin, the active molecule in hot peppers.[4]
94
+
95
+ Low frequency vibrations are sensed by mechanoreceptors called Merkel cells, also known as type I cutaneous mechanoreceptors. Merkel cells are located in the stratum basale of the epidermis. Deep pressure and vibration is transduced by lamellated (Pacinian) corpuscles, which are receptors with encapsulated endings found deep in the dermis, or subcutaneous tissue. Light touch is transduced by the encapsulated endings known as tactile (Meissner) corpuscles. Follicles are also wrapped in a plexus of nerve endings known as the hair follicle plexus. These nerve endings detect the movement of hair at the surface of the skin, such as when an insect may be walking along the skin. Stretching of the skin is transduced by stretch receptors known as bulbous corpuscles. Bulbous corpuscles are also known as Ruffini corpuscles, or type II cutaneous mechanoreceptors.[4]
96
+
97
+ The heat receptors are sensitive to infrared radiation and can occur in specialized organs, for instance in pit vipers. The thermoceptors in the skin are quite different from the homeostatic thermoceptors in the brain (hypothalamus), which provide feedback on internal body temperature.
98
+
99
+ The vestibular sense, or sense of balance (equilibrium), is the sense that contributes to the perception of balance (equilibrium), spatial orientation, direction, or acceleration (equilibrioception). Along with audition, the inner ear is responsible for encoding information about equilibrium. A similar mechanoreceptor—a hair cell with stereocilia—senses head position, head movement, and whether our bodies are in motion. These cells are located within the vestibule of the inner ear. Head position is sensed by the utricle and saccule, whereas head movement is sensed by the semicircular canals. The neural signals generated in the vestibular ganglion are transmitted through the vestibulocochlear nerve to the brain stem and cerebellum.[4]
100
+
101
+ The semicircular canals are three ring-like extensions of the vestibule. One is oriented in the horizontal plane, whereas the other two are oriented in the vertical plane. The anterior and posterior vertical canals are oriented at approximately 45 degrees relative to the sagittal plane. The base of each semicircular canal, where it meets with the vestibule, connects to an enlarged region known as the ampulla. The ampulla contains the hair cells that respond to rotational movement, such as turning the head while saying “no.” The stereocilia of these hair cells extend into the cupula, a membrane that attaches to the top of the ampulla. As the head rotates in a plane parallel to the semicircular canal, the fluid lags, deflecting the cupula in the direction opposite to the head movement. The semicircular canals contain several ampullae, with some oriented horizontally and others oriented vertically. By comparing the relative movements of both the horizontal and vertical ampullae, the vestibular system can detect the direction of most head movements within three-dimensional (3D) space.[4]
102
+
103
+ The vestibular nerve conducts information from sensory receptors in three ampulla that sense motion of fluid in three semicircular canals caused by three-dimensional rotation of the head. The vestibular nerve also conducts information from the utricle and the saccule, which contain hair-like sensory receptors that bend under the weight of otoliths (which are small crystals of calcium carbonate) that provide the inertia needed to detect head rotation, linear acceleration, and the direction of gravitational force.
104
+
105
+ The gustatory system, or the sense of taste, is the sensory system that is partially responsible for the perception of taste (flavor).[34] A few recognized submodalities exist within taste: sweet, salty, sour, bitter, and umami. Very recent research has suggested that there may also be a sixth taste submodality for fats, or lipids.[4] The sense of taste is often confused with the perception of flavor, which is the result of the multimodal integration of gustatory (taste) and olfactory (smell) sensations.[35]
106
+
107
+ Within the structure of the lingual papillae are taste buds that contain specialized gustatory receptor cells for the transduction of taste stimuli. These receptor cells are sensitive to the chemicals contained within foods that are ingested, and they release neurotransmitters based on the amount of the chemical in the food. Neurotransmitters from the gustatory cells can activate sensory neurons in the facial, glossopharyngeal, and vagus cranial nerves.[4]
108
+
109
+ Salty and sour taste submodalities are triggered by the cations Na+ and H+, respectively. The other taste modalities result from food molecules binding to a G protein–coupled receptor. A G protein signal transduction system ultimately leads to depolarization of the gustatory cell. The sweet taste is the sensitivity of gustatory cells to the presence of glucose (or sugar substitutes) dissolved in the saliva. Bitter taste is similar to sweet in that food molecules bind to G protein–coupled receptors. The taste known as umami is often referred to as the savory taste. Like sweet and bitter, it is based on the activation of G protein–coupled receptors by a specific molecule.[4]
110
+
111
+ Once the gustatory cells are activated by the taste molecules, they release neurotransmitters onto the dendrites of sensory neurons. These neurons are part of the facial and glossopharyngeal cranial nerves, as well as a component within the vagus nerve dedicated to the gag reflex. The facial nerve connects to taste buds in the anterior third of the tongue. The glossopharyngeal nerve connects to taste buds in the posterior two thirds of the tongue. The vagus nerve connects to taste buds in the extreme posterior of the tongue, verging on the pharynx, which are more sensitive to noxious stimuli such as bitterness.[4]
112
+
113
+ Flavor depends on odor, texture, and temperature as well as on taste. Humans receive tastes through sensory organs called taste buds, or gustatory calyculi, concentrated on the upper surface of the tongue. Other tastes such as calcium[36][37] and free fatty acids[38] may also be basic tastes but have yet to receive widespread acceptance. The inability to taste is called ageusia.
114
+
115
+ A rare phenomenon associated with the gustatory sense is lexical-gustatory synesthesia, in which people can “taste” words.[39] Affected individuals report flavor sensations from foods they are not actually eating when they read, hear, or even imagine words, and they report not only simple flavors, but textures, complex flavors, and temperatures as well.[40]
116
+
117
+ Like the sense of taste, the sense of smell, or the olfactory system, is also responsive to chemical stimuli.[4] Unlike taste, there are hundreds of olfactory receptors (388 according to one source), each binding to a particular molecular feature. Odor molecules possess a variety of features and, thus, excite specific receptors more or less strongly. This combination of excitatory signals from different receptors makes up what humans perceive as the molecule's smell.[41]
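+
+ Because the passage above describes smell as a combinatorial code, in which each receptor binds a particular molecular feature and the percept corresponds to the combined excitation pattern, a minimal sketch in Python may help; the receptor names (OR1–OR3) and feature weights below are hypothetical illustrations, not values from the article.
+
+     # Toy sketch of combinatorial odor coding: each receptor is tuned to
+     # molecular features, and a smell is identified with the activation
+     # pattern across all receptors, not with any single receptor's response.
+     # Receptor names and weights here are invented for illustration.
+     odorant = {"ester": 0.9, "aldehyde": 0.1}  # hypothetical feature profile
+
+     receptor_tuning = {
+         "OR1": {"ester": 1.0},                   # narrowly tuned to esters
+         "OR2": {"ester": 0.4, "aldehyde": 0.8},  # broadly tuned
+         "OR3": {"thiol": 1.0},                   # silent for this odorant
+     }
+
+     def activation_pattern(features, tuning):
+         """Each receptor's excitation is a weighted sum of the features it binds."""
+         return {name: sum(w * features.get(f, 0.0) for f, w in prefs.items())
+                 for name, prefs in tuning.items()}
+
+     print(activation_pattern(odorant, receptor_tuning))
+     # {'OR1': 0.9, 'OR2': 0.44, 'OR3': 0.0} -- the "smell" is the whole pattern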
118
+
119
+ The olfactory receptor neurons are located in a small region within the superior nasal cavity. This region is referred to as the olfactory epithelium and contains bipolar sensory neurons. Each olfactory sensory neuron has dendrites that extend from the apical surface of the epithelium into the mucus lining the cavity. As airborne molecules are inhaled through the nose, they pass over the olfactory epithelial region and dissolve into the mucus. These odorant molecules bind to proteins that keep them dissolved in the mucus and help transport them to the olfactory dendrites. The odorant–protein complex binds to a receptor protein within the cell membrane of an olfactory dendrite. These receptors are G protein–coupled, and will produce a graded membrane potential in the olfactory neurons.[4]
120
+
121
+ In the brain, olfaction is processed by the olfactory cortex. Olfactory receptor neurons in the nose differ from most other neurons in that they die and regenerate on a regular basis. The inability to smell is called anosmia. Some neurons in the nose are specialized to detect pheromones.[42] Loss of the sense of smell can result in food tasting bland. A person with an impaired sense of smell may require additional spice and seasoning levels for food to be tasted. Anosmia may also be related to some presentations of mild depression, because the loss of enjoyment of food may lead to a general sense of despair. The ability of olfactory neurons to replace themselves decreases with age, leading to age-related anosmia. This explains why some elderly people salt their food more than younger people do.[4]
122
+
123
+ Olfactory dysfunction can be caused by age, exposure to toxic chemicals, viral infections, epilepsy, neurodegenerative disease, head trauma, or another disorder.[5]
124
+
125
+ As studies in olfaction have continued, a positive correlation has been found between olfactory dysfunction or degeneration and early signs of Alzheimer's disease and sporadic Parkinson's disease. Many patients do not notice the decline in smell before being tested. In Parkinson's disease and Alzheimer's disease, an olfactory deficit is present in 85 to 90% of early-onset cases,[5] and there is evidence that the decline of this sense can precede the disease by a couple of years. Although the deficit is present in these two diseases, as well as in others, it is important to note that its severity or magnitude varies with each disease. This has prompted suggestions that olfactory testing could, in some cases, aid in differentiating among the neurodegenerative diseases.[5]
126
+
127
+ Those who were born without a sense of smell, or whose sense of smell is damaged, usually complain about one or more of three things. First, the olfactory sense serves as a warning against spoiled food, so a damaged or absent sense of smell can lead to contracting food poisoning more often. Second, the inability to smell body odor can lead to damaged relationships, or to insecurities within those relationships. Lastly, smell influences how food and drink taste, so when the olfactory sense is damaged, the satisfaction from eating and drinking is not as prominent.
128
+
129
+ Proprioception, the kinesthetic sense, provides the parietal cortex of the brain with information on the movement and relative positions of the parts of the body. Neurologists test this sense by telling patients to close their eyes and touch their own nose with the tip of a finger. Assuming proper proprioceptive function, at no time will the person lose awareness of where the hand actually is, even though it is not being detected by any of the other senses. Proprioception and touch are related in subtle ways, and their impairment results in surprising and deep deficits in perception and action.[43]
130
+
131
+ Nociception (physiological pain) signals nerve-damage or damage to tissue. The three types of pain receptors are cutaneous (skin), somatic (joints and bones), and visceral (body organs). It was previously believed that pain was simply the overloading of pressure receptors, but research in the first half of the 20th century indicated that pain is a distinct phenomenon that intertwines with all of the other senses, including touch. Pain was once considered an entirely subjective experience, but recent studies show that pain is registered in the anterior cingulate gyrus of the brain.[44] The main function of pain is to attract our attention to dangers and motivate us to avoid them. For example, humans avoid touching a sharp needle, or hot object, or extending an arm beyond a safe limit because it is dangerous, and thus hurts. Without pain, people could do many dangerous things without being aware of the dangers.
132
+
133
+ An internal sensation and perception also known as interoception[45] is "any sense that is normally stimulated from within the body".[46] These involve numerous sensory receptors in internal organs. Interoception is thought to be atypical in clinical conditions such as alexithymia.[47]
134
+ Some examples of specific receptors are:
135
+
136
+ Other living organisms have receptors to sense the world around them, including many of the senses listed above for humans. However, the mechanisms and capabilities vary widely.
137
+
138
+ An example of smell in non-mammals is that of sharks, which combine their keen sense of smell with timing to determine the direction of a smell: they follow the nostril that first detected the smell.[54] Insects have olfactory receptors on their antennae. The degree and magnitude to which non-human animals can smell better than humans, however, remains unknown.[55]
139
+
140
+ Many animals (salamanders, reptiles, mammals) have a vomeronasal organ[56] that is connected with the mouth cavity. In mammals it is mainly used to detect pheromones of marked territory, trails, and sexual state. Reptiles like snakes and monitor lizards make extensive use of it as a smelling organ by transferring scent molecules to the vomeronasal organ with the tips of the forked tongue. In reptiles the vomeronasal organ is commonly referred to as Jacobson's organ. In mammals, it is often associated with a special behavior called flehmen, characterized by uplifting of the lips. The organ is vestigial in humans, as no associated neurons providing sensory input have been found in humans.[57]
141
+
142
+ Flies and butterflies have taste organs on their feet, allowing them to taste anything they land on. Catfish have taste organs across their entire bodies, and can taste anything they touch, including chemicals in the water.[58]
143
+
144
+ Cats have the ability to see in low light, which is due to muscles surrounding their irides (which contract and expand their pupils), as well as to the tapetum lucidum, a reflective membrane that optimizes the image.
145
+ Pit vipers, pythons and some boas have organs that allow them to detect infrared light, such that these snakes are able to sense the body heat of their prey. The common vampire bat may also have an infrared sensor on its nose.[59] It has been found that birds and some other animals are tetrachromats and have the ability to see in the ultraviolet down to 300 nanometers. Bees and dragonflies[60] are also able to see in the ultraviolet. Mantis shrimps can perceive both polarized light and multispectral images and have twelve distinct kinds of color receptors, unlike humans which have three kinds and most mammals which have two kinds.[61]
146
+
147
+ Cephalopods have the ability to change color using chromatophores in their skin. Researchers believe that opsins in the skin can sense different wavelengths of light and help the creatures choose a coloration that camouflages them, in addition to light input from the eyes.[62] Other researchers hypothesize that cephalopod eyes in species which only have a single photoreceptor protein may use chromatic aberration to turn monochromatic vision into color vision,[63] explaining pupils shaped like the letter U, the letter W, or a dumbbell, as well as explaining the need for colorful mating displays.[64] Some cephalopods can distinguish the polarization of light.
148
+
149
+ Many invertebrates have a statocyst, which is a sensor for acceleration and orientation that works very differently from the mammalian semicircular canals.
150
+
151
+ In addition, some animals have senses that humans do not, including the following:
152
+
153
+ Magnetoception (or magnetoreception) is the ability to detect the direction one is facing based on the Earth's magnetic field. Directional awareness is most commonly observed in birds, which rely on their magnetic sense to navigate during migration.[65][66][67][68] It has also been observed in insects such as bees. Cattle make use of magnetoception to align themselves in a north–south direction.[69] Magnetotactic bacteria build miniature magnets inside themselves and use them to determine their orientation relative to the Earth's magnetic field.[70][71] There has been some recent (tentative) research suggesting that rhodopsin in the human eye, which responds particularly well to blue light, can facilitate magnetoception in humans.[72]
154
+
155
+ Certain animals, including bats and cetaceans, have the ability to determine orientation to other objects through interpretation of reflected sound (like sonar). They most often use this to navigate through poor lighting conditions or to identify and track prey. It is currently uncertain whether this is simply an extremely developed post-sensory interpretation of auditory perceptions or actually constitutes a separate sense. Resolution of the issue will require brain scans of animals while they actually perform echolocation, a task that has proven difficult in practice.
156
+
157
+ Blind people report they are able to navigate and in some cases identify an object by interpreting reflected sounds (especially their own footsteps), a phenomenon known as human echolocation.
158
+
159
+ Electroreception (or electroception) is the ability to detect electric fields. Several species of fish, sharks, and rays have the capacity to sense changes in electric fields in their immediate vicinity. For cartilaginous fish this occurs through a specialized organ called the Ampullae of Lorenzini. Some fish passively sense changing nearby electric fields; some generate their own weak electric fields, and sense the pattern of field potentials over their body surface; and some use these electric field generating and sensing capacities for social communication. The mechanisms by which electroceptive fish construct a spatial representation from very small differences in field potentials involve comparisons of spike latencies from different parts of the fish's body.
160
+
161
+ The only orders of mammals that are known to demonstrate electroception are the dolphin and monotreme orders. Among these mammals, the platypus[73] has the most acute sense of electroception.
162
+
163
+ A dolphin can detect electric fields in water using electroreceptors in vibrissal crypts arrayed in pairs on its snout and which evolved from whisker motion sensors.[74] These electroreceptors can detect electric fields as weak as 4.6 microvolts per centimeter, such as those generated by contracting muscles and pumping gills of potential prey. This permits the dolphin to locate prey from the seafloor where sediment limits visibility and echolocation.
164
+
165
+ Spiders have been shown to detect electric fields to determine a suitable time to extend their webs for 'ballooning'.[75]
166
+
167
+ Body modification enthusiasts have experimented with magnetic implants to attempt to replicate this sense.[76] However, in general humans (and it is presumed other mammals) can detect electric fields only indirectly by detecting the effect they have on hairs. An electrically charged balloon, for instance, will exert a force on human arm hairs, which can be felt through tactition and identified as coming from a static charge (and not from wind or the like). This is not electroreception, as it is a post-sensory cognitive action.
168
+
169
+ Hygroreception is the ability to detect changes in the moisture content of the environment.[11][77]
170
+
171
+ The ability to sense infrared thermal radiation evolved independently in various families of snakes. Essentially, it allows these reptiles to "see" radiant heat at wavelengths between 5 and 30 μm to a degree of accuracy such that a blind rattlesnake can target vulnerable body parts of the prey at which it strikes.[78] The facial pit underwent parallel evolution in pitvipers and some boas and pythons, having evolved once in pitvipers and multiple times in boas and pythons.[80] The electrophysiology of the structure is similar between the two lineages, but they differ in gross structural anatomy. Most superficially, pitvipers possess one large pit organ on either side of the head, between the eye and the nostril (the loreal pit), while boas and pythons have three or more comparatively smaller pits lining the upper and sometimes the lower lip, in or between the scales. Those of the pitvipers are the more advanced, having a suspended sensory membrane as opposed to a simple pit structure. Within the family Viperidae, the pit organ is seen only in the subfamily Crotalinae: the pitvipers. The organ is used extensively to detect and target endothermic prey such as rodents and birds, and it was previously assumed that it evolved specifically for that purpose. However, recent evidence shows that the pit organ may also be used for thermoregulation: according to Krochmal et al., pitvipers can use their pits for thermoregulatory decision-making, while true vipers (vipers that lack heat-sensing pits) cannot.[79]
172
+
173
+ In spite of its detection of IR light, the pits' IR detection mechanism is not similar to that of photoreceptors: while photoreceptors detect light via photochemical reactions, the protein in the pits of snakes is in fact a temperature-sensitive ion channel. It senses infrared signals through a mechanism involving warming of the pit organ, rather than a chemical reaction to light.[81] This is consistent with the thin pit membrane, which allows incoming IR radiation to quickly and precisely warm a given ion channel and trigger a nerve impulse, and with the vascularization of the pit membrane, which rapidly cools the ion channel back to its original "resting" or "inactive" temperature.[81]
174
+
175
+ Pressure detection uses the organ of Weber, a system consisting of three appendages of vertebrae transferring changes in the shape of the gas bladder to the middle ear. It can be used to regulate the buoyancy of the fish. Fish such as the weather fish and other loaches are also known to respond to low-pressure areas, but they lack a swim bladder.
176
+
177
+ Current detection is a detection system of water currents, consisting mostly of vortices, found in the lateral line of fish and aquatic forms of amphibians. The lateral line is also sensitive to low-frequency vibrations. The mechanoreceptors are hair cells, the same mechanoreceptors for vestibular sense and hearing. It is used primarily for navigation, hunting, and schooling. The receptors of the electrical sense are modified hair cells of the lateral line system.
178
+
179
+ Polarized light direction/detection is used by bees to orient themselves, especially on cloudy days. Cuttlefish, some beetles, and mantis shrimp can also perceive the polarization of light. Most sighted humans can in fact learn to roughly detect large areas of polarization by an effect called Haidinger's brush; however, this is considered an entoptic phenomenon rather than a separate sense.
180
+
181
+ Slit sensillae of spiders detect mechanical strain in the exoskeleton, providing information on force and vibrations.
182
+
183
+ By using a variety of sense receptors, plants sense light, temperature, humidity, chemical substances, chemical gradients, reorientation, magnetic fields, infections, tissue damage and mechanical pressure. The absence of a nervous system notwithstanding, plants interpret and respond to these stimuli by a variety of hormonal and cell-to-cell communication pathways that result in movement, morphological changes and physiological state alterations at the organism level, that is, result in plant behavior. Such physiological and cognitive functions are generally not believed to give rise to mental phenomena or qualia, however, as these are typically considered the product of nervous system activity. The emergence of mental phenomena from the activity of systems functionally or computationally analogous to that of nervous systems is, however, a hypothetical possibility explored by some schools of thought in the philosophy of mind field, such as functionalism and computationalism.
184
+
185
+ However, plants can perceive the world around them,[15] and may be able to emit airborne sounds similar to "screaming" when stressed. These noises may not be detectable by human ears, but organisms with a hearing range that extends into ultrasonic frequencies, such as mice, bats, or perhaps other plants, could hear the plants' cries from as far as 15 feet (4.6 m) away.[82]
186
+
187
+ Machine perception is the capability of a computer system to interpret data in a manner similar to the way humans use their senses to relate to the world around them.[16][17][83] Computers take in and respond to their environment through attached hardware. Until recently, input was limited to a keyboard, joystick, or mouse, but advances in technology, both in hardware and software, have allowed computers to take in sensory input in a way similar to humans.[16][17]
188
+
189
+ In the time of William Shakespeare, there were commonly reckoned to be five wits or five senses.[85] At that time, the words "sense" and "wit" were synonyms,[85] so the senses were known as the five outward wits.[86][87] This traditional concept of five senses is common today.
190
+
191
+ The traditional five senses are enumerated as the "five material faculties" (pañcannaṃ indriyānaṃ avakanti) in Hindu literature. They appear in allegorical representation as early as in the Katha Upanishad (roughly 6th century BC), as five horses drawing the "chariot" of the body, guided by the mind as "chariot driver".
192
+
193
+ Depictions of the five traditional senses as allegory became a popular subject for seventeenth-century artists, especially among Dutch and Flemish Baroque painters. A typical example is Gérard de Lairesse's Allegory of the Five Senses (1668), in which each of the figures in the main group alludes to a sense: sight is the reclining boy with a convex mirror, hearing is the cupid-like boy with a triangle, smell is represented by the girl with flowers, taste is represented by the woman with the fruit, and touch is represented by the woman holding the bird.
194
+
195
+ In Buddhist philosophy, Ayatana or "sense-base" includes the mind as a sense organ, in addition to the traditional five. This addition to the commonly acknowledged senses may arise from the psychological orientation involved in Buddhist thought and practice. The mind considered by itself is seen as the principal gateway to a different spectrum of phenomena that differ from the physical sense data. This way of viewing the human sense system indicates the importance of internal sources of sensation and perception that complements our experience of the external world.[citation needed]
en/5349.html.txt ADDED
@@ -0,0 +1,91 @@
1
+
2
+
3
+ Toothpaste is a paste or gel dentifrice used with a toothbrush to clean and maintain the aesthetics and health of teeth. Toothpaste is used to promote oral hygiene: it is an abrasive that aids in removing dental plaque and food from the teeth, assists in suppressing halitosis, and delivers active ingredients (most commonly fluoride) to help prevent tooth decay (dental caries) and gum disease (gingivitis).[1] Salt and sodium bicarbonate (baking soda) are among materials that can be substituted for commercial toothpaste. Large amounts of swallowed toothpaste can be toxic.[2]
4
+
5
+ A 2016 systematic review indicates that using toothpaste when brushing the teeth has no impact on the level of plaque removal.[3]
6
+
7
+ In addition to 20%–42% water, toothpastes are derived from a variety of components, the three main ones being abrasives, fluoride, and detergents.
8
+
9
+ Abrasives constitute at least 50% of a typical toothpaste. These insoluble particles are designed to help remove plaque from the teeth. The removal of plaque and calculus prevents the accumulation of tartar and is widely claimed to help minimize cavities and periodontal disease, although the clinical significance of this benefit is debated.[4] Representative abrasives include particles of aluminum hydroxide (Al(OH)3), calcium carbonate (CaCO3), various calcium hydrogen phosphates, various silicas and zeolites, and hydroxyapatite (Ca5(PO4)3OH).
10
+
11
+ Abrasives, like the dental polishing agents used in dentists' offices, also cause a small amount of enamel erosion which is termed "polishing" action. Some brands contain powdered white mica, which acts as a mild abrasive, and also adds a cosmetically pleasing glittery shimmer to the paste. The polishing of teeth removes stains from tooth surfaces, but has not been shown to improve dental health over and above the effects of the removal of plaque and calculus.[5]
12
+
13
+ The abrasive effect of a toothpaste is indicated by its RDA (relative dentin abrasivity) value. Excessively high RDA values are deleterious. Some dentists recommend toothpaste with an RDA value no higher than 50 for daily use.
14
+
15
+ Fluoride in various forms is the most popular active ingredient in toothpaste to prevent cavities. Fluoride is present in small amounts in plants, animals, and some natural water sources. The additional fluoride in toothpaste has beneficial effects on the formation of dental enamel and bones. Sodium fluoride (NaF) is the most common source of fluoride, but stannous fluoride (SnF2), olaflur (an organic salt of fluoride), and sodium monofluorophosphate (Na2PO3F) are also used. Stannous fluoride has been shown to be more effective than sodium fluoride in reducing the incidence of dental caries[6] and controlling gingivitis, but causes somewhat more surface stains.[7]
16
+
17
+ Much of the toothpaste sold in the United States has 1,000 to 1,100 parts per million fluoride. In European countries, such as the UK or Greece, the fluoride content is often higher; a NaF content of 0.312% w/w (1,450 ppm fluoride) is common. All of these concentrations are likely to prevent tooth decay, according to a 2019 Cochrane review.[8] Concentrations below 1,000 ppm are not likely to be preventive, and the preventive effect increases with concentration. Clinical trials support the use of high fluoride dentifrices,[9] as it was found to reduce the amount of plaque accumulated, decrease the number of mutans streptococci and lactobacilli and possibly promote calcium fluoride deposits to a higher degree than after the use of traditional fluoride containing dentifrices.[10] However, these effects must be balanced with the increased risk of harm at higher concentrations.[9]
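+
+ The percentage-to-ppm figures quoted above follow from a simple mass-fraction calculation. The short Python sketch below shows the arithmetic; the molar masses are standard textbook values rather than figures from this article, and the small gap between the computed and quoted ppm presumably reflects rounding in the quoted percentage.
+
+     # Convert a sodium fluoride (NaF) mass fraction into fluoride ppm.
+     # Molar masses (g/mol) are standard values, not taken from the article.
+     M_NA, M_F = 22.99, 19.00
+
+     def naf_to_fluoride_ppm(naf_percent_ww):
+         fluoride_share = M_F / (M_NA + M_F)  # ~45% of NaF's mass is fluoride
+         return naf_percent_ww / 100 * fluoride_share * 1_000_000
+
+     print(round(naf_to_fluoride_ppm(0.312)))  # ~1412 ppm, near the quoted 1,450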
18
+
19
+ Many, although not all, toothpastes contain sodium lauryl sulfate (SLS) or related surfactants (detergents). SLS is found in many other personal care products as well, such as shampoo, and is mainly a foaming agent, which enables uniform distribution of toothpaste, improving its cleansing power.[5]
20
+
21
+ Triclosan, an antibacterial agent, is a common toothpaste ingredient in the United Kingdom. Triclosan or zinc chloride prevents gingivitis and, according to the American Dental Association, helps reduce tartar and bad breath.[1][11] A 2006 review of clinical research concluded there was evidence for the effectiveness of 0.30% triclosan in reducing plaque and gingivitis.[12] A 2013 Cochrane review found that triclosan achieved a 22% reduction in plaque and, in gingivitis, a 48% reduction in bleeding gums. However, there was insufficient evidence of a difference in fighting periodontitis, and no evidence of any harmful effects associated with the use of triclosan toothpastes for more than three years. The evidence relating to plaque and gingivitis was considered to be of moderate quality, while that for periodontitis was of low quality.[13]
22
+
23
+ Toothpaste comes in a variety of colors and flavors, intended to encourage use of the product. The three most common flavorants are peppermint, spearmint, and wintergreen. Toothpaste flavored with peppermint-anise oil is popular in the Mediterranean region. These flavors are provided by the respective oils, e.g. peppermint oil.[5] More exotic flavors include anise (anethole), apricot, bubblegum, cinnamon, fennel, lavender, neem, ginger, vanilla, lemon, orange, and pine. Unflavored toothpastes also exist.
24
+
25
+ Hydroxyapatite nanocrystals and a variety of calcium phosphates are included in formulations for remineralization,[14] i.e. the reformation of enamel.
26
+
27
+ Agents are added to suppress the tendency of toothpaste to dry into a powder. These include various sugar alcohols, such as glycerol, sorbitol, or xylitol, or related derivatives, such as 1,2-propylene glycol and polyethylene glycol.[15] Strontium chloride or potassium nitrate is included in some toothpastes to reduce sensitivity. Two systematic meta-analyses reported that toothpastes containing arginine and calcium sodium phosphosilicate (CSPS), respectively, are also effective in alleviating dentinal hypersensitivity,[16][17] and another randomized clinical trial found superior effects when the two formulas were combined.[18]
28
+
29
+ Sodium polyphosphate is added to minimize the formation of tartar.[citation needed] Another example of a toothpaste component is Biotene, which has proved effective in relieving the symptoms of dry mouth in people who suffer from xerostomia, according to the results of two randomized clinical trials.[19][20]
30
+
31
+ Chlorhexidine mouthwash has been popular for its positive effect on controlling plaque and gingivitis.[21] However, a systematic review of chlorhexidine toothpastes found insufficient evidence to support their use, and tooth surface discoloration was observed as a side effect, which can reduce patients' compliance.[22]
32
+
33
+ Sodium hydroxide, also known as lye or caustic soda, is listed as an inactive ingredient in some toothpaste, for example Colgate Total.
34
+
35
+ Some studies have demonstrated that toothpastes with xylitol as an ingredient are more effective at preventing dental caries in the permanent teeth of children than toothpastes containing fluoride alone, and xylitol has not been found to cause any harmful effects. Further investigation into the efficacy of toothpastes containing xylitol is required, however, as the currently available studies are of low quality, and their results must therefore be applied carefully.[23]
36
+
37
+ Fluoride-containing toothpaste can be acutely toxic if swallowed in large amounts,[24][25] but instances are exceedingly rare and result from prolonged and excessive use of toothpaste (i.e. several tubes per week).[26] The acute lethal dose is approximately 15 mg/kg of body weight, although an amount as small as 5 mg/kg may be fatal to some children.[27]
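+
+ To get a rough feel for why acute poisoning from ordinary use is so rare, the dose figures above can be combined with a typical fluoride concentration. In the illustrative Python sketch below, the 1,100 ppm concentration comes from the passage on US toothpastes earlier in this article, while the 20 kg body weight is an assumed example value, not a figure from the article.
+
+     # Rough estimate of how much toothpaste must be swallowed at once to
+     # reach the cited acute fluoride doses. 1,100 ppm fluoride means
+     # 1.1 mg of fluoride per gram of paste.
+     MG_F_PER_G = 1100 / 1000
+
+     def grams_of_paste_for_dose(dose_mg_per_kg, body_kg):
+         return dose_mg_per_kg * body_kg / MG_F_PER_G
+
+     body_kg = 20  # assumed child's body weight, for illustration only
+     print(round(grams_of_paste_for_dose(5, body_kg)))   # ~91 g: possibly fatal
+     print(round(grams_of_paste_for_dose(15, body_kg)))  # ~273 g: acute lethal dose
+
+ That is on the order of one to several full tubes swallowed at once, consistent with the text's note that such instances are exceedingly rare.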
38
+
39
+ The risk of using fluoride is low enough that the use of full-strength toothpaste (1350–1500 ppm fluoride) is advised for all ages, although smaller volumes are used for young children, for example a smear of toothpaste until three years old.[25] A major concern is dental fluorosis in children under 12 months who ingest excessive fluoride through toothpaste. Nausea and vomiting are also problems that might arise with topical fluoride ingestion.[27]
40
+
41
+ The inclusion of sweet-tasting but toxic diethylene glycol in Chinese-made toothpaste led to a recall in 2007 involving multiple toothpaste brands in several nations.[28] The world outcry made Chinese officials ban the practice of using diethylene glycol in toothpaste.[29]
42
+
43
+ Reports have suggested triclosan, an active ingredient in many kinds of toothpastes, can combine with chlorine in tap water to form chloroform,[30] which the United States Environmental Protection Agency classifies as a probable human carcinogen. An animal study revealed that the chemical might modify hormone regulation, and other laboratory research suggests that bacteria might be able to develop resistance to triclosan in a way that also helps them resist antibiotics.[31]
44
+
45
+ Polyethylene glycol (PEG) is a common ingredient in some toothpaste formulas; it is a hydrophilic polymer that acts as a dispersant. It is also used in many cosmetic and pharmaceutical formulations, for example ointments, osmotic laxatives, some nonsteroidal anti-inflammatory drugs, other medications, and household products.[32] However, 37 cases of PEG hypersensitivity (delayed and immediate) to PEG-containing substances have been reported since 1977,[33] suggesting that PEGs have unrecognized allergenic potential.[33]
46
+
47
+ With the exception of toothpaste intended to be used on pets such as dogs and cats, and toothpaste used by astronauts, most toothpaste is not intended to be swallowed, and doing so may cause nausea or diarrhea. Tartar-fighting toothpastes have been debated.[34] Case reports of plasma cell gingivitis have been reported with the use of herbal toothpaste containing cinnamon.[35] Sodium lauryl sulfate (SLS) has been proposed to increase the frequency of mouth ulcers in some people, as it can dry out the protective layer of oral tissues, causing the underlying tissues to become damaged.[36] In studies of recurrent aphthous ulcers conducted by the University of Oslo, SLS was found to have a denaturing effect on the oral mucin layer, with high affinity for proteins, thereby increasing epithelial permeability.[37] In a double-blind cross-over study, a significantly higher frequency of aphthous ulcers was demonstrated when patients brushed with an SLS-containing versus a detergent-free toothpaste. Patients with oral lichen planus who avoided SLS-containing toothpaste also benefited.[38][39]
48
+
49
+ After using toothpaste, orange juice and other juices have an unpleasant taste. Sodium lauryl sulfate alters taste perception: it can break down phospholipids that inhibit taste receptors for sweetness, giving food a bitter taste. In contrast, apples are known to taste more pleasant after using toothpaste.[40] Whether the bitter taste of orange juice results from stannous fluoride or from sodium lauryl sulfate remains unresolved; it is thought that the menthol added for flavor may also play a part in the alteration of taste perception when binding to lingual cold receptors.[citation needed]
50
+
51
+ Many toothpastes make whitening claims. Some of these toothpastes contain peroxide, the same ingredient found in tooth bleaching gels, but it is the abrasive in these toothpastes, not the peroxide, that removes the stains.[41] Whitening toothpaste cannot alter the natural color of teeth or reverse discoloration that goes deeper than surface stains or results from decay. To remove surface stains, whitening toothpaste may include abrasives to gently polish the teeth, or additives such as sodium tripolyphosphate to break down or dissolve stains. When used twice a day, whitening toothpaste typically takes two to four weeks to make teeth appear whiter. Whitening toothpaste is generally safe for daily use, but excessive use might damage tooth enamel. Teeth whitening gels represent an alternative.[42] A 2017 systematic review concluded that nearly all dentifrices specifically formulated for tooth whitening have a beneficial effect in reducing extrinsic stains, irrespective of whether or not a chemical discoloration agent was added.[43] However, the whitening process can permanently reduce the strength of the teeth, as it scrapes away a protective outer layer of enamel.[44]
52
+
53
+ Companies such as Tom's of Maine, among others, manufacture natural and herbal toothpastes and market them to consumers who wish to avoid the artificial ingredients commonly found in regular toothpastes. Many herbal toothpastes do not contain fluoride or sodium lauryl sulfate. The ingredients found in natural toothpastes vary widely but often include baking soda, aloe, eucalyptus oil, myrrh, plant extracts (such as strawberry extract), and essential oils. A 2014 systematic review found insufficient evidence to determine whether aloe vera herbal dentifrice can reduce plaque or improve gingival health, as the randomized studies were flawed, with a high risk of bias.[45]
54
+
55
+ According to a study by the Delhi Institute of Pharmaceutical Sciences and Research, many of the herbal toothpastes being sold in India were adulterated with nicotine.[46]
56
+
57
+ Charcoal has also been incorporated in toothpaste formulas; however, there is no evidence to determine its safety and effectiveness.[47]
58
+
59
+ Striped toothpaste was invented by Leonard Marraffino in 1955. The patent (US patent 2,789,731, issued 1957) was subsequently sold to Unilever, who marketed the novelty under the Stripe brand-name in the early 1960s. This was followed by the introduction of the Signal brand in Europe in 1965 (UK patent 813,514). Although Stripe was initially very successful, it never again achieved the 8% market share that it cornered during its second year.
60
+
61
+ Marraffino's design, which remains in use for single-color stripes, is simple. The main material, usually white, sits at the crimp end of the toothpaste tube and makes up most of its bulk. A thin pipe, through which that carrier material will flow, descends from the nozzle to it. The stripe material (red in Stripe) fills the gap between the carrier material and the top of the tube. The two materials are not in separate compartments; however, they are sufficiently viscous that they will not mix. When pressure is applied to the toothpaste tube, the main material squeezes down the thin pipe to the nozzle. Simultaneously, the pressure applied to the main material is forwarded to the stripe material, which thereby issues out through small holes (in the side of the pipe) onto the main carrier material as it passes those holes.
62
+
63
+ In 1990, Colgate-Palmolive was granted a patent (USPTO 4,969,767) for two differently colored stripes. In this scheme, the inner pipe has a cone-shaped plastic guard around it, and about halfway up its length. Between the guard and the nozzle-end of the tube is a space for the material for one color, which issues out of holes in the pipe. On the other side of the guard is space for second stripe-material, which has its own set of holes.
64
+
65
+ Striped toothpaste should not be confused with layered toothpaste. Layered toothpaste requires a multi-chamber design (e.g. USPTO 5,020,694), in which two or three layers extrude out of the nozzle. This scheme, like that of pump dispensers (USPTO 4,461,403), is more complicated (and thus, more expensive to manufacture) than either the Marraffino design or the Colgate design.
66
+
67
+ As early as 5000 BC, the Egyptians made a tooth powder consisting of powdered ashes of ox hooves, myrrh, powdered and burnt eggshells, and pumice. The Greeks, and then the Romans, improved the recipes by adding abrasives such as crushed bones and oyster shells.[48] In the 9th century, the Iraqi musician and fashion designer Ziryab invented a type of toothpaste, which he popularized throughout Islamic Spain. The exact ingredients of this toothpaste are unknown, but it was reported to have been both "functional and pleasant to taste".[49] It is not known whether these early toothpastes were used alone, were to be rubbed onto the teeth with rags, or were to be used with early toothbrushes, such as neem-tree twigs and miswak. During Japan's Edo period, inventor Hiraga Gennai's Hika rakuyo (1769) contained advertisements for Sosekiko, a "toothpaste in a box."[50] Toothpastes or powders came into general use in the 19th century.
68
+
69
+ Tooth powders for use with toothbrushes came into general use in the 19th century in Britain. Most were homemade, with chalk, pulverized brick, or salt as ingredients. An 1866 Home Encyclopedia recommended pulverized charcoal, and cautioned that many patented tooth powders that were commercially marketed did more harm than good.
70
+
71
+ Arm & Hammer marketed a baking soda-based toothpowder in the United States until approximately 2000, and Colgate currently markets toothpowder in India and other countries.
72
+
73
+ An 18th-century American and British toothpaste recipe called for burned bread. Another formula around this time called for dragon's blood (a resin), cinnamon, and burned alum.[51]
74
+
75
+ By 1900, a paste made of hydrogen peroxide and baking soda was recommended for use with toothbrushes. Pre-mixed toothpastes were first marketed in the 19th century, but did not surpass the popularity of tooth-powder until World War I.
76
+
77
+ Together with Willoughby D. Miller, Newell Sill Jenkins developed a toothpaste and named it Kolynos, the first toothpaste containing disinfectants.[52] The name derives from the Greek kolyo nosos (κωλύω νόσος), meaning "disease prevention". Numerous attempts by pharmacists in Europe to produce the toothpaste proved uneconomical. After returning to the US, Jenkins continued experimenting with Harry Ward Foote (1875–1942), professor of chemistry at the Sheffield Chemical Laboratory of Yale University.[53] After 17 years of development of Kolynos and clinical trials, Jenkins retired and transferred production and distribution to his son Leonard A. Jenkins, who brought the first toothpaste tubes to market on April 13, 1908. Within a few years the company expanded in North America, Latin America, Europe, and the Far East, and a branch operation opened in London in 1909. In 1937, Kolynos was produced in 22 countries and sold in 88 countries. Kolynos has been sold mainly in South America and in Hungary. Colgate-Palmolive took over production from American Home Products in 1995 at a cost of one billion US dollars.[54]
78
+
79
+ Fluoride was first added to toothpastes in the 1890s. Tanagra, containing calcium fluoride as the active ingredient, was sold by Karl F. Toellner Company, of Bremen, Germany, based upon the early work of chemist Albert Deninger.[55] An analogous invention by Roy Cross, of Kansas City, Missouri, was initially criticized by the American Dental Association (ADA) in 1937. Fluoride toothpastes developed in the 1950s received the ADA's approval. To develop the first ADA-approved fluoride toothpaste, Procter & Gamble started a research program in the early 1940s. In 1950, Procter & Gamble developed a joint research project team headed by Joseph C. Muhler at Indiana University to study new toothpaste with fluoride. In 1955, Procter & Gamble's Crest launched its first clinically proven fluoride-containing toothpaste. On August 1, 1960, the ADA reported that "Crest has been shown to be an effective anticavity (decay preventative) dentifrice that can be of significant value when used in a conscientiously applied program of oral hygiene and regular professional care."
80
+
81
+ In 1980, the Japanese company, Sangi Co., Ltd., launched APADENT, the world's first remineralizing toothpaste to use a nano-form of hydroxyapatite, the main component of tooth enamel, rather than fluoride, to remineralize areas of mineral loss below the surface of tooth enamel (incipient caries lesions). After many years of laboratory experiments and field trials,[56] its hydroxyapatite ingredient was approved as an active anti-caries agent by the Japanese Ministry of Health in 1993, and given the name Medical Hydroxyapatite to distinguish it from other forms of hydroxyapatite used in toothpaste, such as dental abrasives.
82
+
83
+ In 2006, BioRepair appeared in Europe with the first European toothpaste containing synthetic hydroxylapatite as an alternative to fluoride for the remineralization and reparation of tooth enamel. The "biomimetic hydroxylapatite" is intended to protect the teeth by creating a new layer of synthetic enamel around the tooth instead of hardening the existing layer with fluoride that chemically changes it into fluorapatite.[57]
84
+
85
+ In 1880, Doctor Washington Sheffield of New London, Connecticut, manufactured toothpaste in a collapsible tube, Dr. Sheffield's Creme Dentifrice. He had the idea after his son traveled to Paris and saw painters using paint from tubes. In New York in 1896, Colgate & Company Dental Cream was packaged in collapsible tubes imitating Sheffield's. The original collapsible toothpaste tubes were made of lead.[58][59]
86
+
87
+ Modern toothpaste gel
88
+
89
+ Promotional poster for the Kolynos toothpaste from the 1940s
90
+
91
+ Colgate Dental Cream (Toothpaste) With Gardol, c. 1950s
en/535.html.txt ADDED
@@ -0,0 +1,391 @@
1
+
2
+
3
+ Baltimore (/ˈbɔːltɪmɔːr/ BAWL-tim-or, locally: /ˈbɔːlmər/) is the most populous city in the U.S. state of Maryland, as well as the 30th most populous city in the United States, with a population of 593,490 in 2019. Baltimore is the largest independent city in the country and was established by the Constitution of Maryland[10] in 1851. As of 2017, the population of the Baltimore metropolitan area was estimated to be just under 2.802 million, making it the 21st largest metropolitan area in the country.[11] Baltimore is located about 40 miles (64 km) northeast of Washington, D.C.,[12] making it a principal city in the Washington–Baltimore combined statistical area (CSA), the fourth-largest CSA in the nation, with a calculated 2018 population of 9,797,063.[13]
4
+
5
+ The city's Inner Harbor was once the second leading port of entry for immigrants to the United States. In addition, Baltimore was a major manufacturing center.[14] After a decline in major manufacturing, heavy industry, and restructuring of the rail industry, Baltimore has shifted to a service-oriented economy. Johns Hopkins Hospital (founded 1889), Johns Hopkins Children's Center, and Johns Hopkins University (founded 1876) are among the city's top employers.[15]
6
+
7
+ With hundreds of identified districts, Baltimore has been dubbed a "city of neighborhoods." Famous residents have included writers Edgar Allan Poe, Edith Hamilton, Frederick Douglass, W.E.B. Du Bois, Ogden Nash, Gertrude Stein, F. Scott Fitzgerald, Dashiell Hammett, Upton Sinclair, Tom Clancy, Ta-Nehisi Coates, and H. L. Mencken; musicians James "Eubie" Blake, Billie Holiday, Cab Calloway, Tori Amos, Frank Zappa, Tupac Shakur, Dan Deacon, Robbie Basho, Bill Frisell, Philip Glass, Cass Elliot, and Ric Ocasek; actors and filmmakers John Waters, Barry Levinson, Divine, David Hasselhoff, Don Messick, John Kassir, Jada Pinkett Smith, Edith Massey[16] and Mo'Nique; artist Jeff Koons; baseball player Babe Ruth; swimmer Michael Phelps; radio host Ira Glass; television host Mike Rowe; Supreme Court Justice Thurgood Marshall; Speaker of the United States House of Representatives Nancy Pelosi; and United States Secretary of Housing and Urban Development Ben Carson. During the War of 1812, Francis Scott Key wrote "The Star-Spangled Banner" in Baltimore after the bombardment of Fort McHenry. His poem was set to music and popularized as a song; in 1931 it was designated as the American national anthem.[17]
8
+
9
+ Baltimore has more public statues and monuments per capita than any other city in the country,[18] and is home to some of the earliest National Register Historic Districts in the nation, including Fell's Point, Federal Hill, and Mount Vernon. These were added to the National Register between 1969 and 1971, soon after historic preservation legislation was passed. Nearly one third of the city's buildings (over 65,000) are designated as historic in the National Register, which is more than any other U.S. city.[19][20]
10
+
11
+ The city has 66 National Register Historic Districts and 33 local historic districts. The historical records of the government of Baltimore are located at the Baltimore City Archives.
12
+
13
+ The city is named after Cecil Calvert, second Lord Baltimore[21] of the Irish House of Lords and founding proprietor of the Province of Maryland.[22][23] Baltimore Manor was the name of the estate in County Longford on which the Calvert family lived in Ireland.[23][24] Baltimore is an anglicization of the Irish name Baile an Tí Mhóir, meaning "town of the big house."[23]
14
+
15
+ The Baltimore area had been inhabited by Native Americans since at least the 10th millennium BC, when Paleo-Indians first settled in the region.[25] One Paleo-Indian site and several Archaic period and Woodland period archaeological sites have been identified in Baltimore, including four from the Late Woodland period.[25] During the Late Woodland period, the archaeological culture that is called the "Potomac Creek complex" resided in the area from Baltimore south to the Rappahannock River in present-day Virginia.[26]
16
+
17
+ In the early 1600s, the immediate Baltimore vicinity was sparsely populated, if at all, by Native Americans. The Baltimore County area northward was used as hunting grounds by the Susquehannock living in the lower Susquehanna River valley. This Iroquoian-speaking people "controlled all of the upper tributaries of the Chesapeake" but "refrained from much contact with Powhatan in the Potomac region" and south into Virginia.[27]
18
+ Pressured by the Susquehannock, the Piscataway tribe, an Algonquian-speaking people, stayed well south of the Baltimore area and inhabited primarily the north bank of the Potomac River in what are now Charles and southern Prince George's counties in the coastal areas south of the Fall Line.[28][29][30]
19
+
20
+ European colonization of Maryland began with the arrival of an English ship at St. Clement's Island in the Potomac River on March 25, 1634.[31] Europeans began to settle the area further north, beginning to populate the area of Baltimore County.[32] The original county seat, known today as "Old Baltimore", was located on Bush River within the present-day Aberdeen Proving Ground.[33][34][35] The colonists engaged in sporadic warfare with the Susquehannock, whose numbers dwindled primarily from new infectious diseases, such as smallpox, endemic among the Europeans.[32] In 1661 David Jones claimed the area known today as Jonestown on the east bank of the Jones Falls stream.[36]
21
+
22
+ The colonial General Assembly of Maryland created the Port of Baltimore at old Whetstone Point (now Locust Point) in 1706 for the tobacco trade. The Town of Baltimore, on the west side of the Jones Falls, was founded and laid out on July 30, 1729. By 1752 the town had just 27 homes, including a church and two taverns.[37] Jonestown and Fells Point had been settled to the east. The three settlements, covering 60 acres (24 ha), became a commercial hub, and in 1768 were designated as the county seat.[38]
23
+
24
+ As a colonial town, Baltimore's streets were named to demonstrate loyalty to the mother country: King George, King, Queen, and Caroline streets, for example.[37]
25
+
26
+ Baltimore grew swiftly in the 18th century, its plantations producing grain and tobacco for sugar-producing colonies in the Caribbean. The profit from sugar encouraged the cultivation of cane in the Caribbean and the importation of food by planters there.[39] As noted, Baltimore served as the county seat, and in 1768 a courthouse was built to serve both the city and county. Its square was a center of community meetings and discussions.
27
+
28
+ Baltimore established its public market system in 1763.[40] Lexington Market, founded in 1782, is known as one of the oldest continuously operating public markets in the United States today.[41] Lexington Market was also a center of slave trading. Slaves were sold at numerous sites through the downtown area, with sales advertised in the Baltimore Sun.[42] Both tobacco and sugar cane were labor-intensive crops.
29
+
30
+ Baltimore established the first post office system in what became the United States in 1774,[43] and was home to the first water company chartered in the newly independent nation, the Baltimore Water Company (1792).[44][45]
31
+
32
+ Baltimore played a key part in events leading to and including the American Revolution. City leaders such as Jonathan Plowman Jr. led many residents in joining the resistance to British taxes, and merchants signed agreements to refuse to trade with Britain.[46] The Second Continental Congress met in the Henry Fite House from December 1776 to February 1777, effectively making the city the capital of the United States during this period.[47]
33
+
34
+ The Town of Baltimore, Jonestown, and Fells Point were incorporated as the City of Baltimore in 1796–1797. The city remained a part of surrounding Baltimore County and continued to serve as its county seat from 1768 to 1851, after which it became an independent city.[48]
35
+
36
+ The Battle of Baltimore against the British in 1814 inspired the composition of the United States' national anthem, "The Star-Spangled Banner," and the construction of the Battle Monument, which became the city's official emblem. A distinctive local culture started to take shape, and a unique skyline peppered with churches and monuments developed. Baltimore acquired its moniker "The Monumental City" after an 1827 visit to Baltimore by President John Quincy Adams. At an evening function Adams gave the following toast: "Baltimore: the Monumental City—May the days of her safety be as prosperous and happy, as the days of her dangers have been trying and triumphant."[50][51]
37
+
38
+ Baltimore pioneered the use of gas lighting in 1816, and its population grew rapidly in the following decades, with concomitant development of culture and infrastructure. The construction of the federally funded National Road (which later became part of U.S. Route 40) and the private Baltimore and Ohio Railroad (B. & O.) made Baltimore a major shipping and manufacturing center by linking the city with major markets in the Midwest. By 1820 its population had reached 60,000, and its economy had shifted from its base in tobacco plantations to sawmilling, shipbuilding, and textile production. These industries benefited from war but successfully shifted into infrastructure development during peacetime.[52]
39
+
40
+ Baltimore suffered one of the worst riots of the antebellum South in 1835, when bad investments led to the Baltimore bank riot.[53] Soon after, the city created the world's first dental college, the Baltimore College of Dental Surgery, in 1840, and shared in the world's first telegraph line, between Baltimore and Washington, D.C., in 1844.
41
+
42
+ Maryland, a slave state with abundant popular support for secession in some areas, remained part of the Union during the American Civil War, due in part to the Union's strategic occupation of the city in 1861.[55][56] Another factor was that the Union's capital, Washington, lay within Maryland geographically, if not politically, and was well situated to impede Baltimore and Maryland's communication or commerce with the Confederacy. Baltimore saw the first casualties of the war on April 19, 1861, when Union soldiers en route from the President Street Station to Camden Yards clashed with a secessionist mob in the Pratt Street riot.
43
+
44
+ In the midst of the Long Depression which followed the Panic of 1873, the Baltimore & Ohio Railroad company attempted to lower its workers' wages, leading to strikes and riots in the city and beyond. Strikers clashed with the National Guard, leaving 10 dead and 25 wounded.[57]
45
+
46
+ On February 7, 1904, the Great Baltimore Fire destroyed over 1,500 buildings in 30 hours, leaving more than 70 blocks of the downtown area burned to the ground. Damages were estimated at $150 million—in 1904 dollars.[58] As the city rebuilt during the next two years, lessons learned from the fire led to improvements in firefighting equipment standards.[59]
47
+
48
+ Baltimore lawyer Milton Dashiell advocated for an ordinance to bar African-Americans from moving into the Eutaw Place neighborhood in northwest Baltimore. He proposed to recognize majority-white residential blocks and majority-black residential blocks, and to prevent people from moving into housing on such blocks where they would be a minority. The Baltimore City Council passed the ordinance, and it became law on December 20, 1910, with Democratic Mayor J. Barry Mahool's signature.[60] The Baltimore segregation ordinance was the first of its kind in the United States. Many other southern cities followed with their own segregation ordinances, though the US Supreme Court ruled against them in Buchanan v. Warley (1917).[61]
49
+
50
+ The city grew in area by annexing new suburbs from the surrounding counties through 1918, when the city acquired portions of Baltimore County and Anne Arundel County.[62] A state constitutional amendment, approved in 1948, required a special vote of the citizens in any proposed annexation area, effectively preventing any future expansion of the city's boundaries.[63] Streetcars enabled the development of distant neighborhoods such as Edmondson Village, whose residents could easily commute to work downtown.[64]
51
+
52
+ Driven by migration from the Deep South and by white suburbanization, the relative size of the city's black population grew from 23.8% in 1950 to 46.4% in 1970.[65] Encouraged by real estate blockbusting techniques, recently settled white areas rapidly became all-black neighborhoods, a process that was nearly total by 1970.[66]
53
+
54
+ The Baltimore riot of 1968, coinciding with riots in other cities, followed the assassination of Martin Luther King, Jr. on April 4, 1968. Public order was not restored until April 12, 1968. The riot cost the city an estimated $10 million (US$74 million in 2020). A total of 11,000 Maryland National Guard and federal troops were ordered into the city.[67] The city experienced challenges again in 1974 when teachers, municipal workers, and police officers conducted strikes.[68]
55
+
56
+ Following the death of Freddie Gray in April 2015, the city experienced major protests and international media attention, as well as a clash between local youth and police which resulted in a state of emergency declaration and curfew.[69]
57
+
58
+ Baltimore has suffered from a high homicide rate for several decades, peaking in 1993, and again in 2015.[70][71] These deaths have taken a severe toll, especially within the local black community.[72]
59
+
60
+ By the beginning of the 1970s, Baltimore's downtown area known as the Inner Harbor had been neglected and was occupied by a collection of abandoned warehouses. The nickname "Charm City" came from a 1975 meeting of advertisers seeking to improve the city's reputation.[73][74] Efforts to redevelop the area started with the construction of the Maryland Science Center, which opened in 1976, the Baltimore World Trade Center (1977), and the Baltimore Convention Center (1979). Harborplace, an urban retail and restaurant complex, opened on the waterfront in 1980, followed by the National Aquarium, Maryland's largest tourist destination, and the Baltimore Museum of Industry in 1981. In 1995, the city opened the American Visionary Art Museum on Federal Hill. During the epidemic of HIV/AIDS in the United States, Baltimore City Health Department official Robert Mehl persuaded the city's mayor to form a committee to address food problems; the Baltimore-based charity Moveable Feast grew out of this initiative in 1990.[75][76][77] By 2010, the organization's region of service had expanded beyond Baltimore to include all of the Eastern Shore of Maryland.[78] In 1992, the Baltimore Orioles baseball team moved from Memorial Stadium to Oriole Park at Camden Yards, located downtown near the harbor. Pope John Paul II held an open-air mass at Camden Yards during his papal visit to the United States in October 1995. Three years later, the Baltimore Ravens football team moved into M&T Bank Stadium next to Camden Yards.[79]
61
+
62
+ Baltimore has seen the reopening of the Hippodrome Theatre in 2004,[80] the opening of the Reginald F. Lewis Museum of Maryland African American History & Culture in 2005, and the establishment of the National Slavic Museum in 2012. On April 12, 2012, Johns Hopkins held a dedication ceremony to mark the completion of one of the United States' largest medical complexes – the Johns Hopkins Hospital in Baltimore – which features the Sheikh Zayed Cardiovascular and Critical Care Tower and The Charlotte R. Bloomberg Children's Center. The event, held at the entrance to the $1.1 billion, 1.6-million-square-foot facility, honored the many donors, including Sheikh Khalifa bin Zayed Al Nahyan, first president of the United Arab Emirates, and Michael Bloomberg.[81][82]
63
+
64
+ On September 19, 2016, the Baltimore City Council approved a $660 million bond deal for the $5.5 billion Port Covington redevelopment project championed by Under Armour founder Kevin Plank and his real estate company, Sagamore Development. Port Covington surpassed the Harbor Point development as the largest tax-increment financing deal in Baltimore's history and among the largest urban redevelopment projects in the country.[83] The waterfront development, which includes the new headquarters for Under Armour as well as shops, housing, offices, and manufacturing spaces, is projected to create 26,500 permanent jobs with a $4.3 billion annual economic impact.[84] Goldman Sachs invested $233 million into the redevelopment project.[85]
65
+
66
+ Baltimore is in north-central Maryland on the Patapsco River close to where it empties into the Chesapeake Bay. The city is also located on the fall line between the Piedmont Plateau and the Atlantic coastal plain, which divides Baltimore into "lower city" and "upper city". The city's elevation ranges from sea level at the harbor to 480 feet (150 m) in the northwest corner near Pimlico.[6]
67
+
68
+ According to the 2010 Census, the city has a total area of 92.1 square miles (239 km2), of which 80.9 sq mi (210 km2) is land and 11.1 sq mi (29 km2) is water.[86] The total area is 12.1 percent water.
69
+
70
+ Baltimore is almost completely surrounded by Baltimore County, but is politically independent of it. It is bordered by Anne Arundel County to the south.
71
+
72
+ Baltimore exhibits examples of architecture from every period spanning more than two centuries, including work by architects such as Benjamin Latrobe, George A. Frederick, John Russell Pope, Mies van der Rohe, and I. M. Pei.
73
+
74
+ The city is rich in architecturally significant buildings in a variety of styles. The Baltimore Basilica (1806–1821) is a neoclassical design by Benjamin Latrobe, and also the oldest Catholic cathedral in the United States. In 1813 Robert Cary Long, Sr., built for Rembrandt Peale the first substantial structure in the United States designed expressly as a museum. Restored, it is now the Municipal Museum of Baltimore, or popularly the Peale Museum.
75
+
76
+ The McKim Free School was founded and endowed by John McKim, although the building was erected by his son Isaac in 1822 after a design by William Howard and William Small. It reflects the popular interest in Greece when the nation was securing its independence, as well as a scholarly interest in recently published drawings of Athenian antiquities.
77
+
78
+ The Phoenix Shot Tower (1828), at 234.25 feet (71.40 m) tall, was the tallest building in the United States until the time of the Civil War, and is one of few remaining structures of its kind.[87] It was constructed without the use of exterior scaffolding. The Sun Iron Building, designed by R.C. Hatfield in 1851, was the city's first iron-front building and was a model for a whole generation of downtown buildings. Brown Memorial Presbyterian Church, built in 1870 in memory of financier George Brown, has stained glass windows by Louis Comfort Tiffany and has been called "one of the most significant buildings in this city, a treasure of art and architecture" by Baltimore Magazine.[88][89]
79
+
80
+ The 1845 Greek Revival-style Lloyd Street Synagogue is one of the oldest synagogues in the United States. The Johns Hopkins Hospital, designed by Lt. Col. John S. Billings in 1876, was a considerable achievement for its day in functional arrangement and fireproofing.
81
+
82
+ I.M. Pei's World Trade Center (1977), at 405 feet (123 m), is the tallest equilateral pentagonal building in the world.
83
+
84
+ The Harbor East area has seen the addition of two recently completed towers: a 24-floor tower that is the new world headquarters of Legg Mason, and a 21-floor Four Seasons Hotel complex.
85
+
86
+ The streets of Baltimore are organized in a grid pattern, lined with tens of thousands of brick and formstone-faced rowhouses. In The Baltimore Rowhouse, Mary Ellen Hayward and Charles Belfoure considered the rowhouse as the architectural form defining Baltimore as "perhaps no other American city."[90] In the mid-1790s, developers began building entire neighborhoods of the British-style rowhouses, which became the dominant house type of the city early in the 19th century.[91]
87
+
88
+ Formstone facings, now a common feature on Baltimore rowhouses, were an addition patented in 1937 by Albert Knight. John Waters characterized formstone as "the polyester of brick" in a 30-minute documentary film, Little Castles: A Formstone Phenomenon.[92]
89
+
90
+ Oriole Park at Camden Yards is a Major League Baseball park, opened in 1992, which was built as a retro-style baseball park. Camden Yards, along with the National Aquarium, has helped revive the Inner Harbor from what was once an industrial district full of dilapidated warehouses into a bustling commercial district full of bars, restaurants, and retail establishments. Today, the Inner Harbor has some of the most desirable real estate in the Mid-Atlantic.[93]
91
+
92
+ After an international competition, the University of Baltimore School of Law awarded the German firm Behnisch Architekten first prize for its design, which was selected for the school's new home. After the building's opening in 2013, the design won additional honors, including an ENR National "Best of the Best" Award.[94]
93
+
94
+ Baltimore's newly rehabilitated Everyman Theatre was honored by Baltimore Heritage at its 2013 Preservation Awards Celebration, where it received an Adaptive Reuse and Compatible Design Award. Baltimore Heritage is Baltimore's nonprofit historic and architectural preservation organization, which works to preserve and promote the city's historic buildings and neighborhoods.[95]
95
+
96
+ Baltimore is officially divided into nine geographical regions: North, Northeast, East, Southeast, South, Southwest, West, Northwest, and Central, each patrolled by a respective Baltimore Police Department district. Interstate 83 and Charles Street down to Hanover Street and Ritchie Highway serve as the east–west dividing line, and Eastern Avenue to Route 40 as the north–south dividing line; however, Baltimore Street is the north–south dividing line for the U.S. Postal Service.[107] It is not uncommon for locals to divide the city simply into East and West Baltimore, using Charles Street or I-83 as the dividing line, or into North and South Baltimore, using Baltimore Street as the dividing line.[citation needed]
97
+
98
+ Central Baltimore, originally called the Middle District,[108] stretches north of the Inner Harbor up to the edge of Druid Hill Park. Downtown Baltimore has mainly served as a commercial district with limited residential opportunities; however, between 2000 and 2010, the downtown population grew 130 percent as old commercial properties have been replaced by residential property.[109] Still the city's main commercial area and business district, it includes Baltimore's sports complexes: Oriole Park at Camden Yards, M&T Bank Stadium, and the Royal Farms Arena; and the shops and attractions in the Inner Harbor: Harborplace, the Baltimore Convention Center, the National Aquarium, Maryland Science Center, Pier Six Pavilion, and Power Plant Live.[107]
99
+
100
+ The University of Maryland, Baltimore, the University of Maryland Medical Center, and Lexington Market are also in the central district, as well as the Hippodrome and many nightclubs, bars, restaurants, shopping centers and various other attractions.[107][108] The northern portion of Central Baltimore, between downtown and the Druid Hill Park, is home to many of the city's cultural opportunities. Maryland Institute College of Art, the Peabody Institute (music conservatory), George Peabody Library, Enoch Pratt Free Library – Central Library, the Lyric Opera House, the Joseph Meyerhoff Symphony Hall, the Walters Art Museum, the Maryland Historical Society and its Enoch Pratt Mansion, and several galleries are located in this region.[110]
101
+
102
+ North Baltimore lies directly north of Central Baltimore and is bounded on the east by The Alameda and on the west by Pimlico Road. Loyola University Maryland, Johns Hopkins University Homewood Campus, St. Mary's Seminary and University and Notre Dame of Maryland University are located in this district. Baltimore Polytechnic Institute high school for mathematics, science and engineering, and adjacent Western High School, the oldest remaining public girls secondary school in America, share a joint campus at West Cold Spring Lane and Falls Road.[citation needed]
103
+
104
+ Several historic and notable neighborhoods are in this district: Govans (1755), Roland Park (1891), Guilford (1913), Homeland (1924), Hampden, Woodberry, Old Goucher (the original campus of Goucher College), and Jones Falls. Along the York Road corridor going north are the large neighborhoods of Charles Village, Waverly, and Mount Washington. The Station North Arts and Entertainment District is also located in North Baltimore.[111]
105
+
106
+ South Baltimore, a mixed industrial and residential area, consists of the "Old South Baltimore" peninsula below the Inner Harbor and east of the old B&O Railroad's Camden line tracks and Russell Street downtown. It is a culturally, ethnically, and socioeconomically diverse waterfront area with neighborhoods such as Locust Point and Riverside around a large park of the same name.[112] Just south of the Inner Harbor, the historic Federal Hill neighborhood, is home to many working professionals, pubs and restaurants. At the end of the peninsula is historic Fort McHenry, a National Park since the end of World War I, when the old U.S. Army Hospital surrounding the 1798 star-shaped battlements was torn down.[113]
107
+
108
+ The area south of the Vietnam Veterans (Hanover Street) Bridge and the Patapsco River, previously a group of independent towns in Anne Arundel County, was annexed to the city in 1919.[citation needed] Across the Hanover Street Bridge are residential areas such as Cherry Hill,[114] Brooklyn, and Curtis Bay, with Fort Armistead bordering the city's south side from Anne Arundel County.[citation needed]
109
+
110
+ Northeast Baltimore is primarily a residential district, home to Morgan State University, and is bounded by the city line of 1919 to the north and east, Sinclair Lane, Erdman Avenue, and Pulaski Highway to the south, and The Alameda to the west. Also in this wedge of the city, on 33rd Street, is Baltimore City College high school, the third-oldest active public secondary school in the United States, founded downtown in 1839.[115] Across Loch Raven Boulevard is the former site of the old Memorial Stadium, home of the Baltimore Colts, Baltimore Orioles, and Baltimore Ravens, now replaced by a YMCA athletic and housing complex.[116][117] Lake Montebello is in Northeast Baltimore.[108]
111
+
112
+ Located below Sinclair Lane and Erdman Avenue, above Orleans Street, East Baltimore is mainly made up of residential neighborhoods. This section of East Baltimore is home to Johns Hopkins Hospital, Johns Hopkins University School of Medicine, and Johns Hopkins Children's Center on Broadway. Notable neighborhoods include: Armistead Gardens, Broadway East, Barclay, Ellwood Park, Greenmount, and McElderry Park.[108]
113
+
114
+ This area was the on-site film location for Homicide: Life on the Street, The Corner and The Wire.[118]
115
+
116
+ Southeast Baltimore, located below Fayette Street, bordering the Inner Harbor and the Northwest Branch of the Patapsco River to the west, the city line of 1919 on its eastern boundary, and the Patapsco River to the south, is a mixed industrial and residential area. Patterson Park, the "Best Backyard in Baltimore,"[119] as well as the Highlandtown Arts District and Johns Hopkins Bayview Medical Center, are located in Southeast Baltimore. The Shops at Canton Crossing opened in 2013.[120] The Canton neighborhood is located along Baltimore's prime waterfront. Other historic neighborhoods include: Fells Point, Patterson Park, Butchers Hill, Highlandtown, Greektown, Harbor East, Little Italy, and Upper Fell's Point.[108]
117
+
118
+ Northwest Baltimore is bounded by the county line to the north and west, Gwynns Falls Parkway to the south, and Pimlico Road to the east. It is home to Pimlico Race Course, Sinai Hospital, and the headquarters of the NAACP. Its neighborhoods are mostly residential and are dissected by Northern Parkway. The area has been the center of Baltimore's Jewish community since after World War II. Notable neighborhoods include: Pimlico, Mount Washington, Cheswolde, and Park Heights.[121]
119
+
120
+ West Baltimore is west of downtown and Martin Luther King Jr. Boulevard and is bounded by Gwynns Falls Parkway, Fremont Avenue, and West Baltimore Street. The Old West Baltimore Historic District includes the neighborhoods of Harlem Park, Sandtown-Winchester, Druid Heights, Madison Park, and Upton.[122][123] Originally a predominantly German neighborhood, by the latter half of the 1800s Old West Baltimore was home to a substantial section of the city's African American population. It became the largest neighborhood for the city's black community and its cultural, political, and economic center.[122] Coppin State University, Mondawmin Mall, and Edmondson Village are located in this district. The area's crime problems have provided subject material for television series, such as The Wire.[124] Local organizations, such as the Sandtown Habitat for Humanity and the Upton Planning Committee, have been steadily transforming parts of formerly blighted areas of West Baltimore into clean, safe communities.[125][126]
121
+
122
+ Southwest Baltimore is bounded by the Baltimore County line to the west, West Baltimore Street to the north, and Martin Luther King Jr. Boulevard and Russell Street/Baltimore-Washington Parkway (Maryland Route 295) to the east. Notable neighborhoods in Southwest Baltimore include: Pigtown, Carrollton Ridge, Ridgely's Delight, Leakin Park, Violetville, Lakeland, and Morrell Park.[108]
123
+
124
+ St. Agnes Hospital on Wilkens and Caton avenues[108] is located in this district, with the neighboring Cardinal Gibbons High School, the former site of Babe Ruth's alma mater, St. Mary's Industrial School.[citation needed] Also through this segment of Baltimore ran the beginnings of the historic National Road, which was constructed beginning in 1806 along Old Frederick Road and continuing into the county on Frederick Road into Ellicott City, Maryland.[citation needed] Other sites in this district are Carroll Park, one of the city's largest parks; the colonial Mount Clare Mansion; and Washington Boulevard, which dates to pre-Revolutionary War days as the prime route out of the city to Alexandria, Virginia, and Georgetown on the Potomac River.[citation needed]
125
+
126
+ Belair-Edison
127
+
128
+ Woodberry
129
+
130
+ Reservoir Hill
131
+
132
+ Station North
133
+
134
+ Fells Point
135
+
136
+ Roland Park
137
+
138
+ Mount Vernon
139
+
140
+ The City of Baltimore is bordered by the following communities, all unincorporated census-designated places.
141
+
142
+ Baltimore lies in the humid subtropical climate zone of the Köppen climate classification, with long hot summers, cool to mild winters, and a summer peak in annual precipitation.[127][128] Baltimore is part of USDA plant hardiness zones 7b and 8a.[129] Summers are normally hot, with occasional late-day thunderstorms. July, the hottest month, has a mean temperature of 80.3 °F (26.8 °C). Winters are chilly to mild but variable, with sporadic snowfall: January has a daily average of 35.8 °F (2.1 °C),[130] though temperatures reach 50 °F (10 °C) rather often, and can drop below 20 °F (−7 °C) when Arctic air masses affect the area.[130]
143
+
144
+ Spring and autumn are warm, with spring being the wettest season in terms of the number of precipitation days. Summers are hot and humid, with a daily average in July of 80.7 °F (27.1 °C),[130] and the combination of heat and humidity leads to rather frequent thunderstorms. A southeasterly bay breeze off the Chesapeake often occurs on summer afternoons when hot air rises over inland areas; prevailing winds from the southwest interacting with this breeze, as well as the city proper's urban heat island (UHI), can seriously exacerbate air quality.[131][132] In late summer and early autumn the track of hurricanes or their remnants may cause flooding in downtown Baltimore, despite the city being far removed from the typical coastal storm surge areas.[133]
145
+
146
+ The average seasonal snowfall is 20.1 inches (51 cm),[134] but it varies greatly depending on the winter, with some seasons seeing minimal snow while others see several major Nor'easters.[a] Due to a lessened urban heat island effect compared to the city proper and greater distance from the moderating Chesapeake Bay, the outlying and inland parts of the Baltimore metro area are usually cooler, especially at night, than the city proper and the coastal towns. Thus, in the northern and western suburbs, winter snowfall is more significant, and some areas average more than 30 in (76 cm) of snow per winter.[136] It is not uncommon for the rain-snow line to set up in the metro area.[137] Freezing rain and sleet occur a few times each winter in the area, as warm air overrides cold air at the low to mid-levels of the atmosphere. When the wind blows from the east, the cold air gets dammed against the mountains to the west, and the result is freezing rain or sleet.
147
+
148
+ Extreme temperatures range from −7 °F (−22 °C) on February 9, 1934, and February 10, 1899,[b] up to 108 °F (42 °C) on July 22, 2011.[138][139] On average, 100 °F (38 °C)+ temperatures occur on 0.9 days annually, 90 °F (32 °C)+ on 37 days, and there are 10 days where the high fails to reach the freezing mark.[130]
149
+
150
+ According to the United States Census, there were 593,490 people living in Baltimore City in 238,436 households as of July 1, 2019. This represented a decrease of 4.4% from the 2010 Census.[149] Baltimore's population has declined at each census since its peak in 1950.[109]
151
+
152
+ In 2011, then-Mayor Stephanie Rawlings-Blake said her main goal was to increase the city's population by improving city services to reduce the number of people leaving the city and by passing legislation protecting immigrants' rights to stimulate growth.[150] For the first time in decades, in July 2012, the U.S. Census Bureau's census estimate showed the population grew by 1,100 residents, a 0.2% increase from the previous year.[151] Baltimore is sometimes identified as a sanctuary city.[152] Mayor Jack Young said in 2019 that Baltimore will not assist ICE agents with immigration raids.[153]
153
+
154
+ Gentrification has increased since the 2000 census, primarily in East Baltimore, downtown, and Central Baltimore.[154] Downtown Baltimore and its surrounding neighborhoods are seeing a resurgence of young professionals and immigrants, mirroring major cities across the country.[151]
155
+
156
+ After New York City, Baltimore was the second city in the United States to reach a population of 100,000.[155][156] From the 1830 through 1850 U.S. censuses, Baltimore was the second most-populous city,[156][157] before being surpassed by Philadelphia in 1860.[158] It was among the top 10 cities in population in the United States in every census up to the 1980 census,[159] and after World War II had a population of nearly 1 million.
157
+
158
+ According to the 2010 Census, Baltimore's population is 63.7% Black, 29.6% White, 2.3% Asian, and 0.4% American Indian and Alaska Native. Across races, 4.2% of the population is of Hispanic, Latino, or Spanish origin.[148] Females made up 53.4% of the population. The median age was 35 years old, with 22.4% under 18 years old, 65.8% from 18 to 64 years old, and 11.8% 65 or older.[148]
159
+
160
+ In 2005, approximately 30,778 people (6.5%) identified as gay, lesbian, or bisexual.[163] In 2012, same-sex marriage in Maryland was legalized, going into effect January 1, 2013.[164]
161
+
162
+ In 2009, the median household income was $42,241 and the median income per capita was $25,707, compared to the national median income of $53,889 per household and $28,930 per capita. In Baltimore, 23.7% of the population lived below the poverty line, compared to 13.5% nationwide.[148]
163
+
164
+ Housing in Baltimore is relatively inexpensive compared with other large coastal cities. The median sale price for homes in Baltimore in 2012 was $95,000.[165] In line with national trends, however, residents still face slowly rising rents despite the housing collapse (up 3% in the summer of 2010).[166]
165
+
166
+ The homeless population in Baltimore is steadily increasing; it exceeded 4,000 people in 2011. The increase in the number of young homeless people was particularly severe.[167]
167
+
168
+ As of 2015, life expectancy in Baltimore was 74 to 75 years, compared to the U.S. average of 78 to 80. Fourteen neighborhoods had lower life expectancies than North Korea. The life expectancy in Downtown/Seton Hill was comparable to that of Yemen.[168]
169
+
170
+ A little under half (47%) of people in Baltimore report affiliating with a religion. Catholicism is the largest religious affiliation, comprising 12 percent of the population, followed by the Baptist Church (7%) and Judaism (4.3%). Around 11.4% identify with other Christian denominations.[169][170]
171
+
172
+ As of 2010, 91% (526,705) of Baltimore residents five years old and older spoke only English at home. Close to 4% (21,661) spoke Spanish. Other languages, such as African languages, French, and Chinese are spoken by less than 1% of the population.[171]
173
+
174
+ Crime in Baltimore, generally concentrated in areas high in poverty, has been far above the national average for many years. Overall reported crime has dropped by 60% from the mid-1990s to the mid-2010s, but homicide rates remain high and exceed the national average. The worst years for crime in Baltimore overall were from 1993 to 1996, with 96,243 crimes reported in 1995. Baltimore's 344 homicides in 2015 represented the highest homicide rate in the city's recorded history—52.5 per 100,000 people, surpassing the record set in 1993—and the second-highest for U.S. cities, behind St. Louis and ahead of Detroit. To put that in perspective, New York City, with a 2015 population of 8,491,079, recorded a total of 339 homicides in 2015. Baltimore had a 2015 population of 621,849, which means that in 2015 Baltimore had a homicide rate roughly 14 times higher than New York City's. Of Baltimore's 344 homicides in 2015, 321 (93.3%) of the victims were African-American.[citation needed] Chicago, which saw 762 homicides in 2016 compared to Baltimore's 318, still had a homicide rate (27.2) that was half of Baltimore's, because Chicago's population is more than four times greater than Baltimore's.[citation needed] As of 2018, the murder rate in Baltimore was higher than that of El Salvador, Guatemala, or Honduras.[172] Drug use and deaths by drug use (particularly drugs used intravenously, such as heroin) are a related problem that has crippled Baltimore for decades. Among cities with populations greater than 400,000, Baltimore ranked second in the United States in its rate of opiate drug deaths, behind Dayton, Ohio. The DEA reported that 10% of Baltimore's population – about 64,000 people – are addicted to heroin.[173][174][175][176][177]
175
+
176
+ In 2011, Baltimore police reported 196 homicides, the lowest number in the city since a count of 197 homicides in 1978 and far lower than the peak homicide count of 353 slayings in 1993. City leaders at the time credited a sustained focus on repeat violent offenders and increased community engagement for the continued drop, reflecting a nationwide decline in crime.[178][179]
177
+
178
+ On August 8, 2014, Baltimore's new youth curfew law went into effect. It prohibits unaccompanied children under age 14 from being on the streets after 9 p.m. and those aged 14–16 from being out after 10 p.m. during the week and 11 p.m. on weekends and during the summer. The goal is to keep children out of dangerous places and reduce crime.[180]
179
+
180
+ Crime in Baltimore reached another peak in 2015, when the year's tally of 344 homicides was second only to the record 353 in 1993, when Baltimore had about 100,000 more residents. Killings were on pace with recent years in the early months of 2015 but skyrocketed after the unrest and rioting of late April; killings topped 30 per month in five of the next eight months. Nearly 90 percent of 2015's homicides were the result of shootings, renewing calls for new gun laws. In 2016, according to annual crime statistics released by the Baltimore Police Department, there were 318 murders in the city.[181] This total marked a 7.56 percent decline in homicides from 2015.
181
+
182
+ In an interview with The Guardian, on November 2, 2017,[182] David Simon, himself a former police reporter for The Baltimore Sun, ascribed the most recent surge in murders to the high-profile decision by Baltimore state's attorney, Marilyn Mosby, to charge six city police officers following the death of Freddie Gray after he fell into a coma while in police custody in April 2015. "What Mosby basically did was send a message to the Baltimore police department: 'I'm going to put you in jail for making a bad arrest.' So officers figured it out: 'I can go to jail for making the wrong arrest, so I'm not getting out of my car to clear a corner,' and that's exactly what happened post-Freddie Gray." In Baltimore, "arrest numbers have plummeted from more than 40,000 in 2014, the year before Gray's death and the subsequent charges against the officers, to about 18,000 [as of November 2017]. This happened even as homicides soared from 211 in 2014 to 344 in 2015 – an increase of 63%."[182]
183
+
184
+ Once a predominantly industrial town, with an economic base focused on steel processing, shipping, auto manufacturing (General Motors Baltimore Assembly), and transportation, the city experienced deindustrialization which cost residents tens of thousands of low-skill, high-wage jobs.[183] The city now relies on a low-wage service economy, which accounts for 31% of jobs in the city.[184][185]
185
+ Around the turn of the 20th century, Baltimore was the leading US manufacturer of rye whiskey and straw hats. It also led in the refining of crude oil, brought to the city by pipeline from Pennsylvania.[186][187][188]
186
+
187
+ As of March 2018, the U.S. Bureau of Labor Statistics calculated Baltimore's unemployment rate at 5.8%,[189] while one quarter of Baltimore residents (and 37% of Baltimore children) lived in poverty.[190] The 2012 closure of a major steel plant at Sparrows Point was expected to have a further impact on employment and the local economy.[191] The Census Bureau reported in 2013 that 207,000 workers commute into Baltimore City each day.[192] Downtown Baltimore is the primary economic asset within Baltimore City and the region, with 29.1 million square feet of office space. The tech sector is rapidly growing, as the Baltimore metro area ranks 8th in the CBRE Tech Talent Report among 50 U.S. metro areas for high growth rate and number of tech professionals.[193] Forbes ranked Baltimore fourth among America's "new tech hot spots".[194]
188
+
189
+ The city is home to the Johns Hopkins Hospital. Other large companies in Baltimore include Under Armour,[195] BRT Laboratories, Cordish Company,[196] Legg Mason, McCormick & Company, T. Rowe Price, and Royal Farms.[197] A sugar refinery owned by American Sugar Refining is one of Baltimore's cultural icons. Nonprofits based in Baltimore include Lutheran Services in America and Catholic Relief Services.
190
+
191
+ Almost a quarter of the jobs in the Baltimore region were in science, technology, engineering and math as of mid 2013, in part attributed to the city's extensive undergraduate and graduate schools; maintenance and repair experts were included in this count.[198]
192
+
193
+ The center of international commerce for the region is the World Trade Center Baltimore. It houses the Maryland Port Administration and U.S. headquarters for major shipping lines. Among all U.S. ports, Baltimore is ranked 9th for total dollar value of cargo and 13th for cargo tonnage. In 2014, cargo moving through the port totaled 29.5 million tons, down from 30.3 million tons in 2013. The value of cargo traveling through the port in 2014 came to $52.5 billion, down from $52.6 billion in 2013. The Port of Baltimore generates $3 billion in annual wages and salary, as well as supporting 14,630 direct jobs and 108,000 jobs connected to port work. In 2014, the port also generated more than $300 million in taxes. It serves over 50 ocean carriers making nearly 1,800 annual visits. Among all U.S. ports, Baltimore is first in handling automobiles, light trucks, farm and construction machinery, and imported forest products, aluminum, and sugar. The port is second in coal exports. The Port of Baltimore's cruise industry, which offers year-round trips on several lines, supports over 500 jobs and brings in over $90 million to Maryland's economy annually. Growth at the port continues, with the Maryland Port Administration planning to turn the southern tip of the former steel mill into a marine terminal, primarily for car and truck shipments, but also for anticipated new business coming to Baltimore after the completion of the Panama Canal expansion project.[199]
194
+
195
+ Baltimore's history and attractions have allowed the city to become a strong tourist destination on the East Coast. In 2014, the city hosted 24.5 million visitors, who spent $5.2 billion.[200] The Baltimore Visitor Center, which is operated by Visit Baltimore, is located on Light Street in the Inner Harbor. Much of the city's tourism centers around the Inner Harbor, with the National Aquarium being Maryland's top tourist destination. Baltimore Harbor's restoration has made it "a city of boats," with several historic ships and other attractions on display and open for the public to visit. These include the USS Constellation, the last Civil War-era vessel afloat, docked at the head of the Inner Harbor; the USS Torsk, a submarine that holds the Navy's record for dives (more than 10,000); and the Coast Guard cutter Taney, the last surviving U.S. warship that was in Pearl Harbor during the Japanese attack on December 7, 1941, and which engaged Japanese Zero aircraft during the battle.[201]
196
+
197
+ Also docked is the lightship Chesapeake, which for decades marked the entrance to Chesapeake Bay; and the Seven Foot Knoll Lighthouse, the oldest surviving screw-pile lighthouse on Chesapeake Bay, which once marked the mouth of the Patapsco River and the entrance to Baltimore. All of these attractions are owned and maintained by the Historic Ships in Baltimore organization. The Inner Harbor is also the home port of Pride of Baltimore II, the state of Maryland's "goodwill ambassador" ship, a reconstruction of a famous Baltimore Clipper ship.[201]
198
+
199
+ Other tourist destinations include sporting venues such as Oriole Park at Camden Yards, M&T Bank Stadium, and Pimlico Race Course; Fort McHenry; the Mount Vernon, Federal Hill, and Fells Point neighborhoods; Lexington Market; Horseshoe Casino; and museums such as the Walters Art Museum, the Baltimore Museum of Industry, the Babe Ruth Birthplace and Museum, the Maryland Science Center, and the B&O Railroad Museum.
200
+
201
+ The Baltimore Convention Center is home to BronyCon, the world's largest convention for fans of My Little Pony: Friendship is Magic. The convention had over 6,300 attendees in 2017, and 10,011 attendees during its peak in 2015.[citation needed]
202
+
203
+ Baltimore Visitor Center in Inner Harbor
204
+
205
+ Fountain near visitor center in Inner Harbor
206
+
207
+ Sunset views from Baltimore's Inner Harbor
208
+
209
+ Baltimore is the home of the National Aquarium, one of the world's largest.
210
+
211
+ Historically a working-class port town, Baltimore has sometimes been dubbed a "city of neighborhoods", with 72 designated historic districts[202] traditionally occupied by distinct ethnic groups. Most notable today are three downtown areas along the port: the Inner Harbor, frequented by tourists due to its hotels, shops, and museums; Fells Point, once a favorite entertainment spot for sailors but now refurbished and gentrified (and featured in the movie Sleepless in Seattle); and Little Italy, located between the other two, where Baltimore's Italian-American community is based, and where U.S. House Speaker Nancy Pelosi grew up. Further inland, Mount Vernon is the traditional center of cultural and artistic life of the city; it is home to a distinctive Washington Monument, set atop a hill in a 19th-century urban square, that predates the better-known monument in Washington, D.C. by several decades. Baltimore also has a significant German American population,[203] and was the second-largest port of immigration to the United States, behind Ellis Island in New York and New Jersey. Between 1820 and 1989, almost 2 million immigrants, who were German, Polish, English, Irish, Russian, Lithuanian, French, Ukrainian, Czech, Greek, and Italian, came to Baltimore, most between 1861 and 1930. Baltimore was averaging forty thousand immigrants per year by 1913, before World War I closed off the flow of immigration. By 1970, Baltimore's heyday as an immigration center was a distant memory. There was also a Chinatown dating back to at least the 1880s, which consisted of no more than 400 Chinese residents. A local Chinese-American association remains based there, but only one Chinese restaurant remained as of 2009.
212
+
213
+ Baltimore has a long history of beer-making, an art that thrived in the city from the 1800s to the 1950s, with over 100 old breweries in its past.[204] The best remaining examples of that history are the old American Brewery Building on North Gay Street and the National Brewing Company building in the Brewer's Hill neighborhood. In the 1940s the National Brewing Company introduced the nation's first six-pack. National's two most prominent brands were National Bohemian Beer, colloquially known as "Natty Boh", and Colt 45. Listed on the Pabst website as a "Fun Fact", Colt 45 was named after running back #45 Jerry Hill of the 1963 Baltimore Colts, not the .45 caliber handgun ammunition round. Both brands are still made today, albeit outside of Maryland, and are served all around the Baltimore area at bars, as well as at Orioles and Ravens games.[205] The Natty Boh logo appears on all of the brand's cans, bottles, and packaging, and merchandise featuring its mascot can still easily be found in shops in Maryland, including several in Fells Point.
214
+
215
+ Each year Artscape takes place in the city's Bolton Hill neighborhood, due to its proximity to the Maryland Institute College of Art. Artscape styles itself as the "largest free arts festival in America".[206] Each May, the Maryland Film Festival takes place in Baltimore, using all five screens of the historic Charles Theatre as its anchor venue. Many movies and television shows have been filmed in Baltimore. The Wire was set and filmed in Baltimore; House of Cards and Veep are set in Washington, D.C., but filmed in Baltimore.[207]
216
+
217
+ Baltimore has cultural museums in many areas of study. The Baltimore Museum of Art and the Walters Art Museum are internationally renowned for their art collections. The Baltimore Museum of Art has the largest holding of works by Henri Matisse in the world.[208] The American Visionary Art Museum has been designated by Congress as America's national museum for visionary art.[209] The National Great Blacks In Wax Museum is the first African American wax museum in the country, featuring more than 150 life-size and lifelike wax figures.[44]
218
+
219
+ Baltimore is known for its Maryland blue crabs, crab cake, Old Bay Seasoning, pit beef, and the "chicken box." The city has many restaurants in or around the Inner Harbor. Among the best known and most acclaimed are the Charleston, Woodberry Kitchen, and the Charm City Cakes bakery featured on the Food Network's Ace of Cakes. The Little Italy neighborhood's biggest draw is the food. Fells Point is also a foodie neighborhood for tourists and locals, and is home to the oldest continuously running tavern in the country, "The Horse You Came in on Saloon."[210] Many of the city's upscale restaurants can be found in Harbor East. Five public markets are located across the city. The Baltimore Public Market System is the oldest continuously operating public market system in the United States.[211] Lexington Market is one of the longest-running markets in the world and the longest-running in the country, having operated since 1782 at its original site. Baltimore is the last place in America where one can still find arabbers, vendors who sell fresh fruits and vegetables from horse-drawn carts that travel up and down neighborhood streets.[212] Food- and drink-rating site Zagat ranked Baltimore second in a list of the 17 best food cities in the country in 2015.[213]
220
+
221
+ Baltimore city, along with its surrounding regions, is home to a unique local dialect. It is part of Mid-Atlantic American English and is noted to be very similar to the Philadelphia accent, albeit with more southern influences.[214][215]
222
+
223
+ The so-called "Bawlmerese" accent is known for its characteristic pronunciation of its long "o" vowel, in which an "eh" sound is added before the long "o" sound (/oʊ/ shifts to [ɘʊ], or even [eʊ]).[216] It also adopts Philadelphia's pattern of the short "a" sound, such that the tensed vowel in words like "bath" or "ask" does not match the more relaxed one in "sad" or "act".[214]
224
+
225
+ Baltimore native John Waters parodies the city and its dialect extensively in his films. Most of them are filmed and/or set in Baltimore, including the 1972 cult classic Pink Flamingos, as well as Hairspray and its Broadway musical remake.
226
+
227
+ Baltimore has three state-designated arts and entertainment (A & E) districts: the Station North Arts and Entertainment District, the Highlandtown Arts District, and the Bromo Arts & Entertainment District. The Baltimore Office of Promotion & The Arts (BOPA), a non-profit organization, produces events and arts programs and manages several facilities. It is the official Baltimore City arts council. BOPA coordinates Baltimore's major events, including the New Year's Eve and July 4 celebrations at the Inner Harbor; Artscape, billed as America's largest free arts festival; the Baltimore Book Festival; the Baltimore Farmers' Market & Bazaar; School 33 Art Center's Open Studio Tour; and the Dr. Martin Luther King, Jr. Parade.[217]
228
+
229
+ The Baltimore Symphony Orchestra is an internationally renowned orchestra, founded in 1916 as a publicly funded municipal organization. The current music director is Marin Alsop, a protégée of Leonard Bernstein. Center Stage is the premier theater company in the city and a regionally well-respected group. The Lyric Opera House is the home of Lyric Opera Baltimore, which operates there as part of the Patricia and Arthur Modell Performing Arts Center. The Baltimore Consort has been a leading early music ensemble for over twenty-five years. The France-Merrick Performing Arts Center, home of the restored Thomas W. Lamb-designed Hippodrome Theatre, has afforded Baltimore the opportunity to become a major regional player in touring Broadway and other performing arts presentations. The renovation of Baltimore's historic theatres, such as the Everyman, Centre, Senator, and most recently the Parkway, has become widespread throughout the city. Other buildings have been reused, such as the former Mercantile Deposit and Trust Company bank building, now the Chesapeake Shakespeare Company Theater.
230
+
231
+ Baltimore also boasts a wide array of professional (non-touring) and community theater groups. Aside from Center Stage, resident troupes in the city include The Vagabond Players, the oldest continuously operating community theater group in the country, Everyman Theatre, Single Carrot Theatre, and Baltimore Theatre Festival. Community theaters in the city include Fells Point Community Theatre and the Arena Players Inc., which is the nation's oldest continuously operating African American community theater.[218] In 2009, the Baltimore Rock Opera Society, an all-volunteer theatrical company, launched its first production.[219]
232
+
233
+ Baltimore is home to the Pride of Baltimore Chorus, a three-time international silver medalist women's chorus, affiliated with Sweet Adelines International. The Maryland State Boychoir is located in the northeastern Baltimore neighborhood of Mayfield.
234
+
235
+ Baltimore is the home of non-profit chamber music organization Vivre Musicale. VM won a 2011–2012 award for Adventurous Programming from the American Society of Composers, Authors and Publishers and Chamber Music America.[220]
236
+
237
+ The Peabody Institute, located in the Mount Vernon neighborhood, is the oldest conservatory of music in the United States.[221] Established in 1857, it is one of the most prestigious in the world,[221] along with Juilliard, Eastman, and the Curtis Institute. The Morgan State University Choir is also one of the nation's most prestigious university choral ensembles.[222] The city is home to the Baltimore School for the Arts, a public high school in the Mount Vernon neighborhood of Baltimore. The institution is nationally recognized for its success in preparation for students entering music (vocal/instrumental), theatre (acting/theater production), dance, and visual arts.
238
+
239
+ Baltimore has a long and storied baseball history, including its distinction as the birthplace of Babe Ruth in 1895. The original 19th-century Baltimore Orioles were one of the most successful early franchises, featuring numerous Hall of Famers during their years from 1882 to 1899.
240
+ As one of the eight inaugural American League franchises, the Baltimore Orioles played in the AL during the 1901 and 1902 seasons. The team moved to New York City before the 1903 season and was renamed the New York Highlanders, which later became the New York Yankees.
241
+ Ruth played for the minor league Baltimore Orioles team, which was active from 1903 to 1914. After playing one season in 1915 as the Richmond Climbers, the team returned the following year to Baltimore, where it played as the Orioles until 1953.[223]
242
+
243
+ The team currently known as the Baltimore Orioles has represented Major League Baseball locally since 1954 when the St. Louis Browns moved to the city of Baltimore. The Orioles advanced to the World Series in 1966, 1969, 1970, 1971, 1979 and 1983, winning three times (1966, 1970 and 1983), while making the playoffs all but one year (1972) from 1969 through 1974.
244
+
245
+ In 1995, local player (and later Hall of Famer) Cal Ripken, Jr. broke Lou Gehrig's streak of 2,130 consecutive games played, for which Ripken was named Sportsman of the Year by Sports Illustrated magazine.[citation needed] Six former Orioles players, including Ripken (2007), and two of the team's managers have been inducted into the Baseball Hall of Fame.
246
+
247
+ Since 1992, the Orioles' home ballpark has been Oriole Park at Camden Yards, which has been hailed as one of the league's best since it opened.[citation needed]
248
+
249
+ Prior to the 1950s, there had been several attempts at a professional football team in Baltimore, most of them minor league or semi-professional teams. The first major league to base a team in Baltimore was the All-America Football Conference (AAFC), which had a team named the Baltimore Colts. The AAFC Colts played for three seasons in the AAFC (1947, 1948, and 1949) and, when the AAFC folded following the 1949 season, moved to the NFL for a single year (1950) before going bankrupt. Three years later, the NFL's Dallas Texans would itself fold, and its assets and player contracts were purchased by an ownership group headed by Baltimore businessman Carroll Rosenbloom, who moved the team to Baltimore, establishing a new team also named the Baltimore Colts. During the 1950s and 1960s, the Colts were one of the NFL's more successful franchises, led by Pro Football Hall of Fame quarterback Johnny Unitas, who set a then-record of 47 consecutive games with a touchdown pass. The Colts advanced to the NFL Championship Game twice (1958 and 1959) and the Super Bowl twice (1969 and 1971), winning all except Super Bowl III in 1969. After the 1983 season, the team left Baltimore for Indianapolis in 1984, where they became the Indianapolis Colts.
250
+
251
+ The NFL returned to Baltimore when the former Cleveland Browns moved to Baltimore to become the Baltimore Ravens in 1996. Since then, the Ravens have won Super Bowl championships in 2000 and 2012, six AFC North division championships (2003, 2006, 2011, 2012, 2018, and 2019), and appeared in four AFC Championship Games (2000, 2008, 2011, and 2012).
252
+
253
+ Baltimore also hosted a Canadian Football League franchise, the Baltimore Stallions, for the 1994 and 1995 seasons. Following the 1995 season and the ultimate end of the CFL's United States expansion experiment, the team was sold and relocated to Montreal.
254
+
255
+ The first professional sports organization in the United States, the Maryland Jockey Club, was formed in Baltimore in 1743. The Preakness Stakes, the second race in the United States Triple Crown of Thoroughbred Racing, has been held every May at Pimlico Race Course in Baltimore since 1873.
256
+
257
+ College lacrosse is a common sport in the spring, as the Johns Hopkins Blue Jays men's lacrosse team has won 44 national championships, the most of any program in history. In addition, Loyola University won its first men's NCAA lacrosse championship in 2012.
258
+
259
+ The Baltimore Blast are a professional arena soccer team that play in the Major Arena Soccer League at the SECU Arena on the campus of Towson University. The Blast have won nine championships in various leagues, including the MASL. A previous entity of the Blast played in the Major Indoor Soccer League from 1980 to 1992, winning one championship.
260
+
261
+ FC Baltimore 1729 is a semi-professional soccer club playing in the National Premier Soccer League (NPSL), with the goal of bringing a community-oriented competitive soccer experience to the city of Baltimore. Their inaugural season started on May 11, 2018, and they play home games at CCBC Essex Field.
262
+
263
+ The Baltimore Blues are a semi-professional rugby league club which began competition in the USA Rugby League in 2012.[224] The Baltimore Bohemians are an American soccer club. They compete in the USL Premier Development League, the fourth tier of the American Soccer Pyramid. Their inaugural season started in the spring of 2012.
264
+
265
+ The Baltimore Grand Prix debuted along the streets of the Inner Harbor section of the city's downtown on September 2–4, 2011. The event played host to the American Le Mans Series on Saturday and the IndyCar Series on Sunday. Support races from smaller series were also held, including Indy Lights. After three consecutive years, on September 13, 2013, it was announced that the event would not be held in 2014 or 2015 due to scheduling conflicts.[225]
266
+
267
+ The athletic equipment company Under Armour is also based in Baltimore. Founded in 1996 by Kevin Plank, a University of Maryland alumnus, the company has its headquarters in Tide Point, adjacent to Fort McHenry and the Domino Sugar factory. The Baltimore Marathon is the flagship of several races held in the city. The marathon begins at the Camden Yards sports complex and travels through many diverse neighborhoods of Baltimore, including the scenic Inner Harbor waterfront area, historic Federal Hill, Fells Point, and Canton. The race then proceeds to other important focal points of the city such as Patterson Park, Clifton Park, Lake Montebello, the Charles Village neighborhood, and the western edge of downtown. After winding through 42.195 kilometres (26.219 mi) of Baltimore, the race ends at virtually the same point at which it starts.
268
+
269
+ The Baltimore Brigade was an Arena Football League team based in Baltimore that from 2017 to 2019 played at Royal Farms Arena in downtown Baltimore, near the Edward A. Garmatz Courthouse.
270
+
271
+ The City of Baltimore boasts over 4,900 acres (1,983 ha) of parkland.[226] The Baltimore City Department of Recreation and Parks manages the majority of parks and recreational facilities in the city including Patterson Park, Federal Hill Park, and Druid Hill Park.[227] The city is also home to Fort McHenry National Monument and Historic Shrine, a coastal star-shaped fort best known for its role in the War of 1812. As of 2015[update], The Trust for Public Land, a national land conservation organization, ranks Baltimore 40th among the 75 largest U.S. cities.[226]
272
+
273
+ Baltimore is an independent city, and not part of any county. For most governmental purposes under Maryland law, Baltimore City is treated as a county-level entity. The United States Census Bureau uses counties as the basic unit for presentation of statistical information in the United States, and treats Baltimore as a county equivalent for those purposes.
274
+
275
+ Baltimore has been a Democratic stronghold for over 150 years, with Democrats dominating every level of government. In virtually all elections, the Democratic primary is the real contest.[228] No Republican has been elected to the City Council since 1939, or as mayor since 1963.
276
+
277
+ The city hosted the first six Democratic National Conventions, from 1832 through 1852, and hosted the DNC again in 1860, 1872, and 1912.[229][230]
278
+
279
+ Jack Young is the current mayor of Baltimore. He took office on May 2, 2019, upon the resignation of Catherine Pugh. Prior to Pugh's official resignation, Young was the president of the Baltimore City Council and had been the acting mayor since April 2.[231]
280
+
281
+ Catherine Pugh became the Democratic nominee for mayor in 2016 and won the mayoral election in 2016 with 57.1% of the vote; Pugh took office as mayor on December 6, 2016.[232] Pugh took a leave of absence in April 2019 due to health concerns, then officially resigned from office on May 2.[233] The resignation coincided with a scandal over a "self-dealing" book-sales arrangement.[234]
282
+
283
+ Stephanie Rawlings-Blake assumed the office of Mayor on February 4, 2010, when predecessor Dixon's resignation became effective.[235] Rawlings-Blake had been serving as City Council President at the time. She was elected to a full term in 2011, defeating Pugh in the primary election and receiving 84% of the vote.[236]
284
+
285
+ Sheila Dixon became the first female mayor of Baltimore on January 17, 2007. As the former City Council President, she assumed the office of Mayor when former Mayor Martin O'Malley took office as Governor of Maryland.[237] On November 6, 2007, Dixon won the Baltimore mayoral election. Mayor Dixon's administration ended less than three years after her election, the result of a criminal investigation that began in 2006 while she was still City Council President. She was convicted on a single misdemeanor charge of embezzlement on December 1, 2009. A month later, Dixon made an Alford plea to a perjury charge and agreed to resign from office; Maryland, like most states, does not allow convicted felons to hold office.[238][239]
286
+
287
+ Grassroots pressure for reform, voiced as Question P, restructured the city council in November 2002, against the will of the mayor, the council president, and the majority of the council. A coalition of union and community groups, organized by the Association of Community Organizations for Reform Now (ACORN), backed the effort.[240]
288
+
289
+ The Baltimore City Council is now made up of 14 single-member districts and one elected at-large council president. Bernard C. "Jack" Young has been the council president since February 2010, when he was unanimously elected by the other council members to replace Stephanie Rawlings-Blake, who had become mayor.[241] Edward Reisinger, the 10th district representative, is the council's current vice president.[242]
290
+
291
+ The Baltimore City Police Department, founded in 1784 as a "Night City Watch" and day constable system, was reorganized as a city department in 1853 and again under State of Maryland supervision in 1859, with appointments made by the Governor of Maryland, following a disturbing period of civic and election violence and rioting in the later part of the decade. It is the current primary law enforcement agency serving the citizens of the City of Baltimore. Campus and building security for the city's public schools is provided by the Baltimore City Public Schools Police, established in the 1970s.
292
+
293
+ In the period of 2011–2015, 120 lawsuits were brought against Baltimore police for alleged brutality and misconduct. The Freddie Gray settlement of $6.4 million exceeds the combined total settlements of the 120 lawsuits, as state law caps such payments.[243]
294
+
295
+ The Maryland Transportation Authority Police, under the Maryland Department of Transportation (originally established as the "Baltimore Harbor Tunnel Police" when the tunnel opened in 1957), is the primary law enforcement agency on the Fort McHenry Tunnel Thruway (Interstate 95) and the Baltimore Harbor Tunnel Thruway (Interstate 895), which pass under the Northwest Branch of the Patapsco River; on Interstate 395, which has three ramp bridges crossing the Middle Branch of the Patapsco River under MdTA jurisdiction; and at Baltimore-Washington International Airport (BWI). The MdTA Police also have limited concurrent jurisdiction with the Baltimore City Police Department under a "memorandum of understanding".
296
+
297
+ Law enforcement on the fleet of transit buses and transit rail systems serving Baltimore is the responsibility of the Maryland Transit Administration Police, which is part of the Maryland Transit Administration of the state Department of Transportation. The MTA Police also share jurisdiction authority with the Baltimore City Police, governed by a memorandum of understanding.[244]
298
+
299
+ As the enforcement arm of the Baltimore circuit and district court system, the Baltimore City Sheriff's Office, created by state constitutional amendment in 1844, is responsible for the security of city courthouses and property, service of court-ordered writs, protective and peace orders, warrants, tax levies, prisoner transportation and traffic enforcement. Deputy Sheriffs are sworn law enforcement officials, with full arrest authority granted by the constitution of Maryland, the Maryland Police and Correctional Training Commission and the Sheriff of the City of Baltimore.[245]
300
+
301
+ The United States Coast Guard has operated a shipyard and facility since 1899 at Arundel Cove on Curtis Creek (off Pennington Avenue, extending to Hawkins Point Road/Fort Smallwood Road) in the Curtis Bay section of southern Baltimore City and adjacent northern Anne Arundel County. The U.S.C.G. also operates and maintains a presence on Baltimore and Maryland waterways in the Patapsco River and Chesapeake Bay. "Sector Baltimore" is responsible for commanding law enforcement and search and rescue units as well as aids to navigation.
302
+
303
+ The city of Baltimore is protected by the over 1,800 professional firefighters of the Baltimore City Fire Department (BCFD), which was founded in December 1858 and began operating the following year. It replaced several rival independent volunteer companies that had served since the 1770s; coming after the confusion resulting from a riot involving the "Know-Nothing" political party two years earlier, the establishment of a unified professional firefighting force was a major advance in urban governance. The BCFD operates out of 37 fire stations located throughout the city and has a long history and set of traditions in its various houses and divisions.
304
+
305
+ Since the legislative redistricting in 2002, Baltimore has had six legislative districts located entirely within its boundaries, giving the city six seats in the 47-member Maryland Senate and 18 in the 141-member Maryland House of Delegates.[246][247] During the previous 10-year period, Baltimore had four legislative districts within the city limits, but four others overlapped the Baltimore County line.[248] As of January 2011[update], all of Baltimore's state senators and delegates were Democrats.[246] The next redistricting plan is expected to take effect in time for Maryland's 2012 congressional primary election on February 14, 2012.[249]
306
+
307
+ Three of the state's eight congressional districts include portions of Baltimore: the 2nd, represented by Dutch Ruppersberger; the 3rd, represented by John Sarbanes; and the 7th, represented by Kweisi Mfume. All three are Democrats; a Republican has not represented a significant portion of Baltimore in Congress since John Boynton Philip Clayton Hill represented the 3rd District in 1927, and has not represented any of Baltimore since the Eastern Shore-based 1st District lost its share of Baltimore after the 2000 census; it was represented by Republican Wayne Gilchrest at the time.
308
+
309
+ Maryland's senior United States Senator, Ben Cardin, is from Baltimore. He is one of three people in the last four decades to have represented the 3rd District before being elected to the United States Senate. Paul Sarbanes represented the 3rd from 1971 until 1977, when he was elected to the first of five terms in the Senate. Sarbanes was succeeded by Barbara Mikulski, who represented the 3rd from 1977 to 1987. Mikulski was succeeded by Cardin, who held the seat until his own election to the Senate in 2007, when John Sarbanes succeeded him in the district.[250]
310
+
311
+ The Postal Service's Baltimore Main Post Office is located at 900 East Fayette Street in the Jonestown area.[252]
312
+
313
+ The national headquarters for the United States Social Security Administration is located in Woodlawn, just outside of Baltimore.
314
+
315
+ Baltimore is the home of numerous places of higher learning, both public and private. 100,000 college students from around the country attend Baltimore City's 12 accredited two-year or four-year colleges and universities.[253][254] Among them are:
316
+
317
+ The city's public schools are managed by Baltimore City Public Schools and include schools that have been well known in the area: Carver Vocational-Technical High School, the first African American vocational high school and center that was established in the state of Maryland; Digital Harbor High School, one of the secondary schools that emphasizes information technology; Lake Clifton Eastern High School, which has the largest school campus in Baltimore City by physical size; the historic Frederick Douglass High School, which is the second oldest African American high school in the United States;[256] Baltimore City College, the third oldest public high school in the country;[257] and Western High School, the oldest public all-girls school in the nation.[258] Baltimore City College (also known as "City") and Baltimore Polytechnic Institute (also known as "Poly") share the nation's second-oldest high school football rivalry.[259]
318
+
319
+ The city of Baltimore has a higher-than-average percentage of households without a car. In 2015, 30.7 percent of Baltimore households lacked a car, which decreased slightly to 28.9 percent in 2016. The national average was 8.7 percent in 2016. Baltimore averaged 1.65 cars per household in 2016, compared to a national average of 1.8.[260]
320
+
321
+ Baltimore's highway growth has done much to influence the development of the city and its suburbs. The first limited-access highway serving Baltimore was the Baltimore–Washington Parkway, which opened in stages between 1950 and 1954. Maintenance is split: the half closest to Baltimore is maintained by the state of Maryland, and the half closest to Washington by the National Park Service. Trucks are only permitted to use the northern part of the parkway. Trucks (tractor-trailers) continued to use U.S. Route 1 (US 1) until Interstate 95 (I-95) between Baltimore and Washington opened in 1971.
322
+
323
+ The Interstate highways serving Baltimore are I-70, I-83 (the Jones Falls Expressway), I-95, I-395, I-695 (the Baltimore Beltway), I-795 (the Northwest Expressway), I-895 (the Harbor Tunnel Thruway), and I-97. Because of freeway revolts in Baltimore, the city's mainline Interstate highways—I-95, I-83, and I-70—do not directly connect to each other, and I-70 ends at a park and ride lot just inside the city limits. These revolts, led primarily by Barbara Mikulski, who later became a United States senator for Maryland, resulted in the abandonment of the original plan. There are two tunnels traversing Baltimore Harbor within the city limits: the four-bore Fort McHenry Tunnel (opened in 1985 and serving I-95) and the two-bore Harbor Tunnel (opened in 1957 and serving I-895). The Baltimore Beltway crosses south of Baltimore Harbor over the Francis Scott Key Bridge.
324
+
325
+ The first interstate highway built in Baltimore was I-83, called the Jones Falls Expressway (its first portion was built in the early 1960s). Running from downtown toward the north-northwest, it was built through a natural corridor, which meant that no residents or housing were directly affected. A planned section from what is now its southern terminus to I-95 was abandoned. Its route through parkland received criticism.
326
+
327
+ Planning for the Baltimore Beltway antedates the creation of the Interstate Highway System. The first portion completed was a small strip connecting the two sections of I-83, the Baltimore-Harrisburg Expressway and the Jones Falls Expressway.
328
+
329
+ The only U.S. Highways in the city are US 1, which bypasses downtown, and US 40, which crosses downtown from east to west. Both run along major surface streets; however, US 40 utilizes a small section of freeway on the west side of the city that was originally intended for Interstate 170 but cancelled in the 1970s. State routes in the city also travel along surface streets, with the exception of Maryland Route 295, which carries the Baltimore–Washington Parkway.
330
+
331
+ The Baltimore City Department of Transportation (BCDOT) is responsible for several functions of the road transportation system in Baltimore, including repairing roads, sidewalks, and alleys; road signs; street lights; and managing the flow of transportation systems.[261] In addition, the agency is in charge of vehicle towing and traffic cameras.[262][263] BCDOT maintains all streets within the city of Baltimore. These include all streets that are marked as state and U.S. highways as well as the portions of I-83 and I-70 within the city limits. The only highways within the city that are not maintained by BCDOT are I-95, I-395, I-695, and I-895; those four highways are maintained by the Maryland Transportation Authority.[264]
332
+
333
+ Public transit in Baltimore is mostly provided by the Maryland Transit Administration (abbreviated "MTA Maryland") and Charm City Circulator. MTA Maryland operates a comprehensive bus network, including many local, express, and commuter buses, a light rail network connecting Hunt Valley in the north to BWI Airport and Cromwell (Glen Burnie) in the south, and a subway line between Owings Mills and Johns Hopkins Hospital.[265] A proposed rail line, known as the Red Line, which would link the Social Security Administration to Johns Hopkins Bayview Medical Center and perhaps the Canton and Dundalk communities, was cancelled as of June 2015[update] by Governor Larry Hogan; a proposal to extend Baltimore's existing subway line to Morgan State University, known as the Green Line, is in the planning stages.[266]
334
+
335
+ The Charm City Circulator (CCC), a shuttle bus service operated by Veolia Transportation for the Baltimore Department of Transportation, began operating in the downtown area in January 2010. Funded partly by a 16 percent increase in the city's parking fees, the circulator provides free bus service seven days a week, picking up passengers every 15 minutes at designated stops during service hours.[267][268]
336
+
337
+ The CCC's first bus line, the Orange route, travels between Hollins Market and Harbor East. Its Purple route, launched June 7, 2010, operates between Fort Avenue and 33rd St. The Green route runs between Johns Hopkins and City Hall.[268][269] The Charm City Circulator operates a fleet of diesel and hybrid vehicles built by DesignLine, Orion, and Van Hool.[267]
338
+
339
+ Baltimore also has a water taxi service, operated by Baltimore Water Taxi. The water taxi's six routes provide service throughout the city's harbor; the service was purchased by Under Armour CEO Kevin Plank's Sagamore Ventures in 2016.[270]
340
+
341
+ In June 2017, BaltimoreLink started operating; it is a redesign of the region's original bus system. BaltimoreLink runs through downtown Baltimore every 10 minutes via color-coded, high-frequency CityLink routes.[271]
342
+
343
+ Baltimore is a top destination for Amtrak along the Northeast Corridor. Baltimore's Penn Station is one of the busiest in the country. In FY 2014, Penn Station was ranked the seventh-busiest rail station in the United States by number of passengers served each year.[272] The building sits on a raised "island" of sorts between two open trenches, one for the Jones Falls Expressway and the other for the tracks of the Northeast Corridor (NEC). The NEC approaches from the south through the two-track, 7,660 feet (2,330 m) Baltimore and Potomac Tunnel, which opened in 1873 and whose 30 mph (50 km/h) limit, sharp curves, and steep grades make it one of the NEC's worst bottlenecks. The NEC's northern approach is the 1873 Union Tunnel, which has one single-track bore and one double-track bore.
344
+
345
+ Just outside the city, Baltimore/Washington International (BWI) Thurgood Marshall Airport Rail Station is another stop. Amtrak's Acela Express, Palmetto, Carolinian, Silver Star, Silver Meteor, Vermonter, Crescent, and Northeast Regional trains are the scheduled passenger train services that stop in the city. Additionally, MARC commuter rail service connects the city's two main intercity rail stations, Camden Station and Penn Station, with Washington, D.C.'s Union Station as well as stops in between. MARC consists of three lines: the Brunswick, the Camden, and the Penn. On December 7, 2013, the Penn Line began weekend service.[273]
346
+
347
+ Baltimore is served by two airports, both operated by the Maryland Aviation Administration, which is part of the Maryland Department of Transportation.[274] Baltimore–Washington International Thurgood Marshall Airport, generally known as "BWI," lies about 10 miles (16 km) to the south of Baltimore in neighboring Anne Arundel County. The airport is named after Thurgood Marshall, a Baltimore native who was the first African American to serve on the Supreme Court of the United States. In terms of passenger traffic, BWI is the 22nd busiest airport in the United States.[275] As of calendar year 2014, BWI is the largest, by passenger count, of three major airports serving the Baltimore–Washington Metropolitan Area. It is accessible by I-95 and the Baltimore–Washington Parkway via Interstate 195, the Baltimore Light Rail, and Amtrak and MARC Train at BWI Rail Station.
348
+
349
+ Baltimore is also served by Martin State Airport, a general aviation facility, to the northeast in Baltimore County. Martin State Airport is linked to downtown Baltimore by Maryland Route 150 (Eastern Avenue) and by MARC Train at its own station.
350
+
351
+ Baltimore has a comprehensive system of bicycle routes in the city. These routes are not numbered, but are typically denoted with green signs displaying a silhouette of a bicycle upon an outline of the city's border, and denote the distance to destinations, much like bicycle routes in the rest of the state. The roads carrying bicycle routes are also labelled with either bike lanes, sharrows, or Share the Road signs. Many of these routes pass through the downtown area. The network of bicycle lanes in the city continues to expand, with over 140 miles (230 km) added between 2006 and 2014.[276] Alongside bike lanes, Baltimore has also built bike boulevards, starting with Guilford Avenue in 2012.
352
+
353
+ Baltimore has three major trail systems within the city. The Gwynns Falls Trail runs from the Inner Harbor to the I-70 Park and Ride, passing through Gwynns Falls Park, and has numerous branches. There are also many pedestrian hiking trails traversing the park. The Jones Falls Trail currently runs from the Inner Harbor to the Cylburn Arboretum; however, it is undergoing expansion. Long-term plans call for it to extend to the Mount Washington Light Rail Stop, and possibly as far north as the Falls Road stop to connect to the Robert E. Lee boardwalk north of the city. It will also incorporate a spur alongside Western Run. The two aforementioned trails carry sections of the East Coast Greenway through the city. There is also the Herring Run Trail, which runs from Harford Road east to its end beyond Sinclair Lane, utilizing Herring Run Park; long-term plans also call for its extension to Morgan State University and north to points beyond. Other major bicycle projects include a protected cycle track installed on both Maryland Avenue and Mount Royal Avenue, expected to become the backbone of a downtown bicycle network. Installation of the cycle tracks is expected in 2014 and 2016, respectively.
354
+
355
+ In addition to the bicycle trails and cycle tracks, Baltimore has the Stony Run Trail, a walking path that will eventually connect from the Jones Falls north to Northern Parkway, utilizing much of the old Ma and Pa Railroad corridor inside the city. In 2011, the city undertook a campaign to reconstruct many sidewalk ramps in the city, coinciding with mass resurfacing of the city's streets. A 2011 study by Walk Score ranked Baltimore the 14th most walkable of the fifty largest U.S. cities.[277]
356
+
357
+ The port was founded in 1706, preceding the founding of Baltimore. The Maryland colonial legislature designated the area near Locust Point as the port of entry for the tobacco trade with England. Fells Point, the deepest point in the natural harbor, soon became the colony's main shipbuilding center, later becoming a leader in the construction of clipper ships.[278]
358
+
359
+ After Baltimore's founding, mills were built behind the wharves. The California Gold Rush led to many orders for fast vessels; many overland pioneers also relied upon canned goods from Baltimore. After the Civil War, a coffee ship was designed here for trade with Brazil. At the end of the nineteenth century, European ship lines had terminals for immigrants. The Baltimore and Ohio Railroad made the port a major transshipment point.[279]:17,75 Currently the port has major roll-on/roll-off facilities, as well as bulk facilities, especially steel handling.[280]
360
+
361
+ Water taxis also operate in the Inner Harbor. Governor Ehrlich participated in naming the port after Helen Delich Bentley during the 300th anniversary of the port.[281]
362
+
363
+ In 2007, Duke Realty Corporation began a new development near the Port of Baltimore, named the Chesapeake Commerce Center. This new industrial park is located on the site of a former General Motors plant. The total project comprises 184 acres (0.74 km2) in eastern Baltimore City, and the site will yield 2,800,000 square feet (260,000 m2) of warehouse/distribution and office space. Chesapeake Commerce Center has direct access to two major Interstate highways (I-95 and I-895) and is located adjacent to two of the major Port of Baltimore terminals. The Port of Baltimore is one of two seaports on the U.S. East Coast with a 50-foot (15 m) dredge to accommodate the largest shipping vessels.[282]
364
+
365
+ Along with cargo terminals, the port also has a passenger cruise terminal, which offers year-round trips on several lines, including Royal Caribbean's Grandeur of the Seas and Carnival's Pride. Overall, five cruise lines have operated out of the port to the Bahamas and the Caribbean, while some ships have traveled to New England and Canada. The terminal has become an embarkation point where passengers can park and board next to the ship, which is visible from Interstate 95.[283] Passengers from Pennsylvania, New York and New Jersey make up a third of the volume, with travelers from Maryland, Virginia, the District and even Ohio and the Carolinas making up the rest.[284]
366
+
367
+ Baltimore's Inner Harbor, known for its skyline waterscape and its tourist-friendly areas, had become heavily polluted. The waterway was often filled with garbage after heavy rainstorms, and it failed its 2014 water quality report card. The Waterfront Partnership of Baltimore took steps to remediate the waterways, in hopes that the harbor would be fishable and swimmable once again.
368
+
369
+ Installed in May 2014, the water wheel trash interceptor known as Mr. Trash Wheel sits at the mouth of the Jones Falls River in Baltimore's Inner Harbor. A February 2015 agreement with a local waste-to-energy plant is believed to make Baltimore the first city to use reclaimed waterway debris to generate electricity.[285]
370
+
371
+ Mr. Trash Wheel is a permanent water wheel trash interceptor to clean up the city's polluted Inner Harbor.[286] The Jones Falls river watershed drains 58 square miles (150 km2) of land outside of Baltimore and is a significant source of trash that enters the harbor. Garbage collected by Mr. Trash Wheel could come from anywhere in the Jones Falls Watershed area.[287] The wheel moves continuously, removing garbage and dumping it into an attached dumpster using only hydro and solar renewable power to keep its wheel turning. It has the capability to collect 50,000 pounds (22,700 kg) of trash per day, and has removed more than 350 tons of litter from Baltimore's landmark and tourist attraction in its first 18 months, estimated as consisting of approximately 200,000 bottles, 173,000 potato chip bags and 6.7 million cigarette butts.[288][289] The Water Wheel has been very successful at trash removal, visibly decreasing the amount of garbage that collects in the harbor, especially after a rainfall.
372
+
373
+ After the success of Mr. Trash Wheel, the Waterfront Partnership raised money to build a second Water Wheel at the end of Harris Creek, an entirely piped stream that flows beneath Baltimore's Canton neighborhood and empties into the Baltimore Harbor. Harris Creek is known to carry tons of trash every year.[290][291][292] The planned new Water Wheel was inaugurated in December 2016, and dubbed "Professor Trash Wheel".[293] Professor Trash Wheel prevents waste from exiting the Harbor and accessing the Chesapeake Bay and Atlantic Ocean. A number of additional projects are going on in Baltimore City and County that should result in better water quality scores. These projects include the Blue Alleys project, expanded street sweeping, and stream restoration.[286]
374
+
375
+ In August 2010, the National Aquarium assembled, planted, and launched a floating wetland island designed by Biohabitats in Baltimore's Inner Harbor.[294] Hundreds of years ago Baltimore's harbor shoreline would have been lined with tidal wetlands. Floating wetlands provide many environmental benefits to water quality and habitat enhancement, which is why the Waterfront Partnership of Baltimore has included them in their Healthy Harbor Initiative pilot projects.[295] Biohabitats also developed a concept to transform a dilapidated wharf into a living pier that cleans Harbor water, provides habitat and is an aesthetic attraction. Currently under design, the top of the pier will become a constructed tidal wetland.[296]
376
+
377
+ Baltimore's main newspaper is The Baltimore Sun. It was sold by its Baltimore owners in 1986 to the Times Mirror Company,[297] which was bought by the Tribune Company in 2000.[298] The Baltimore News-American, another long-running paper that competed with the Sun, ceased publication in 1986.[299]
378
+
379
+ The city is home to the Baltimore Afro-American, an influential African American newspaper founded in 1892.[300][301]
380
+
381
+ In 2006, The Baltimore Examiner was launched to compete with The Sun. It was part of a national chain that includes The San Francisco Examiner and The Washington Examiner. In contrast to the paid subscription Sun, The Examiner was a free newspaper funded solely by advertisements. Unable to turn a profit and facing a deep recession, The Baltimore Examiner ceased publication on February 15, 2009.[citation needed]
382
+
383
+ Despite being located 40 miles northeast of Washington, D.C., Baltimore is a major media market in its own right, with all major English language television networks represented in the city. WJZ-TV 13 is a CBS owned and operated station, and WBFF 45 is the flagship of Sinclair Broadcast Group, the largest station owner in the country. Other major television stations in Baltimore include WMAR-TV 2 (ABC), WBAL-TV 11 (NBC), WUTB 24 (MyNetworkTV), WNUV 54 (CW), and WMPB 67 (PBS).
384
+
385
+ Nielsen ranked Baltimore as the 26th-largest television market for the 2008–2009 viewing season and the 27th-largest for 2009–2010.[302] Arbitron's Fall 2010 rankings identified Baltimore as the 22nd largest radio market.[303]
386
+
387
+ Baltimore has ten sister cities, as designated by Sister Cities International:[304][305]
388
+
389
+ Baltimore's own Sister City Committees recognize eight of these sister cities, indicated above with a "B" notation.[306]
390
+
391
+ Three additional sister cities have "emeritus status":[304]
en/5350.html.txt ADDED
@@ -0,0 +1,195 @@
1
+ Sensation is the physical process during which sensory systems respond to stimuli and provide data for perception.[1] A sense is any of the systems involved in sensation. During sensation, sense organs engage in stimulus collection and transduction.[2] Sensation is often differentiated from the related and dependent concept of perception, which processes and integrates sensory information in order to give meaning to and understand detected stimuli, giving rise to subjective perceptual experience, or qualia.[3] Sensation and perception are central to and precede almost all aspects of cognition, behavior and thought.[1]
2
+
3
+ In organisms, a sensory organ consists of a group of related sensory cells that respond to a specific type of physical stimulus. Via cranial and spinal nerves, the different types of sensory receptor cells (mechanoreceptors, photoreceptors, chemoreceptors, thermoreceptors) in sensory organs transduce sensory information from sensory organs towards the central nervous system, to the sensory cortices in the brain, where sensory signals are further processed and interpreted (perceived).[1][4][5] Sensory systems, or senses, are often divided into external (exteroception) and internal (interoception) sensory systems.[6][7] Sensory modalities or submodalities refer to the way sensory information is encoded or transduced.[4] Multimodality integrates different senses into one unified perceptual experience. For example, information from one sense has the potential to influence how information from another is perceived.[2] Sensation and perception are studied by a variety of related fields, most notably psychophysics, neurobiology, cognitive psychology, and cognitive science.[1]
4
+
5
+ Humans have a multitude of sensory systems. Human external sensation is based on the sensory organs of the eyes, ears, skin, inner ear, nose, and mouth. The corresponding sensory systems of the visual system (sense of vision), auditory system (sense of hearing), somatosensory system (sense of touch), vestibular system (sense of balance), olfactory system (sense of smell), and gustatory system (sense of taste) contribute, respectively, to the perceptions of vision, hearing, touch, spatial orientation, smell, and taste (flavor).[2][1] Internal sensation, or interoception, detects stimuli from internal organs and tissues. Many internal sensory and perceptual systems exist in humans, including proprioception (body position) and nociception (pain). Further internal chemoreception and osmoreception based sensory systems lead to various perceptions, such as hunger, thirst, suffocation, and nausea, or different involuntary behaviors, such as vomiting.[6][7][8]
6
+
7
+ Nonhuman animals experience sensation and perception, with varying levels of similarity to and difference from humans and other animal species. For example, mammals, in general, have a stronger sense of smell than humans. Some animal species lack one or more human sensory system analogues, some have sensory systems that are not found in humans, while others process and interpret the same sensory information in very different ways. For example, some animals are able to detect electrical[9] and magnetic fields,[10] air moisture,[11] or polarized light,[12] while others sense and perceive through alternative systems, such as echolocation.[13][14] Recently, it has been suggested that plants and artificial agents may be able to detect and interpret environmental information in an analogous manner to animals.[15][16][17]
8
+
9
+ Sensory modality refers to the way that information is encoded, which is similar to the idea of transduction. The main sensory modalities can be described on the basis of how each is transduced. Listing all the different sensory modalities, which can number as many as 17, involves separating the major senses into more specific categories, or submodalities, of the larger sense. An individual sensory modality represents the sensation of a specific type of stimulus. For example, the general sensation and perception of touch, which is known as somatosensation, can be separated into light pressure, deep pressure, vibration, itch, pain, temperature, or hair movement, while the general sensation and perception of taste can be separated into submodalities of sweet, salty, sour, bitter, spicy, and umami, all of which are based on different chemicals binding to sensory neurons.[4]
10
+
11
+ Sensory receptors are the cells or structures that detect sensations. Stimuli in the environment activate specialized receptor cells in the peripheral nervous system. During transduction, physical stimulus is converted into action potential by receptors and transmitted towards the central nervous system for processing.[5] Different types of stimuli are sensed by different types of receptor cells. Receptor cells can be classified into types on the basis of three different criteria: cell type, position, and function. Receptors can be classified structurally on the basis of cell type and their position in relation to stimuli they sense. Receptors can further be classified functionally on the basis of the transduction of stimuli, or how the mechanical stimulus, light, or chemical changed the cell membrane potential.[4]
12
+
13
+ One way to classify receptors is based on their location relative to the stimuli. An exteroceptor is a receptor that is located near a stimulus of the external environment, such as the somatosensory receptors that are located in the skin. An interoceptor is one that interprets stimuli from internal organs and tissues, such as the receptors that sense the increase in blood pressure in the aorta or carotid sinus.[4]
14
+
15
+ The cells that interpret information about the environment can be either (1) a neuron that has a free nerve ending, with dendrites embedded in tissue that would receive a sensation; (2) a neuron that has an encapsulated ending in which the sensory nerve endings are encapsulated in connective tissue that enhances their sensitivity; or (3) a specialized receptor cell, which has distinct structural components that interpret a specific type of stimulus. The pain and temperature receptors in the dermis of the skin are examples of neurons that have free nerve endings (1). Also located in the dermis of the skin are lamellated corpuscles, neurons with encapsulated nerve endings that respond to pressure and touch (2). The cells in the retina that respond to light stimuli are an example of a specialized receptor (3), a photoreceptor.[4]
16
+
17
+ A transmembrane protein receptor is a protein in the cell membrane that mediates a physiological change in a neuron, most often through the opening of ion channels or changes in the cell signaling processes. Transmembrane receptors are activated by chemicals called ligands. For example, a molecule in food can serve as a ligand for taste receptors. Other transmembrane proteins, which are not accurately called receptors, are sensitive to mechanical or thermal changes. Physical changes in these proteins increase ion flow across the membrane, and can generate an action potential or a graded potential in the sensory neurons.[4]
18
+
19
+ A third classification of receptors is by how the receptor transduces stimuli into membrane potential changes. Stimuli are of three general types. Some stimuli are ions and macromolecules that affect transmembrane receptor proteins when these chemicals diffuse across the cell membrane. Some stimuli are physical variations in the environment that affect receptor cell membrane potentials. Other stimuli include the electromagnetic radiation from visible light. For humans, the only electromagnetic energy that is perceived by our eyes is visible light. Some other organisms have receptors that humans lack, such as the heat sensors of snakes, the ultraviolet light sensors of bees, or magnetic receptors in migratory birds.[4]
20
+
21
+ Receptor cells can be further categorized on the basis of the type of stimuli they transduce. The different types of functional receptor cell types are mechanoreceptors, photoreceptors, chemoreceptors (osmoreceptors), thermoreceptors, and nociceptors. Physical stimuli, such as pressure and vibration, as well as the sensation of sound and body position (balance), are interpreted through a mechanoreceptor. Photoreceptors convert light (visible electromagnetic radiation) into signals. Chemical stimuli, such as an object's taste or smell, are interpreted by chemoreceptors, while osmoreceptors respond to the solute concentrations of body fluids. Nociception (pain) interprets the presence of tissue damage, using sensory information from mechano-, chemo-, and thermoreceptors.[18] Another physical stimulus that has its own type of receptor is temperature, which is sensed through a thermoreceptor that is either sensitive to temperatures above (heat) or below (cold) normal body temperature.[4]
22
+
23
+ Each sense organ (eyes or nose, for instance) requires a minimal amount of stimulation in order to detect a stimulus. This minimum amount of stimulus is called the absolute threshold.[2] The absolute threshold is defined as the minimum amount of stimulation necessary for the detection of a stimulus 50% of the time.[1] Absolute threshold is measured by using a method called signal detection. This process involves presenting stimuli of varying intensities to a subject in order to determine the level at which the subject can reliably detect stimulation in a given sense.[2]
24
+
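+ As a rough illustration of the threshold idea above (a sketch in Python, not from the article; the logistic shape, threshold of 10.0, and slope of 1.5 are assumed for demonstration), the absolute threshold can be read off a simulated psychometric function as the intensity detected 50% of the time:
+
+ import numpy as np
+
+ def detection_probability(intensity, threshold=10.0, slope=1.5):
+     # Logistic psychometric function: detection probability rises with intensity.
+     return 1.0 / (1.0 + np.exp(-slope * (intensity - threshold)))
+
+ intensities = np.linspace(0, 20, 201)
+ p_detect = detection_probability(intensities)
+ # The absolute threshold is the intensity detected closest to 50% of the time;
+ # here the search recovers the assumed value of 10.0.
+ absolute_threshold = intensities[np.argmin(np.abs(p_detect - 0.5))]
+ print(absolute_threshold)
+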
25
+ Differential threshold or just noticeable difference (JND) is the smallest detectable difference between two stimuli, or the smallest difference in stimuli that can be judged to be different from each other.[1] Weber's Law is an empirical law that states that the difference threshold is a constant fraction of the comparison stimulus.[1] According to Weber's Law, bigger stimuli require larger differences to be noticed.[2]
26
+
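+ Weber's Law is commonly formalized as follows (a standard textbook statement; the example Weber fraction of 0.05 is illustrative, not a value given in this article):
+
+ \[ \frac{\Delta I}{I} = k \]
+
+ Here I is the intensity of the comparison stimulus, \Delta I is the just noticeable difference, and k is the Weber fraction for a given sensory modality. For example, if k = 0.05, roughly 5 g must be added to a 100 g weight to be noticed, but roughly 10 g must be added to a 200 g weight.
+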
27
+ Magnitude estimation is a psychophysical method in which subjects assign perceived values of given stimuli. The relationship between stimulus intensity and perceived intensity is described by Stevens' power law.[1]
28
+
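+ Stevens' power law is usually written as follows (the standard form; the remark about exponents is a general psychophysical observation rather than a claim from this article):
+
+ \[ \psi(I) = k \, I^{a} \]
+
+ Here \psi(I) is the perceived magnitude, I is the stimulus intensity, k is a scaling constant, and a is a modality-dependent exponent; exponents below 1 compress perceived intensity, while exponents above 1 expand it.
+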
29
+ Signal detection theory quantifies the experience of the subject to the presentation of a stimulus in the presence of noise. There is internal noise and there is external noise when it comes to signal detection. The internal noise originates from static in the nervous system. For example, an individual with closed eyes in a dark room still sees something - a blotchy pattern of grey with intermittent brighter flashes; this is internal noise. External noise is the result of noise in the environment that can interfere with the detection of the stimulus of interest. Noise is only a problem if the magnitude of the noise is large enough to interfere with signal collection. The nervous system calculates a criterion, or an internal threshold, for the detection of a signal in the presence of noise. If a signal is judged to be above the criterion, and is thus differentiated from the noise, the signal is sensed and perceived. Errors in signal detection can potentially lead to false positives and false negatives. The sensory criterion might be shifted based on the importance of detecting the signal. Shifting of the criterion may influence the likelihood of false positives and false negatives.[1]
30
+
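+ A minimal computational sketch of these ideas (illustrative only, not from the article; the hit and false-alarm rates below are hypothetical) computes the standard signal detection measures of sensitivity (d') and criterion:
+
+ from scipy.stats import norm
+
+ hit_rate = 0.85          # P(respond "yes" | signal present); hypothetical
+ false_alarm_rate = 0.20  # P(respond "yes" | noise alone); hypothetical
+
+ z_hit = norm.ppf(hit_rate)   # inverse standard normal CDF (z-score)
+ z_fa = norm.ppf(false_alarm_rate)
+
+ d_prime = z_hit - z_fa              # sensitivity: separation of signal from noise
+ criterion = -0.5 * (z_hit + z_fa)   # response bias: positive values are conservative
+ print(d_prime, criterion)
+
+ Shifting the criterion in this model trades false positives against false negatives, matching the description above.
+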
31
+ Subjective visual and auditory experiences appear to be similar across human subjects. The same cannot be said about taste. For example, there is a molecule called propylthiouracil (PROP) that some humans experience as bitter, some as almost tasteless, while others experience it as somewhere between tasteless and bitter. There is a genetic basis for this difference in perception given the same sensory stimulus. This subjective difference in taste perception has implications for individuals' food preferences, and consequently, health.[1]
32
+
33
+ When a stimulus is constant and unchanging, perceptual sensory adaptation occurs. During this process, the subject becomes less sensitive to the stimulus.[2]
34
+
35
+ Biological auditory (hearing), vestibular and spatial, and visual systems (vision) appear to break down real-world complex stimuli into sine wave components, through the mathematical process called Fourier analysis. Many neurons have a strong preference for certain sine frequency components in contrast to others. The way that simpler sounds and images are encoded during sensation can provide insight into how perception of real-world objects happens.[1]
36
+
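+ The following Python sketch illustrates this decomposition idea (a loose analogy, not a model of any actual neural circuit; the signal and frequencies are made up): a compound signal built from two sine waves is analyzed with a discrete Fourier transform, recovering its component frequencies:
+
+ import numpy as np
+
+ sample_rate = 1000                      # samples per second
+ t = np.arange(0, 1.0, 1.0 / sample_rate)
+ # Hypothetical compound stimulus: 5 Hz and 40 Hz sine components mixed.
+ signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
+
+ spectrum = np.abs(np.fft.rfft(signal))  # magnitude of each sine component
+ freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
+
+ # The two largest peaks recover the component frequencies (5 Hz and 40 Hz).
+ peaks = freqs[np.argsort(spectrum)[-2:]]
+ print(sorted(peaks))
+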
37
+ Perception occurs when nerves that lead from the sensory organs (e.g. eye) to the brain are stimulated, even if that stimulation is unrelated to the target signal of the sensory organ. For example, in the case of the eye, it does not matter whether light or something else stimulates the optic nerve; that stimulation will result in visual perception, even if there was no visual stimulus to begin with. (To prove this point to yourself (and if you are a human), close your eyes (preferably in a dark room) and press gently on the outside corner of one eye through the eyelid. You will see a visual spot toward the inside of your visual field, near your nose.)[1]
38
+
39
+ All stimuli received by the receptors are transduced to an action potential, which is carried along one or more afferent neurons towards a specific area (cortex) of the brain. Just as different nerves are dedicated to sensory and motor tasks, different areas of the brain (cortices) are similarly dedicated to different sensory and perceptual tasks. More complex processing is accomplished across cortical regions that spread beyond the primary cortices. Every nerve, sensory or motor, has its own signal transmission speed. For example, nerves in the frog's legs have a 90 ft/s (99 km/h) signal transmission speed, while sensory nerves in humans transmit sensory information at speeds between 165 ft/s (181 km/h) and 330 ft/s (362 km/h).[1]
40
+
41
+ Perceptual experience is often multimodal. Multimodality integrates different senses into one unified perceptual experience. Information from one sense has the potential to influence how information from another is perceived.[2] Multimodal perception is qualitatively different from unimodal perception. There has been a growing body of evidence since the mid-1990s on the neural correlates of multimodal perception.[20]
42
+
43
+ Historical inquiries into the underlying mechanisms of sensation and perception have led early researchers to subscribe to various philosophical interpretations of perception and the mind, including panpsychism, dualism, and materialism. The majority of modern scientists who study sensation and perception take a materialistic view of the mind.[1]
44
+
45
+ Some examples of human absolute thresholds for the 9-21 external senses.[21]
46
+
47
+ Humans respond more strongly to multimodal stimuli compared to the sum of each single modality together, an effect called the superadditive effect of multisensory integration.[2] Neurons that respond to both visual and auditory stimuli have been identified in the superior temporal sulcus.[20] Additionally, multimodal “what” and “where” pathways have been proposed for auditory and tactile stimuli.[22]
48
+
49
+ External receptors that respond to stimuli from outside the body are called exteroceptors.[23] Human external sensation is based on the sensory organs of the eyes, ears, skin, vestibular system, nose, and mouth, which contribute, respectively, to the sensory perceptions of vision, hearing, touch, spatial orientation, smell, and taste. Smell and taste are both responsible for identifying molecules and thus both are types of chemoreceptors. Both olfaction (smell) and gustation (taste) require the transduction of chemical stimuli into electrical potentials.[2][1]
50
+
51
+ The visual system, or sense of sight, is based on the transduction of light stimuli received through the eyes and contributes to visual perception. The visual system detects light on photoreceptors in the retina of each eye that generates electrical nerve impulses for the perception of varying colors and brightness. There are two types of photoreceptors: rods and cones. Rods are very sensitive to light but do not distinguish colors. Cones distinguish colors but are less sensitive to dim light.[4]
52
+
53
+ At the molecular level, visual stimuli cause changes in the photopigment molecule that lead to changes in membrane potential of the photoreceptor cell. A single unit of light is called a photon, which is described in physics as a packet of energy with properties of both a particle and a wave. The energy of a photon is represented by its wavelength, with each wavelength of visible light corresponding to a particular color. Visible light is electromagnetic radiation with a wavelength between 380 and 720 nm. Wavelengths of electromagnetic radiation longer than 720 nm fall into the infrared range, whereas wavelengths shorter than 380 nm fall into the ultraviolet range. Light with a wavelength of 380 nm is blue whereas light with a wavelength of 720 nm is dark red. All other colors fall between red and blue at various points along the wavelength scale.[4]
54
+
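+ A minimal sketch of the ranges just described (the function name and structure are illustrative, not from the article):
+
+ def classify_wavelength(nm):
+     # Classify electromagnetic radiation by wavelength in nanometres,
+     # using the 380-720 nm visible range given above.
+     if nm < 380:
+         return "ultraviolet"    # too short for human photoreceptors
+     if nm <= 720:
+         return "visible light"  # 380 nm reads as blue, 720 nm as dark red
+     return "infrared"           # too long for human photoreceptors
+
+ print(classify_wavelength(450))  # -> visible light
+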
55
+ The three types of cone opsins, being sensitive to different wavelengths of light, provide us with color vision. By comparing the activity of the three different cones, the brain can extract color information from visual stimuli. For example, a bright blue light that has a wavelength of approximately 450 nm would activate the “red” cones minimally, the “green” cones marginally, and the “blue” cones predominantly. The relative activation of the three different cones is calculated by the brain, which perceives the color as blue. However, cones cannot react to low-intensity light, and rods do not sense the color of light. Therefore, our low-light vision is—in essence—in grayscale. In other words, in a dark room, everything appears as a shade of gray. If you think that you can see colors in the dark, it is most likely because your brain knows what color something is and is relying on that memory.[4]
56
+
57
+ There is some disagreement as to whether the visual system consists of one, two, or three submodalities. Neuroanatomists generally regard it as two submodalities, given that different receptors are responsible for the perception of color and brightness. Some argue[citation needed] that stereopsis, the perception of depth using both eyes, also constitutes a sense, but it is generally regarded as a cognitive (that is, post-sensory) function of the visual cortex of the brain where patterns and objects in images are recognized and interpreted based on previously learned information. This is called visual memory.
58
+
59
+ The inability to see is called blindness. Blindness may result from damage to the eyeball, especially to the retina, damage to the optic nerve that connects each eye to the brain, and/or from stroke (infarcts in the brain). Temporary or permanent blindness can be caused by poisons or medications. People who are blind from degradation or damage to the visual cortex, but still have functional eyes, are actually capable of some level of vision and reaction to visual stimuli but not a conscious perception; this is known as blindsight. People with blindsight are usually not aware that they are reacting to visual sources, and instead just unconsciously adapt their behavior to the stimulus.
60
+
61
+ On February 14, 2013, researchers developed a neural implant that gives rats the ability to sense infrared light, which for the first time provides living creatures with new abilities, instead of simply replacing or augmenting existing abilities.[24]
62
+
63
+ Visual Perception in Psychology
64
+
65
+ According to Gestalt Psychology, people perceive the whole of something even if it is not there. The Gestalt Law of Organization identifies seven factors that help to group what is seen into patterns or groups: Common Fate, Similarity, Proximity, Closure, Symmetry, Continuity, and Past Experience.[25]
66
+
67
+ The Law of Common Fate says that objects moving in the same direction are grouped together; people follow the trend of motion as the lines or dots flow.[26]
68
+
69
+ The Law of Similarity refers to the grouping of images or objects that are similar to each other in some aspect. This could be due to shade, colour, size, shape, or other qualities you could distinguish.[27]
70
+
71
+ The Law of Proximity states that our minds like to group based on how close objects are to each other. We may see 42 objects in a group, but we can also perceive three groups of two lines with seven objects in each line.[26]
72
+
73
+ The Law of Closure is the idea that we as humans still see a full picture even if there are gaps within that picture. There could be gaps or parts missing from a section of a shape, but we would still perceive the shape as whole.[27]
74
+
75
+ The Law of Symmetry refers to a person's preference to see symmetry around a central point. An example would be when we use parentheses in writing. We tend to perceive all of the words in the parentheses as one section instead of individual words within the parentheses.[27]
76
+
77
+ The Law of Continuity tells us that objects are grouped together by their elements and then perceived as a whole. This usually happens when we see overlapping objects. We will see the overlapping objects with no interruptions.[27]
78
+
79
+ The Law of Past Experience refers to the tendency humans have to categorize objects according to past experiences under certain circumstances. If two objects are usually perceived together or within close proximity of each other, the Law of Past Experience is usually seen.[26]
80
+
81
+ Hearing, or audition, is the transduction of sound waves into a neural signal that is made possible by the structures of the ear. The large, fleshy structure on the lateral aspect of the head is known as the auricle. At the end of the auditory canal is the tympanic membrane, or ear drum, which vibrates after it is struck by sound waves. The auricle, ear canal, and tympanic membrane are often referred to as the external ear. The middle ear consists of a space spanned by three small bones called the ossicles. The three ossicles are the malleus, incus, and stapes, which are Latin names that roughly translate to hammer, anvil, and stirrup. The malleus is attached to the tympanic membrane and articulates with the incus. The incus, in turn, articulates with the stapes. The stapes is then attached to the inner ear, where the sound waves will be transduced into a neural signal. The middle ear is connected to the pharynx through the Eustachian tube, which helps equilibrate air pressure across the tympanic membrane. The tube is normally closed but will pop open when the muscles of the pharynx contract during swallowing or yawning.[4]
82
+
83
+ Mechanoreceptors, located in the inner ear, turn motion into electrical nerve pulses. Since sound is vibration propagating through a medium such as air, the detection of these vibrations, that is, the sense of hearing, is a mechanical sense: the vibrations are mechanically conducted from the eardrum through a series of tiny bones to hair-like fibers in the inner ear, which detect mechanical motion of the fibers within a range of about 20 to 20,000 hertz,[28] with substantial variation between individuals. Hearing at high frequencies declines with an increase in age. Inability to hear is called deafness or hearing impairment. Sound can also be detected as vibrations conducted through the body by tactition. Lower frequencies that can be heard are detected this way. Some deaf people are able to determine the direction and location of vibrations picked up through the feet.[29]
84
+
85
+ Studies pertaining to audition increased in number toward the end of the nineteenth century. During this time, many laboratories in the United States began to create new models, diagrams, and instruments that all pertained to the ear.[30]
86
+
87
+ There is a branch of cognitive psychology dedicated strictly to audition, called auditory cognitive psychology. Its main aim is to understand why humans are able to use sound in thinking without actually saying it aloud.[31]
88
+
89
+ Related to auditory cognitive psychology is psychoacoustics, which is of particular interest to those who study music.[32] Haptics, a word used to refer to both taction and kinesthesia, has many parallels with psychoacoustics.[32] Most research in these two areas focuses on the instrument, the listener, and the player of the instrument.[32]
90
+
91
+ Somatosensation is considered a general sense, as opposed to the special senses discussed in this section. Somatosensation is the group of sensory modalities that are associated with touch and interoception. The modalities of somatosensation include pressure, vibration, light touch, tickle, itch, temperature, pain, and kinesthesia.[4] Somatosensation, also called tactition (adjectival form: tactile), is a perception resulting from activation of neural receptors, generally in the skin including hair follicles, but also in the tongue, throat, and mucosa. A variety of pressure receptors respond to variations in pressure (firm, brushing, sustained, etc.). The touch sense of itching caused by insect bites or allergies involves special itch-specific neurons in the skin and spinal cord.[33] The loss or impairment of the ability to feel anything touched is called tactile anesthesia. Paresthesia is a sensation of tingling, pricking, or numbness of the skin that may result from nerve damage and may be permanent or temporary.
92
+
93
+ Two types of somatosensory signals that are transduced by free nerve endings are pain and temperature. These two modalities use thermoreceptors and nociceptors to transduce temperature and pain stimuli, respectively. Temperature receptors are stimulated when local temperatures differ from body temperature. Some thermoreceptors are sensitive to just cold and others to just heat. Nociception is the sensation of potentially damaging stimuli. Mechanical, chemical, or thermal stimuli beyond a set threshold will elicit painful sensations. Stressed or damaged tissues release chemicals that activate receptor proteins in the nociceptors. For example, the sensation of heat associated with spicy foods involves capsaicin, the active molecule in hot peppers.[4]
94
+
95
+ Low frequency vibrations are sensed by mechanoreceptors called Merkel cells, also known as type I cutaneous mechanoreceptors. Merkel cells are located in the stratum basale of the epidermis. Deep pressure and vibration are transduced by lamellated (Pacinian) corpuscles, which are receptors with encapsulated endings found deep in the dermis, or subcutaneous tissue. Light touch is transduced by the encapsulated endings known as tactile (Meissner) corpuscles. Follicles are also wrapped in a plexus of nerve endings known as the hair follicle plexus. These nerve endings detect the movement of hair at the surface of the skin, such as when an insect may be walking along the skin. Stretching of the skin is transduced by stretch receptors known as bulbous corpuscles. Bulbous corpuscles are also known as Ruffini corpuscles, or type II cutaneous mechanoreceptors.[4]
96
+
97
+ The heat receptors are sensitive to infrared radiation and can occur in specialized organs, for instance in pit vipers. The thermoceptors in the skin are quite different from the homeostatic thermoceptors in the brain (hypothalamus), which provide feedback on internal body temperature.
98
+
99
+ The vestibular sense, or sense of balance (equilibrium), is the sense that contributes to the perception of balance (equilibrium), spatial orientation, direction, or acceleration (equilibrioception). Along with audition, the inner ear is responsible for encoding information about equilibrium. A similar mechanoreceptor—a hair cell with stereocilia—senses head position, head movement, and whether our bodies are in motion. These cells are located within the vestibule of the inner ear. Head position is sensed by the utricle and saccule, whereas head movement is sensed by the semicircular canals. The neural signals generated in the vestibular ganglion are transmitted through the vestibulocochlear nerve to the brain stem and cerebellum.[4]
100
+
101
+ The semicircular canals are three ring-like extensions of the vestibule. One is oriented in the horizontal plane, whereas the other two are oriented in the vertical plane. The anterior and posterior vertical canals are oriented at approximately 45 degrees relative to the sagittal plane. The base of each semicircular canal, where it meets with the vestibule, connects to an enlarged region known as the ampulla. The ampulla contains the hair cells that respond to rotational movement, such as turning the head while saying “no.” The stereocilia of these hair cells extend into the cupula, a membrane that attaches to the top of the ampulla. As the head rotates in a plane parallel to the semicircular canal, the fluid lags, deflecting the cupula in the direction opposite to the head movement. The semicircular canals contain several ampullae, with some oriented horizontally and others oriented vertically. By comparing the relative movements of both the horizontal and vertical ampullae, the vestibular system can detect the direction of most head movements within three-dimensional (3D) space.[4]
102
+
103
+ The vestibular nerve conducts information from sensory receptors in the three ampullae that sense motion of fluid in the three semicircular canals caused by three-dimensional rotation of the head. The vestibular nerve also conducts information from the utricle and the saccule, which contain hair-like sensory receptors that bend under the weight of otoliths (small crystals of calcium carbonate) that provide the inertia needed to detect head rotation, linear acceleration, and the direction of gravitational force.
104
+
105
+ The gustatory system or the sense of taste is the sensory system that is partially responsible for the perception of taste (flavor).[34] A few recognized submodalities exist within taste: sweet, salty, sour, bitter, and umami. Very recent research has suggested that there may also be a sixth taste submodality for fats, or lipids.[4] The sense of taste is often confused with the perception of flavor, which is the result of the multimodal integration of gustatory (taste) and olfactory (smell) sensations.[35]
106
+
107
+ Within the structure of the lingual papillae are taste buds that contain specialized gustatory receptor cells for the transduction of taste stimuli. These receptor cells are sensitive to the chemicals contained within foods that are ingested, and they release neurotransmitters based on the amount of the chemical in the food. Neurotransmitters from the gustatory cells can activate sensory neurons in the facial, glossopharyngeal, and vagus cranial nerves.[4]
108
+
109
+ Salty and sour taste submodalities are triggered by the cations Na+ and H+, respectively. The other taste modalities result from food molecules binding to a G protein–coupled receptor. A G protein signal transduction system ultimately leads to depolarization of the gustatory cell. The sweet taste is the sensitivity of gustatory cells to the presence of glucose (or sugar substitutes) dissolved in the saliva. Bitter taste is similar to sweet in that food molecules bind to G protein–coupled receptors. The taste known as umami is often referred to as the savory taste. Like sweet and bitter, it is based on the activation of G protein–coupled receptors by a specific molecule.[4]
110
+
111
+ Once the gustatory cells are activated by the taste molecules, they release neurotransmitters onto the dendrites of sensory neurons. These neurons are part of the facial and glossopharyngeal cranial nerves, as well as a component within the vagus nerve dedicated to the gag reflex. The facial nerve connects to taste buds in the anterior two thirds of the tongue. The glossopharyngeal nerve connects to taste buds in the posterior third of the tongue. The vagus nerve connects to taste buds in the extreme posterior of the tongue, verging on the pharynx, which are more sensitive to noxious stimuli such as bitterness.[4]
112
+
113
+ Flavor depends on odor, texture, and temperature as well as on taste. Humans receive tastes through sensory organs called taste buds, or gustatory calyculi, concentrated on the upper surface of the tongue. Other tastes such as calcium[36][37] and free fatty acids[38] may also be basic tastes but have yet to receive widespread acceptance. The inability to taste is called ageusia.
114
+
115
+ There is a rare phenomenon related to the gustatory sense called lexical-gustatory synesthesia, in which people can “taste” words.[39] Affected individuals have reported flavor sensations from foods they are not actually eating when they read, hear, or even imagine words. They have reported not only simple flavors, but textures, complex flavors, and temperatures as well.[40]
116
+
117
+ Like the sense of taste, the sense of smell, or the olfactory system, is also responsive to chemical stimuli.[4] Unlike taste, there are hundreds of olfactory receptors (388 according to one source), each binding to a particular molecular feature. Odor molecules possess a variety of features and, thus, excite specific receptors more or less strongly. This combination of excitatory signals from different receptors makes up what humans perceive as the molecule's smell.[41]
118
+
119
+ The olfactory receptor neurons are located in a small region within the superior nasal cavity. This region is referred to as the olfactory epithelium and contains bipolar sensory neurons. Each olfactory sensory neuron has dendrites that extend from the apical surface of the epithelium into the mucus lining the cavity. As airborne molecules are inhaled through the nose, they pass over the olfactory epithelial region and dissolve into the mucus. These odorant molecules bind to proteins that keep them dissolved in the mucus and help transport them to the olfactory dendrites. The odorant–protein complex binds to a receptor protein within the cell membrane of an olfactory dendrite. These receptors are G protein–coupled, and will produce a graded membrane potential in the olfactory neurons.[4]
120
+
121
+ In the brain, olfaction is processed by the olfactory cortex. Olfactory receptor neurons in the nose differ from most other neurons in that they die and regenerate on a regular basis. The inability to smell is called anosmia. Some neurons in the nose are specialized to detect pheromones.[42] Loss of the sense of smell can result in food tasting bland. A person with an impaired sense of smell may require additional spice and seasoning levels for food to be tasted. Anosmia may also be related to some presentations of mild depression, because the loss of enjoyment of food may lead to a general sense of despair. The ability of olfactory neurons to replace themselves decreases with age, leading to age-related anosmia. This explains why some elderly people salt their food more than younger people do.[4]
122
+
123
+ Olfactory dysfunction can be caused by age, exposure to toxic chemicals, viral infections, epilepsy, neurodegenerative disease, head trauma, or another disorder.[5]
124
+
125
+ As studies in olfaction have continued, a positive correlation has been found between its dysfunction or degeneration and early signs of Alzheimer's disease and sporadic Parkinson's disease. Many patients do not notice the decline in smell before being tested. In Parkinson's disease and Alzheimer's disease, an olfactory deficit is present in 85 to 90% of early-onset cases.[5] There is evidence that the decline of this sense can precede Alzheimer's or Parkinson's disease by a couple of years. Although the deficit is present in these two diseases, as well as in others, it is important to note that its severity and magnitude vary with each disease. This has led to suggestions that olfactory testing could be used in some cases to aid in differentiating among the neurodegenerative diseases.[5]
126
+
127
+ Those who were born without a sense of smell, or whose sense of smell is damaged, usually complain about one or more of three things. First, the olfactory sense serves as a warning against spoiled food, so a damaged or absent sense of smell can lead a person to contract food poisoning more often. Second, the loss can lead to damaged relationships, or insecurities within them, because of the inability to smell body odor. Lastly, smell influences how food and drink taste; when the olfactory sense is damaged, the satisfaction from eating and drinking is not as prominent.
128
+
129
+ Proprioception, the kinesthetic sense, provides the parietal cortex of the brain with information on the movement and relative positions of the parts of the body. Neurologists test this sense by telling patients to close their eyes and touch their own nose with the tip of a finger. Assuming proper proprioceptive function, at no time will the person lose awareness of where the hand actually is, even though it is not being detected by any of the other senses. Proprioception and touch are related in subtle ways, and their impairment results in surprising and deep deficits in perception and action.[43]
130
+
131
+ Nociception (physiological pain) signals nerve damage or damage to tissue. The three types of pain receptors are cutaneous (skin), somatic (joints and bones), and visceral (body organs). It was previously believed that pain was simply the overloading of pressure receptors, but research in the first half of the 20th century indicated that pain is a distinct phenomenon that intertwines with all of the other senses, including touch. Pain was once considered an entirely subjective experience, but recent studies show that pain is registered in the anterior cingulate gyrus of the brain.[44] The main function of pain is to attract our attention to dangers and motivate us to avoid them. For example, humans avoid touching a sharp needle or a hot object, and avoid extending an arm beyond a safe limit, because doing so is dangerous, and thus hurts. Without pain, people could do many dangerous things without being aware of the dangers.
132
+
133
+ An internal sensation and perception also known as interoception[45] is "any sense that is normally stimulated from within the body".[46] These involve numerous sensory receptors in internal organs. Interoception is thought to be atypical in clinical conditions such as alexithymia.[47]
134
+ Some examples of specific receptors are:
135
+
136
+ Other living organisms have receptors to sense the world around them, including many of the senses listed above for humans. However, the mechanisms and capabilities vary widely.
137
+
138
+ An example of smell in non-mammals is that of sharks, which combine their keen sense of smell with timing to determine the direction of a smell. They follow the nostril that first detected the smell.[54] Insects have olfactory receptors on their antennae. However, the degree and magnitude to which non-human animals can smell better than humans is unknown.[55]
139
+
140
+ Many animals (salamanders, reptiles, mammals) have a vomeronasal organ[56] that is connected with the mouth cavity. In mammals it is mainly used to detect pheromones of marked territory, trails, and sexual state. Reptiles like snakes and monitor lizards make extensive use of it as a smelling organ by transferring scent molecules to the vomeronasal organ with the tips of the forked tongue. In reptiles the vomeronasal organ is commonly referred to as Jacobson's organ. In mammals, it is often associated with a special behavior called flehmen, characterized by uplifting of the lips. The organ is vestigial in humans, as associated neurons that give any sensory input in humans have not been found.[57]
141
+
142
+ Flies and butterflies have taste organs on their feet, allowing them to taste anything they land on. Catfish have taste organs across their entire bodies, and can taste anything they touch, including chemicals in the water.[58]
143
+
144
+ Cats have the ability to see in low light, which is due to muscles surrounding their irides (which contract and expand their pupils) as well as to the tapetum lucidum, a reflective membrane that optimizes the image.
145
+ Pit vipers, pythons and some boas have organs that allow them to detect infrared light, such that these snakes are able to sense the body heat of their prey. The common vampire bat may also have an infrared sensor on its nose.[59] It has been found that birds and some other animals are tetrachromats and have the ability to see in the ultraviolet down to 300 nanometers. Bees and dragonflies[60] are also able to see in the ultraviolet. Mantis shrimps can perceive both polarized light and multispectral images and have twelve distinct kinds of color receptors, unlike humans which have three kinds and most mammals which have two kinds.[61]
146
+
147
+ Cephalopods have the ability to change color using chromatophores in their skin. Researchers believe that opsins in the skin can sense different wavelengths of light and help the creatures choose a coloration that camouflages them, in addition to light input from the eyes.[62] Other researchers hypothesize that cephalopod eyes in species which only have a single photoreceptor protein may use chromatic aberration to turn monochromatic vision into color vision,[63] explaining pupils shaped like the letter U, the letter W, or a dumbbell, as well as explaining the need for colorful mating displays.[64] Some cephalopods can distinguish the polarization of light.
148
+
149
+ Many invertebrates have a statocyst, which is a sensor for acceleration and orientation that works very differently from the mammalian semicircular canals.
150
+
151
+ In addition, some animals have senses that humans do not, including the following:
152
+
153
+ Magnetoception (or magnetoreception) is the ability to detect the direction one is facing based on the Earth's magnetic field. Directional awareness is most commonly observed in birds, which rely on their magnetic sense to navigate during migration.[65][66][67][68] It has also been observed in insects such as bees. Cattle make use of magnetoception to align themselves in a north–south direction.[69] Magnetotactic bacteria build miniature magnets inside themselves and use them to determine their orientation relative to the Earth's magnetic field.[70][71] There has been some recent (tentative) research suggesting that the rhodopsin in the human eye, which responds particularly well to blue light, can facilitate magnetoception in humans.[72]
154
+
155
+ Certain animals, including bats and cetaceans, have the ability to determine orientation to other objects through interpretation of reflected sound (like sonar). They most often use this to navigate through poor lighting conditions or to identify and track prey. It is currently uncertain whether this is simply an extremely developed post-sensory interpretation of auditory perceptions or whether it actually constitutes a separate sense. Resolution of the issue will require brain scans of animals while they actually perform echolocation, a task that has proven difficult in practice.
156
+
157
+ Blind people report they are able to navigate and in some cases identify an object by interpreting reflected sounds (especially their own footsteps), a phenomenon known as human echolocation.
158
+
159
+ Electroreception (or electroception) is the ability to detect electric fields. Several species of fish, sharks, and rays have the capacity to sense changes in electric fields in their immediate vicinity. For cartilaginous fish this occurs through a specialized organ called the Ampullae of Lorenzini. Some fish passively sense changing nearby electric fields; some generate their own weak electric fields, and sense the pattern of field potentials over their body surface; and some use these electric field generating and sensing capacities for social communication. The mechanisms by which electroceptive fish construct a spatial representation from very small differences in field potentials involve comparisons of spike latencies from different parts of the fish's body.
160
+
161
+ The only orders of mammals that are known to demonstrate electroception are the dolphin and monotreme orders. Among these mammals, the platypus[73] has the most acute sense of electroception.
162
+
163
+ A dolphin can detect electric fields in water using electroreceptors in vibrissal crypts arrayed in pairs on its snout and which evolved from whisker motion sensors.[74] These electroreceptors can detect electric fields as weak as 4.6 microvolts per centimeter, such as those generated by contracting muscles and pumping gills of potential prey. This permits the dolphin to locate prey from the seafloor where sediment limits visibility and echolocation.
164
+
165
+ Spiders have been shown to detect electric fields to determine a suitable time to extend their webs for 'ballooning'.[75]
166
+
167
+ Body modification enthusiasts have experimented with magnetic implants to attempt to replicate this sense.[76] However, in general humans (and it is presumed other mammals) can detect electric fields only indirectly by detecting the effect they have on hairs. An electrically charged balloon, for instance, will exert a force on human arm hairs, which can be felt through tactition and identified as coming from a static charge (and not from wind or the like). This is not electroreception, as it is a post-sensory cognitive action.
168
+
169
+ Hygroreception is the ability to detect changes in the moisture content of the environment.[11][77]
170
+
171
+ The ability to sense infrared thermal radiation evolved independently in various families of snakes. Essentially, it allows these reptiles to "see" radiant heat at wavelengths between 5 and 30 μm to a degree of accuracy such that a blind rattlesnake can target vulnerable body parts of the prey at which it strikes.[78] It was previously thought that the organs evolved primarily as prey detectors, but it is now believed that they may also be used in thermoregulatory decision making.[79] The facial pit underwent parallel evolution in pitvipers and some boas and pythons, having evolved once in pitvipers and multiple times in boas and pythons.[80] The electrophysiology of the structure is similar between the two lineages, but they differ in gross structural anatomy. Most superficially, pitvipers possess one large pit organ on either side of the head, between the eye and the nostril (the loreal pit), while boas and pythons have three or more comparatively smaller pits lining the upper and sometimes the lower lip, in or between the scales. Those of the pitvipers are the more advanced, having a suspended sensory membrane as opposed to a simple pit structure. Within the family Viperidae, the pit organ is seen only in the subfamily Crotalinae: the pitvipers. The organ is used extensively to detect and target endothermic prey such as rodents and birds, and it was previously assumed that the organ evolved specifically for that purpose. However, recent evidence shows that the pit organ may also be used for thermoregulation. According to Krochmal et al., pitvipers can use their pits for thermoregulatory decision-making while true vipers (vipers that lack heat-sensing pits) cannot.
172
+
173
+ In spite of its detection of IR light, the pits' IR detection mechanism is not similar to that of photoreceptors – while photoreceptors detect light via photochemical reactions, the protein in the pits of snakes is in fact a temperature-sensitive ion channel. It senses infrared signals through a mechanism involving warming of the pit organ, rather than a chemical reaction to light.[81] This is consistent with the thin pit membrane, which allows incoming IR radiation to quickly and precisely warm a given ion channel and trigger a nerve impulse, and with the vascularization of the pit membrane, which rapidly cools the ion channel back to its original "resting" or "inactive" temperature.[81]
174
+
175
+ Pressure detection uses the organ of Weber, a system consisting of three appendages of vertebrae transferring changes in shape of the gas bladder to the middle ear. It can be used to regulate the buoyancy of the fish. Fish like the weather fish and other loaches are also known to respond to low pressure areas but they lack a swim bladder.
176
+
177
+ Current detection is a detection system of water currents, consisting mostly of vortices, found in the lateral line of fish and aquatic forms of amphibians. The lateral line is also sensitive to low-frequency vibrations. The mechanoreceptors are hair cells, the same mechanoreceptors for vestibular sense and hearing. It is used primarily for navigation, hunting, and schooling. The receptors of the electrical sense are modified hair cells of the lateral line system.
178
+
179
+ Polarized light direction/detection is used by bees to orient themselves, especially on cloudy days. Cuttlefish, some beetles, and mantis shrimp can also perceive the polarization of light. Most sighted humans can in fact learn to roughly detect large areas of polarization by an effect called Haidinger's brush; however, this is considered an entoptic phenomenon rather than a separate sense.
180
+
181
+ Slit sensillae of spiders detect mechanical strain in the exoskeleton, providing information on force and vibrations.
182
+
183
+ By using a variety of sense receptors, plants sense light, temperature, humidity, chemical substances, chemical gradients, reorientation, magnetic fields, infections, tissue damage and mechanical pressure. The absence of a nervous system notwithstanding, plants interpret and respond to these stimuli by a variety of hormonal and cell-to-cell communication pathways that result in movement, morphological changes and physiological state alterations at the organism level, that is, result in plant behavior. Such physiological and cognitive functions are generally not believed to give rise to mental phenomena or qualia, however, as these are typically considered the product of nervous system activity. The emergence of mental phenomena from the activity of systems functionally or computationally analogous to that of nervous systems is, however, a hypothetical possibility explored by some schools of thought in the philosophy of mind field, such as functionalism and computationalism.
184
+
185
+ However, plants may perceive the world around them,[15] and might be able to emit airborne sounds similar to "screaming" when stressed. These noises may not be detectable by human ears, but organisms with a hearing range that extends to ultrasonic frequencies, like mice, bats, or perhaps other plants, could hear the plants' cries from as far as 15 feet (4.6 m) away.[82]
186
+
187
+ Machine perception is the capability of a computer system to interpret data in a manner that is similar to the way humans use their senses to relate to the world around them.[16][17][83] Computers take in and respond to their environment through attached hardware. Until recently, input was limited to a keyboard, joystick or a mouse, but advances in technology, both in hardware and software, have allowed computers to take in sensory input in a way similar to humans.[16][17]
188
+
189
+ In the time of William Shakespeare, there were commonly reckoned to be five wits or five senses.[85] At that time, the words "sense" and "wit" were synonyms,[85] so the senses were known as the five outward wits.[86][87] This traditional concept of five senses is common today.
190
+
191
+ The traditional five senses are enumerated as the "five material faculties" (pañcannaṃ indriyānaṃ avakanti) in Hindu literature. They appear in allegorical representation as early as in the Katha Upanishad (roughly 6th century BC), as five horses drawing the "chariot" of the body, guided by the mind as "chariot driver".
192
+
193
+ Depictions of the five traditional senses as allegory became a popular subject for seventeenth-century artists, especially among Dutch and Flemish Baroque painters. A typical example is Gérard de Lairesse's Allegory of the Five Senses (1668), in which each of the figures in the main group alludes to a sense: sight is the reclining boy with a convex mirror, hearing is the cupid-like boy with a triangle, smell is represented by the girl with flowers, taste is represented by the woman with the fruit, and touch is represented by the woman holding the bird.
194
+
195
+ In Buddhist philosophy, Ayatana or "sense-base" includes the mind as a sense organ, in addition to the traditional five. This addition to the commonly acknowledged senses may arise from the psychological orientation involved in Buddhist thought and practice. The mind considered by itself is seen as the principal gateway to a different spectrum of phenomena that differ from the physical sense data. This way of viewing the human sense system indicates the importance of internal sources of sensation and perception that complement our experience of the external world.[citation needed]
en/5351.html.txt ADDED
@@ -0,0 +1,195 @@
1
+ Sensation is the physical process during which sensory systems respond to stimuli and provide data for perception.[1] A sense is any of the systems involved in sensation. During sensation, sense organs engage in stimulus collection and transduction.[2] Sensation is often differentiated from the related and dependent concept of perception, which processes and integrates sensory information in order to give meaning to and understand detected stimuli, giving rise to subjective perceptual experience, or qualia.[3] Sensation and perception are central to and precede almost all aspects of cognition, behavior and thought.[1]
2
+
3
+ In organisms, a sensory organ consists of a group of related sensory cells that respond to a specific type of physical stimulus. Via cranial and spinal nerves, the different types of sensory receptor cells (mechanoreceptors, photoreceptors, chemoreceptors, thermoreceptors) in sensory organs transduce sensory information from sensory organs towards the central nervous system, to the sensory cortices in the brain, where sensory signals are further processed and interpreted (perceived).[1][4][5] Sensory systems, or senses, are often divided into external (exteroception) and internal (interoception) sensory systems.[6][7] Sensory modalities or submodalities refer to the way sensory information is encoded or transduced.[4] Multimodality integrates different senses into one unified perceptual experience. For example, information from one sense has the potential to influence how information from another is perceived.[2] Sensation and perception are studied by a variety of related fields, most notably psychophysics, neurobiology, cognitive psychology, and cognitive science.[1]
4
+
5
+ Humans have a multitude of sensory systems. Human external sensation is based on the sensory organs of the eyes, ears, skin, inner ear, nose, and mouth. The corresponding sensory systems of the visual system (sense of vision), auditory system (sense of hearing), somatosensory system (sense of touch), vestibular system (sense of balance), olfactory system (sense of smell), and gustatory system (sense of taste) contribute, respectively, to the perceptions of vision, hearing, touch, spatial orientation, smell, and taste (flavor).[2][1] Internal sensation, or interoception, detects stimuli from internal organs and tissues. Many internal sensory and perceptual systems exist in humans, including proprioception (body position) and nociception (pain). Further internal chemoreception and osmoreception based sensory systems lead to various perceptions, such as hunger, thirst, suffocation, and nausea, or different involuntary behaviors, such as vomiting.[6][7][8]
6
+
7
+ Nonhuman animals experience sensation and perception, with varying levels of similarity to and difference from humans and other animal species. For example, mammals, in general, have a stronger sense of smell than humans. Some animal species lack one or more human sensory system analogues, some have sensory systems that are not found in humans, while others process and interpret the same sensory information in very different ways. For example, some animals are able to detect electrical[9] and magnetic fields,[10] air moisture,[11] or polarized light,[12] while others sense and perceive through alternative systems, such as echolocation.[13][14] Recently, it has been suggested that plants and artificial agents may be able to detect and interpret environmental information in an analogous manner to animals.[15][16][17]
8
+
9
+ Sensory modality refers to the way that information is encoded, which is similar to the idea of transduction. The main sensory modalities can be described on the basis of how each is transduced. Listing all the different sensory modalities, which can number as many as 17, involves separating the major senses into more specific categories, or submodalities, of the larger sense. An individual sensory modality represents the sensation of a specific type of stimulus. For example, the general sensation and perception of touch, which is known as somatosensation, can be separated into light pressure, deep pressure, vibration, itch, pain, temperature, or hair movement, while the general sensation and perception of taste can be separated into submodalities of sweet, salty, sour, bitter, spicy, and umami, all of which are based on different chemicals binding to sensory neurons.[4]
10
+
11
+ Sensory receptors are the cells or structures that detect sensations. Stimuli in the environment activate specialized receptor cells in the peripheral nervous system. During transduction, physical stimulus is converted into action potential by receptors and transmitted towards the central nervous system for processing.[5] Different types of stimuli are sensed by different types of receptor cells. Receptor cells can be classified into types on the basis of three different criteria: cell type, position, and function. Receptors can be classified structurally on the basis of cell type and their position in relation to stimuli they sense. Receptors can further be classified functionally on the basis of the transduction of stimuli, or how the mechanical stimulus, light, or chemical changed the cell membrane potential.[4]
12
+
13
+ One way to classify receptors is based on their location relative to the stimuli. An exteroceptor is a receptor that is located near a stimulus of the external environment, such as the somatosensory receptors that are located in the skin. An interoceptor is one that interprets stimuli from internal organs and tissues, such as the receptors that sense the increase in blood pressure in the aorta or carotid sinus.[4]
14
+
15
+ The cells that interpret information about the environment can be either (1) a neuron that has a free nerve ending, with dendrites embedded in tissue that would receive a sensation; (2) a neuron that has an encapsulated ending in which the sensory nerve endings are encapsulated in connective tissue that enhances their sensitivity; or (3) a specialized receptor cell, which has distinct structural components that interpret a specific type of stimulus. The pain and temperature receptors in the dermis of the skin are examples of neurons that have free nerve endings (1). Also located in the dermis of the skin are lamellated corpuscles, neurons with encapsulated nerve endings that respond to pressure and touch (2). The cells in the retina that respond to light stimuli are an example of a specialized receptor (3), a photoreceptor.[4]
16
+
17
+ A transmembrane protein receptor is a protein in the cell membrane that mediates a physiological change in a neuron, most often through the opening of ion channels or changes in the cell signaling processes. Transmembrane receptors are activated by chemicals called ligands. For example, a molecule in food can serve as a ligand for taste receptors. Other transmembrane proteins, which are not accurately called receptors, are sensitive to mechanical or thermal changes. Physical changes in these proteins increase ion flow across the membrane, and can generate an action potential or a graded potential in the sensory neurons.[4]
18
+
19
+ A third classification of receptors is by how the receptor transduces stimuli into membrane potential changes. Stimuli are of three general types. Some stimuli are ions and macromolecules that affect transmembrane receptor proteins when these chemicals diffuse across the cell membrane. Some stimuli are physical variations in the environment that affect receptor cell membrane potentials. Other stimuli include the electromagnetic radiation from visible light. For humans, the only electromagnetic energy that is perceived by our eyes is visible light. Some other organisms have receptors that humans lack, such as the heat sensors of snakes, the ultraviolet light sensors of bees, or magnetic receptors in migratory birds.[4]
20
+
21
+ Receptor cells can be further categorized on the basis of the type of stimuli they transduce. The different types of functional receptor cell types are mechanoreceptors, photoreceptors, chemoreceptors (osmoreceptor), thermoreceptors, and nociceptors. Physical stimuli, such as pressure and vibration, as well as the sensation of sound and body position (balance), are interpreted through a mechanoreceptor. Photoreceptors convert light (visible electromagnetic radiation) into signals. Chemical stimuli can be interpreted by a chemoreceptor that interprets chemical stimuli, such as an object's taste or smell, while osmoreceptors respond to the solute concentration of body fluids. Nociception (pain) interprets the presence of tissue damage, from sensory information from mechano-, chemo-, and thermoreceptors.[18] Another physical stimulus that has its own type of receptor is temperature, which is sensed through a thermoreceptor that is either sensitive to temperatures above (heat) or below (cold) normal body temperature.[4]
22
+
23
+ Each sense organ (eyes or nose, for instance) requires a minimal amount of stimulation in order to detect a stimulus. This minimum amount of stimulus is called the absolute threshold.[2] The absolute threshold is defined as the minimum amount of stimulation necessary for the detection of a stimulus 50% of the time.[1] Absolute threshold is measured by using a method called signal detection. This process involves presenting stimuli of varying intensities to a subject in order to determine the level at which the subject can reliably detect stimulation in a given sense.[2]
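+
+ As a concrete illustration of the 50% rule, the following minimal Python sketch estimates an absolute threshold from detection rates; the intensities, rates, and linear interpolation used here are illustrative assumptions, not values from any study.
+
+ # Hypothetical detection rates from presenting stimuli of varying intensity.
+ intensities = [1, 2, 3, 4, 5]                     # arbitrary stimulus units
+ detection_rates = [0.05, 0.20, 0.45, 0.70, 0.95]  # fraction of trials detected
+
+ def absolute_threshold(xs, ps, target=0.5):
+     # Linearly interpolate the intensity at which detection crosses `target`.
+     for x0, p0, x1, p1 in zip(xs, ps, xs[1:], ps[1:]):
+         if p0 <= target <= p1:
+             return x0 + (target - p0) * (x1 - x0) / (p1 - p0)
+     return None  # the target rate was never crossed in the measured range
+
+ print(absolute_threshold(intensities, detection_rates))  # prints 3.2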
24
+
25
+ Differential threshold or just noticeable difference (JND) is the smallest detectable difference between two stimuli, or the smallest difference in stimuli that can be judged to be different from each other.[1] Weber's Law is an empirical law that states that the difference threshold is a constant fraction of the comparison stimulus.[1] According to Weber's Law, bigger stimuli require larger differences to be noticed.[2]
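+
+ A short worked example of Weber's Law (JND = k × I) in Python; the Weber fraction k = 0.02 is a made-up value used only for illustration.
+
+ # Weber's Law: the just noticeable difference (JND) is a constant
+ # fraction k of the comparison stimulus I, i.e. JND = k * I.
+ k = 0.02  # hypothetical Weber fraction, for illustration only
+
+ for intensity in (100, 500, 1000):  # arbitrary stimulus units
+     print(f"I = {intensity}: a change of about {k * intensity:g} is noticeable")
+ # Larger stimuli require proportionally larger differences, as the law states.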
26
+
27
+ Magnitude estimation is a psychophysical method in which subjects assign perceived values of given stimuli. The relationship between stimulus intensity and perceived intensity is described by Stevens' power law.[1]
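+
+ A minimal sketch of Stevens' power law, P = c·S^a. The exponents below (around 0.33 for brightness, around 3.5 for electric shock) are commonly cited estimates, used here only for illustration.
+
+ def perceived_magnitude(S, a, c=1.0):
+     # Stevens' power law: perceived magnitude grows as a power of intensity.
+     return c * S ** a
+
+ for S in (1, 10, 100):
+     print(S, perceived_magnitude(S, a=0.33), perceived_magnitude(S, a=3.5))
+ # A compressive exponent (a < 1) grows slowly: ten times the light does not
+ # look ten times brighter. An expansive exponent (a > 1) grows steeply.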
28
+
29
+ Signal detection theory quantifies the experience of the subject to the presentation of a stimulus in the presence of noise. There is internal noise and there is external noise when it comes to signal detection. The internal noise originates from static in the nervous system. For example, an individual with closed eyes in a dark room still sees something (a blotchy pattern of grey with intermittent brighter flashes); this is internal noise. External noise is the result of noise in the environment that can interfere with the detection of the stimulus of interest. Noise is only a problem if the magnitude of the noise is large enough to interfere with signal collection. The nervous system calculates a criterion, or an internal threshold, for the detection of a signal in the presence of noise. If a signal is judged to be above the criterion, and thus differentiated from the noise, the signal is sensed and perceived. Errors in signal detection can potentially lead to false positives and false negatives. The sensory criterion might be shifted based on the importance of detecting the signal. Shifting of the criterion may influence the likelihood of false positives and false negatives.[1]
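+
+ The criterion idea can be quantified. Below is a minimal sketch, assuming made-up trial counts, of the standard signal-detection measures d' (sensitivity) and c (criterion placement).
+
+ from statistics import NormalDist
+
+ z = NormalDist().inv_cdf  # inverse of the standard normal CDF
+
+ # Hypothetical outcomes of a detection experiment.
+ hits, misses = 40, 10                     # signal-present trials
+ false_alarms, correct_rejections = 5, 45  # noise-only trials
+
+ hit_rate = hits / (hits + misses)                             # 0.80
+ fa_rate = false_alarms / (false_alarms + correct_rejections)  # 0.10
+
+ d_prime = z(hit_rate) - z(fa_rate)           # separation of signal from noise
+ criterion = -(z(hit_rate) + z(fa_rate)) / 2  # > 0 means a conservative observer
+ print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")  # d' = 2.12, c = 0.22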
30
+
31
+ Subjective visual and auditory experiences appear to be similar across human subjects. The same cannot be said about taste. For example, there is a molecule called propylthiouracil (PROP) that some humans experience as bitter, some as almost tasteless, while others experience it as somewhere between tasteless and bitter. There is a genetic basis for this difference in perception given the same sensory stimulus. This subjective difference in taste perception has implications for individuals' food preferences, and consequently, health.[1]
32
+
33
+ When a stimulus is constant and unchanging, perceptual sensory adaptation occurs. During this process, the subject becomes less sensitive to the stimulus.[2]
34
+
35
+ Biological auditory (hearing), vestibular and spatial, and visual (vision) systems appear to break down real-world complex stimuli into sine wave components through the mathematical process called Fourier analysis. Many neurons have a strong preference for certain sine frequency components in contrast to others. The way that simpler sounds and images are encoded during sensation can provide insight into how perception of real-world objects happens.[1]
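+
+ A short sketch of the idea using numpy: a compound signal built from two sine components is broken back down into those components by a Fourier transform. The frequencies and amplitudes are arbitrary choices for illustration.
+
+ import numpy as np
+
+ rate = 1000                    # samples per second
+ t = np.arange(0, 1, 1 / rate)  # one second of "stimulus"
+ signal = 1.0 * np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
+
+ amplitudes = np.abs(np.fft.rfft(signal)) / (len(t) / 2)
+ freqs = np.fft.rfftfreq(len(t), d=1 / rate)
+
+ for f, a in zip(freqs, amplitudes):
+     if a > 0.1:
+         print(f"{f:.0f} Hz component, amplitude ~{a:.2f}")
+ # Recovers the 5 Hz and 40 Hz sine components the signal was built from.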
36
+
37
+ Perception occurs when nerves that lead from the sensory organs (e.g. the eye) to the brain are stimulated, even if that stimulation is unrelated to the target signal of the sensory organ. For example, in the case of the eye, it does not matter whether light or something else stimulates the optic nerve; that stimulation will result in visual perception, even if there was no visual stimulus to begin with. (To prove this point to yourself (and if you are a human), close your eyes (preferably in a dark room) and press gently on the outside corner of one eye through the eyelid. You will see a visual spot toward the inside of your visual field, near your nose.)[1]
38
+
39
+ All stimuli received by the receptors are transduced to an action potential, which is carried along one or more afferent neurons towards a specific area (cortex) of the brain. Just as different nerves are dedicated to sensory and motor tasks, different areas of the brain (cortices) are similarly dedicated to different sensory and perceptual tasks. More complex processing is accomplished across primary cortical regions that spread beyond the primary cortices. Every nerve, sensory or motor, has its own signal transmission speed. For example, nerves in the frog's legs have a 90 ft/s (99 km/h) signal transmission speed, while sensory nerves in humans transmit sensory information at speeds between 165 ft/s (181 km/h) and 330 ft/s (362 km/h).[1]
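+
+ The unit conversions quoted above can be checked with a few lines of Python.
+
+ FT_PER_KM = 3280.84  # feet in one kilometer
+
+ def ft_s_to_km_h(v_ft_s):
+     # 3600 seconds per hour, divided by feet per kilometer.
+     return v_ft_s * 3600 / FT_PER_KM
+
+ for v in (90, 165, 330):
+     print(f"{v} ft/s = {ft_s_to_km_h(v):.0f} km/h")  # 99, 181, and 362 km/h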
40
+
41
+ Perceptual experience is often multimodal. Multimodality integrates different senses into one unified perceptual experience. Information from one sense has the potential to influence how information from another is perceived.[2] Multimodal perception is qualitatively different from unimodal perception. There has been a growing body of evidence since the mid-1990s on the neural correlates of multimodal perception.[20]
42
+
43
+ Historical inquiries into the underlying mechanisms of sensation and perception have led early researchers to subscribe to various philosophical interpretations of perception and the mind, including panpsychism, dualism, and materialism. The majority of modern scientists who study sensation and perception take on a materialistic view of the mind.[1]
44
+
45
+ Some examples of human absolute thresholds for the 9-21 external senses.[21]
46
+
47
+ Humans respond more strongly to multimodal stimuli compared to the sum of each single modality together, an effect called the superadditive effect of multisensory integration.[2] Neurons that respond to both visual and auditory stimuli have been identified in the superior temporal sulcus.[20] Additionally, multimodal “what” and “where” pathways have been proposed for auditory and tactile stimuli.[22]
48
+
49
+ External receptors that respond to stimuli from outside the body are called exteroceptors.[23] Human external sensation is based on the sensory organs of the eyes, ears, skin, vestibular system, nose, and mouth, which contribute, respectively, to the sensory perceptions of vision, hearing, touch, spatial orientation, smell, and taste. Smell and taste are both responsible for identifying molecules and thus both are types of chemoreceptors. Both olfaction (smell) and gustation (taste) require the transduction of chemical stimuli into electrical potentials.[2][1]
50
+
51
+ The visual system, or sense of sight, is based on the transduction of light stimuli received through the eyes and contributes to visual perception. The visual system detects light on photoreceptors in the retina of each eye that generates electrical nerve impulses for the perception of varying colors and brightness. There are two types of photoreceptors: rods and cones. Rods are very sensitive to light but do not distinguish colors. Cones distinguish colors but are less sensitive to dim light.[4]
52
+
53
+ At the molecular level, visual stimuli cause changes in the photopigment molecule that lead to changes in membrane potential of the photoreceptor cell. A single unit of light is called a photon, which is described in physics as a packet of energy with properties of both a particle and a wave. The energy of a photon is represented by its wavelength, with each wavelength of visible light corresponding to a particular color. Visible light is electromagnetic radiation with a wavelength between 380 and 720 nm. Wavelengths of electromagnetic radiation longer than 720 nm fall into the infrared range, whereas wavelengths shorter than 380 nm fall into the ultraviolet range. Light with a wavelength of 380 nm is blue whereas light with a wavelength of 720 nm is dark red. All other colors fall between red and blue at various points along the wavelength scale.[4]
54
+
55
+ The three types of cone opsins, being sensitive to different wavelengths of light, provide us with color vision. By comparing the activity of the three different cones, the brain can extract color information from visual stimuli. For example, a bright blue light that has a wavelength of approximately 450 nm would activate the “red” cones minimally, the “green” cones marginally, and the “blue” cones predominantly. The relative activation of the three different cones is calculated by the brain, which perceives the color as blue. However, cones cannot react to low-intensity light, and rods do not sense the color of light. Therefore, our low-light vision is—in essence—in grayscale. In other words, in a dark room, everything appears as a shade of gray. If you think that you can see colors in the dark, it is most likely because your brain knows what color something is and is relying on that memory.[4]
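+
+ A toy Python sketch of this cone comparison. The bell-shaped sensitivity curves, the 40 nm width, and the peak wavelengths (values near 420, 530, and 560 nm are often quoted for S, M, and L cones) are simplifying assumptions for illustration, not real tuning curves.
+
+ import math
+
+ cone_peaks = {"S (blue)": 420, "M (green)": 530, "L (red)": 560}  # nm, approximate
+
+ def activation(wavelength_nm, peak_nm, width=40.0):
+     # Hypothetical bell-shaped sensitivity around each cone type's peak.
+     return math.exp(-((wavelength_nm - peak_nm) ** 2) / (2 * width ** 2))
+
+ stimulus = 450  # nm: the bright blue light from the example above
+ for cone, peak in cone_peaks.items():
+     print(f"{cone}: {activation(stimulus, peak):.2f}")
+ # S responds predominantly, M marginally, L minimally: read out as "blue".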
56
+
57
+ There is some disagreement as to whether the visual system consists of one, two, or three submodalities. Neuroanatomists generally regard it as two submodalities, given that different receptors are responsible for the perception of color and brightness. Some argue[citation needed] that stereopsis, the perception of depth using both eyes, also constitutes a sense, but it is generally regarded as a cognitive (that is, post-sensory) function of the visual cortex of the brain where patterns and objects in images are recognized and interpreted based on previously learned information. This is called visual memory.
58
+
59
+ The inability to see is called blindness. Blindness may result from damage to the eyeball, especially to the retina, damage to the optic nerve that connects each eye to the brain, and/or from stroke (infarcts in the brain). Temporary or permanent blindness can be caused by poisons or medications. People who are blind from degradation or damage to the visual cortex, but still have functional eyes, are actually capable of some level of vision and reaction to visual stimuli but not a conscious perception; this is known as blindsight. People with blindsight are usually not aware that they are reacting to visual sources, and instead just unconsciously adapt their behavior to the stimulus.
60
+
61
+ On February 14, 2013, researchers developed a neural implant that gives rats the ability to sense infrared light, which for the first time provides living creatures with new abilities, instead of simply replacing or augmenting existing abilities.[24]
62
+
63
+ Visual Perception in Psychology
64
+
65
+ According to Gestalt Psychology, people perceive the whole of something even if it is not there. The Gestalt Law of Organization states that seven factors help people group what is seen into patterns or groups: Common Fate, Similarity, Proximity, Closure, Symmetry, Continuity, and Past Experience.[25]
66
+
67
+ The Law of Common Fate says that objects are led along the smoothest path. People follow the trend of motion as the lines/dots flow.[26]
68
+
69
+ The Law of Similarity refers to the grouping of images or objects that are similar to each other in some aspect. This could be due to shade, colour, size, shape, or other qualities you could distinguish.[27]
70
+
71
+ The Law of Proximity states that our minds like to group based on how close objects are to each other. We may see 42 objects in a group, but we can also perceive three groups of two lines with seven objects in each line.[26]
72
+
73
+ The Law of Closure is the idea that we as humans still see a full picture even if there are gaps within that picture. There could be gaps or parts missing from a section of a shape, but we would still perceive the shape as whole.[27]
74
+
75
+ The Law of Symmetry refers to a person's preference to see symmetry around a central point. An example would be when we use parentheses in writing. We tend to perceive all of the words in the parentheses as one section instead of individual words within the parentheses.[27]
76
+
77
+ The Law of Continuity tells us that objects are grouped together by their elements and then perceived as a whole. This usually happens when we see overlapping objects. We will see the overlapping objects with no interruptions.[27]
78
+
79
+ The Law of Past Experience refers to the tendency humans have to categorize objects according to past experiences under certain circumstances. If two objects are usually perceived together or within close proximity of each other, the Law of Past Experience is usually seen.[26]
80
+
81
+ Hearing, or audition, is the transduction of sound waves into a neural signal that is made possible by the structures of the ear. The large, fleshy structure on the lateral aspect of the head is known as the auricle. At the end of the auditory canal is the tympanic membrane, or ear drum, which vibrates after it is struck by sound waves. The auricle, ear canal, and tympanic membrane are often referred to as the external ear. The middle ear consists of a space spanned by three small bones called the ossicles. The three ossicles are the malleus, incus, and stapes, which are Latin names that roughly translate to hammer, anvil, and stirrup. The malleus is attached to the tympanic membrane and articulates with the incus. The incus, in turn, articulates with the stapes. The stapes is then attached to the inner ear, where the sound waves will be transduced into a neural signal. The middle ear is connected to the pharynx through the Eustachian tube, which helps equilibrate air pressure across the tympanic membrane. The tube is normally closed but will pop open when the muscles of the pharynx contract during swallowing or yawning.[4]
82
+
83
+ Mechanoreceptors, located in the inner ear, turn motion into electrical nerve pulses. Since sound consists of vibrations propagating through a medium such as air, the detection of these vibrations (that is, the sense of hearing) is a mechanical sense: the vibrations are mechanically conducted from the eardrum through a series of tiny bones to hair-like fibers in the inner ear, which detect mechanical motion of the fibers within a range of about 20 to 20,000 hertz,[28] with substantial variation between individuals. Hearing at high frequencies declines with increasing age. Inability to hear is called deafness or hearing impairment. Sound can also be detected as vibrations conducted through the body by tactition; the lower frequencies that can be heard are detected this way. Some deaf people are able to determine the direction and location of vibrations picked up through the feet.[29]
84
+
85
+ Studies pertaining to audition increased in number toward the end of the nineteenth century. During this time, many laboratories in the United States began to create new models, diagrams, and instruments that all pertained to the ear.[30]
86
+
87
+ There is a branch of cognitive psychology dedicated strictly to audition, called auditory cognitive psychology. Its main aim is to understand why humans are able to use sound in thinking without actually saying it aloud.[31]
88
+
89
+ Related to auditory cognitive psychology is psychoacoustics, which is of particular interest to those who study music.[32] Haptics, a word used to refer to both taction and kinesthesia, has many parallels with psychoacoustics.[32] Most research in these two areas focuses on the instrument, the listener, and the player of the instrument.[32]
90
+
91
+ Somatosensation is considered a general sense, as opposed to the special senses discussed in this section. Somatosensation is the group of sensory modalities that are associated with touch and interoception. The modalities of somatosensation include pressure, vibration, light touch, tickle, itch, temperature, pain, and kinesthesia.[4] Somatosensation, also called tactition (adjectival form: tactile), is a perception resulting from activation of neural receptors, generally in the skin including hair follicles, but also in the tongue, throat, and mucosa. A variety of pressure receptors respond to variations in pressure (firm, brushing, sustained, etc.). The touch sense of itching caused by insect bites or allergies involves special itch-specific neurons in the skin and spinal cord.[33] The loss or impairment of the ability to feel anything touched is called tactile anesthesia. Paresthesia is a sensation of tingling, pricking, or numbness of the skin that may result from nerve damage and may be permanent or temporary.
92
+
93
+ Two types of somatosensory signals that are transduced by free nerve endings are pain and temperature. These two modalities use thermoreceptors and nociceptors to transduce temperature and pain stimuli, respectively. Temperature receptors are stimulated when local temperatures differ from body temperature. Some thermoreceptors are sensitive to just cold and others to just heat. Nociception is the sensation of potentially damaging stimuli. Mechanical, chemical, or thermal stimuli beyond a set threshold will elicit painful sensations. Stressed or damaged tissues release chemicals that activate receptor proteins in the nociceptors. For example, the sensation of heat associated with spicy foods involves capsaicin, the active molecule in hot peppers.[4]
94
+
95
+ Low frequency vibrations are sensed by mechanoreceptors called Merkel cells, also known as type I cutaneous mechanoreceptors. Merkel cells are located in the stratum basale of the epidermis. Deep pressure and vibration are transduced by lamellated (Pacinian) corpuscles, which are receptors with encapsulated endings found deep in the dermis, or subcutaneous tissue. Light touch is transduced by the encapsulated endings known as tactile (Meissner) corpuscles. Follicles are also wrapped in a plexus of nerve endings known as the hair follicle plexus. These nerve endings detect the movement of hair at the surface of the skin, such as when an insect may be walking along the skin. Stretching of the skin is transduced by stretch receptors known as bulbous corpuscles. Bulbous corpuscles are also known as Ruffini corpuscles, or type II cutaneous mechanoreceptors.[4]
96
+
97
+ The heat receptors are sensitive to infrared radiation and can occur in specialized organs, for instance in pit vipers. The thermoceptors in the skin are quite different from the homeostatic thermoceptors in the brain (hypothalamus), which provide feedback on internal body temperature.
98
+
99
+ The vestibular sense, or sense of balance (equilibrium), is the sense that contributes to the perception of balance, spatial orientation, direction, and acceleration (equilibrioception). Along with audition, the inner ear is responsible for encoding information about equilibrium. A similar mechanoreceptor—a hair cell with stereocilia—senses head position, head movement, and whether our bodies are in motion. These cells are located within the vestibule of the inner ear. Head position is sensed by the utricle and saccule, whereas head movement is sensed by the semicircular canals. The neural signals generated in the vestibular ganglion are transmitted through the vestibulocochlear nerve to the brain stem and cerebellum.[4]
100
+
101
+ The semicircular canals are three ring-like extensions of the vestibule. One is oriented in the horizontal plane, whereas the other two are oriented in the vertical plane. The anterior and posterior vertical canals are oriented at approximately 45 degrees relative to the sagittal plane. The base of each semicircular canal, where it meets with the vestibule, connects to an enlarged region known as the ampulla. The ampulla contains the hair cells that respond to rotational movement, such as turning the head while saying “no.” The stereocilia of these hair cells extend into the cupula, a membrane that attaches to the top of the ampulla. As the head rotates in a plane parallel to the semicircular canal, the fluid lags, deflecting the cupula in the direction opposite to the head movement. The semicircular canals contain several ampullae, with some oriented horizontally and others oriented vertically. By comparing the relative movements of both the horizontal and vertical ampullae, the vestibular system can detect the direction of most head movements within three-dimensional (3D) space.[4]
102
+
103
+ The vestibular nerve conducts information from sensory receptors in three ampullae that sense the motion of fluid in the three semicircular canals caused by three-dimensional rotation of the head. The vestibular nerve also conducts information from the utricle and the saccule, which contain hair-like sensory receptors that bend under the weight of otoliths (small crystals of calcium carbonate), providing the inertia needed to detect head tilt, linear acceleration, and the direction of gravitational force.
104
+
105
+ The gustatory system or the sense of taste is the sensory system that is partially responsible for the perception of taste (flavor).[34] A few recognized submodalities exist within taste: sweet, salty, sour, bitter, and umami. Recent research has suggested that there may also be a sixth taste submodality for fats, or lipids.[4] The sense of taste is often confused with the perception of flavor, which is the result of the multimodal integration of gustatory (taste) and olfactory (smell) sensations.[35]
106
+
107
+ Within the structure of the lingual papillae are taste buds that contain specialized gustatory receptor cells for the transduction of taste stimuli. These receptor cells are sensitive to the chemicals contained within foods that are ingested, and they release neurotransmitters based on the amount of the chemical in the food. Neurotransmitters from the gustatory cells can activate sensory neurons in the facial, glossopharyngeal, and vagus cranial nerves.[4]
108
+
109
+ Salty and sour taste submodalities are triggered by the cations Na+ and H+, respectively. The other taste modalities result from food molecules binding to a G protein–coupled receptor. A G protein signal transduction system ultimately leads to depolarization of the gustatory cell. The sweet taste is the sensitivity of gustatory cells to the presence of glucose (or sugar substitutes) dissolved in the saliva. Bitter taste is similar to sweet in that food molecules bind to G protein–coupled receptors. The taste known as umami is often referred to as the savory taste. Like sweet and bitter, it is based on the activation of G protein–coupled receptors by a specific molecule.[4]
110
+
111
+ Once the gustatory cells are activated by the taste molecules, they release neurotransmitters onto the dendrites of sensory neurons. These neurons are part of the facial and glossopharyngeal cranial nerves, as well as a component within the vagus nerve dedicated to the gag reflex. The facial nerve connects to taste buds in the anterior two thirds of the tongue. The glossopharyngeal nerve connects to taste buds in the posterior third of the tongue. The vagus nerve connects to taste buds in the extreme posterior of the tongue, verging on the pharynx, which are more sensitive to noxious stimuli such as bitterness.[4]
112
+
113
+ Flavor depends on odor, texture, and temperature as well as on taste. Humans receive tastes through sensory organs called taste buds, or gustatory calyculi, concentrated on the upper surface of the tongue. Other tastes such as calcium[36][37] and free fatty acids[38] may also be basic tastes but have yet to receive widespread acceptance. The inability to taste is called ageusia.
114
+
115
+ There is a rare phenomenon related to the gustatory sense called lexical-gustatory synesthesia, in which people can "taste" words.[39] Affected individuals report flavor sensations from foods they are not actually eating when they read, hear, or even imagine words; they report not only simple flavors, but textures, complex flavors, and temperatures as well.[40]
116
+
117
+ Like the sense of taste, the sense of smell, or the olfactory system, is also responsive to chemical stimuli.[4] Unlike taste, there are hundreds of olfactory receptors (388 according to one source), each binding to a particular molecular feature. Odor molecules possess a variety of features and, thus, excite specific receptors more or less strongly. This combination of excitatory signals from different receptors makes up what humans perceive as the molecule's smell.[41]
118
+
119
+ The olfactory receptor neurons are located in a small region within the superior nasal cavity. This region is referred to as the olfactory epithelium and contains bipolar sensory neurons. Each olfactory sensory neuron has dendrites that extend from the apical surface of the epithelium into the mucus lining the cavity. As airborne molecules are inhaled through the nose, they pass over the olfactory epithelial region and dissolve into the mucus. These odorant molecules bind to proteins that keep them dissolved in the mucus and help transport them to the olfactory dendrites. The odorant–protein complex binds to a receptor protein within the cell membrane of an olfactory dendrite. These receptors are G protein–coupled, and will produce a graded membrane potential in the olfactory neurons.[4]
120
+
121
+ In the brain, olfaction is processed by the olfactory cortex. Olfactory receptor neurons in the nose differ from most other neurons in that they die and regenerate on a regular basis. The inability to smell is called anosmia. Some neurons in the nose are specialized to detect pheromones.[42] Loss of the sense of smell can result in food tasting bland. A person with an impaired sense of smell may require additional spice and seasoning levels for food to be tasted. Anosmia may also be related to some presentations of mild depression, because the loss of enjoyment of food may lead to a general sense of despair. The ability of olfactory neurons to replace themselves decreases with age, leading to age-related anosmia. This explains why some elderly people salt their food more than younger people do.[4]
122
+
123
+ Olfactory dysfunction can be caused by age, exposure to toxic chemicals, viral infections, epilepsy, neurodegenerative disease, head trauma, or another disorder.[5]
124
+
125
+ As studies in olfaction have continued, a positive correlation has been found between olfactory dysfunction or degeneration and early signs of Alzheimer's disease and sporadic Parkinson's disease. Many patients do not notice the decline in smell before being tested. In Parkinson's disease and Alzheimer's disease, an olfactory deficit is present in 85 to 90% of early-onset cases.[5] There is evidence that the decline of this sense can precede Alzheimer's or Parkinson's disease by a couple of years. Although the deficit is present in these two diseases, as well as in others, it is important to note that its severity and magnitude vary with each disease. This has led to suggestions that olfactory testing could be used in some cases to aid in differentiating among neurodegenerative diseases.[5]
126
+
127
+ Those who were born without a sense of smell, or whose sense of smell is damaged, usually complain about one or more of three things. First, the olfactory sense serves as a warning against spoiled food, so a damaged or absent sense of smell can lead a person to contract food poisoning more often. Second, its absence can damage relationships, or create insecurities within them, because of the inability to smell body odor. Lastly, smell influences how food and drink taste, so when the olfactory sense is damaged, the satisfaction from eating and drinking is not as prominent.
128
+
129
+ Proprioception, the kinesthetic sense, provides the parietal cortex of the brain with information on the movement and relative positions of the parts of the body. Neurologists test this sense by telling patients to close their eyes and touch their own nose with the tip of a finger. Assuming proper proprioceptive function, at no time will the person lose awareness of where the hand actually is, even though it is not being detected by any of the other senses. Proprioception and touch are related in subtle ways, and their impairment results in surprising and deep deficits in perception and action.[43]
130
+
131
+ Nociception (physiological pain) signals nerve-damage or damage to tissue. The three types of pain receptors are cutaneous (skin), somatic (joints and bones), and visceral (body organs). It was previously believed that pain was simply the overloading of pressure receptors, but research in the first half of the 20th century indicated that pain is a distinct phenomenon that intertwines with all of the other senses, including touch. Pain was once considered an entirely subjective experience, but recent studies show that pain is registered in the anterior cingulate gyrus of the brain.[44] The main function of pain is to attract our attention to dangers and motivate us to avoid them. For example, humans avoid touching a sharp needle, or hot object, or extending an arm beyond a safe limit because it is dangerous, and thus hurts. Without pain, people could do many dangerous things without being aware of the dangers.
132
+
133
+ An internal sensation and perception also known as interoception[45] is "any sense that is normally stimulated from within the body".[46] These involve numerous sensory receptors in internal organs. Interoception is thought to be atypical in clinical conditions such as alexithymia.[47]
134
+ Some examples of specific receptors are:
135
+
136
+ Other living organisms have receptors to sense the world around them, including many of the senses listed above for humans. However, the mechanisms and capabilities vary widely.
137
+
138
+ An example of smell in non-mammals is that of sharks, which combine their keen sense of smell with timing to determine the direction of a smell: they follow the nostril that first detected the smell.[54] Insects have olfactory receptors on their antennae. However, the degree and magnitude to which non-human animals can smell better than humans is unknown.[55]
139
+
140
+ Many animals (salamanders, reptiles, mammals) have a vomeronasal organ[56] that is connected with the mouth cavity. In mammals it is mainly used to detect pheromones of marked territory, trails, and sexual state. Reptiles like snakes and monitor lizards make extensive use of it as a smelling organ by transferring scent molecules to the vomeronasal organ with the tips of the forked tongue. In reptiles the vomeronasal organ is commonly referred to as Jacobson's organ. In mammals, it is often associated with a special behavior called flehmen, characterized by uplifting of the lips. The organ is vestigial in humans: no associated neurons providing sensory input have been found in humans.[57]
141
+
142
+ Flies and butterflies have taste organs on their feet, allowing them to taste anything they land on. Catfish have taste organs across their entire bodies, and can taste anything they touch, including chemicals in the water.[58]
143
+
144
+ Cats have the ability to see in low light, which is due to muscles surrounding their irides, which contract and expand their pupils, as well as to the tapetum lucidum, a reflective membrane that optimizes the image.
145
+ Pit vipers, pythons and some boas have organs that allow them to detect infrared light, such that these snakes are able to sense the body heat of their prey. The common vampire bat may also have an infrared sensor on its nose.[59] It has been found that birds and some other animals are tetrachromats and have the ability to see in the ultraviolet down to 300 nanometers. Bees and dragonflies[60] are also able to see in the ultraviolet. Mantis shrimps can perceive both polarized light and multispectral images and have twelve distinct kinds of color receptors, unlike humans which have three kinds and most mammals which have two kinds.[61]
146
+
147
+ Cephalopods have the ability to change color using chromatophores in their skin. Researchers believe that opsins in the skin can sense different wavelengths of light and help the creatures choose a coloration that camouflages them, in addition to light input from the eyes.[62] Other researchers hypothesize that cephalopod eyes in species which only have a single photoreceptor protein may use chromatic aberration to turn monochromatic vision into color vision,[63] explaining pupils shaped like the letter U, the letter W, or a dumbbell, as well as explaining the need for colorful mating displays.[64] Some cephalopods can distinguish the polarization of light.
148
+
149
+ Many invertebrates have a statocyst, which is a sensor for acceleration and orientation that works very differently from the mammalian semicircular canals.
150
+
151
+ In addition, some animals have senses that humans do not, including the following:
152
+
153
+ Magnetoception (or magnetoreception) is the ability to detect the direction one is facing based on the Earth's magnetic field. Directional awareness is most commonly observed in birds, which rely on their magnetic sense to navigate during migration.[65][66][67][68] It has also been observed in insects such as bees. Cattle make use of magnetoception to align themselves in a north–south direction.[69] Magnetotactic bacteria build miniature magnets inside themselves and use them to determine their orientation relative to the Earth's magnetic field.[70][71] There has been some recent (tentative) research suggesting that cryptochrome in the human eye, which responds particularly well to blue light, can facilitate magnetoception in humans.[72]
154
+
155
+ Certain animals, including bats and cetaceans, have the ability to determine orientation to other objects through interpretation of reflected sound (like sonar). They most often use this to navigate through poor lighting conditions or to identify and track prey. It is currently uncertain whether this is simply an extremely developed post-sensory interpretation of auditory perceptions or whether it actually constitutes a separate sense. Resolution of the issue will require brain scans of animals while they actually perform echolocation, a task that has proven difficult in practice.
156
+
157
+ Blind people report they are able to navigate and in some cases identify an object by interpreting reflected sounds (especially their own footsteps), a phenomenon known as human echolocation.
158
+
159
+ Electroreception (or electroception) is the ability to detect electric fields. Several species of fish, sharks, and rays have the capacity to sense changes in electric fields in their immediate vicinity. For cartilaginous fish this occurs through a specialized organ called the Ampullae of Lorenzini. Some fish passively sense changing nearby electric fields; some generate their own weak electric fields, and sense the pattern of field potentials over their body surface; and some use these electric field generating and sensing capacities for social communication. The mechanisms by which electroceptive fish construct a spatial representation from very small differences in field potentials involve comparisons of spike latencies from different parts of the fish's body.
160
+
161
+ The only mammals known to demonstrate electroception are the cetaceans and the monotremes. Among these mammals, the platypus[73] has the most acute sense of electroception.
162
+
163
+ A dolphin can detect electric fields in water using electroreceptors in vibrissal crypts arrayed in pairs on its snout and which evolved from whisker motion sensors.[74] These electroreceptors can detect electric fields as weak as 4.6 microvolts per centimeter, such as those generated by contracting muscles and pumping gills of potential prey. This permits the dolphin to locate prey from the seafloor where sediment limits visibility and echolocation.
164
+
165
+ Spiders have been shown to detect electric fields to determine a suitable time to extend their webs for 'ballooning'.[75]
166
+
167
+ Body modification enthusiasts have experimented with magnetic implants to attempt to replicate this sense.[76] However, in general humans (and it is presumed other mammals) can detect electric fields only indirectly by detecting the effect they have on hairs. An electrically charged balloon, for instance, will exert a force on human arm hairs, which can be felt through tactition and identified as coming from a static charge (and not from wind or the like). This is not electroreception, as it is a post-sensory cognitive action.
168
+
169
+ Hygroreception is the ability to detect changes in the moisture content of the environment.[11][77]
170
+
171
+ The ability to sense infrared thermal radiation evolved independently in various families of snakes. Essentially, it allows these reptiles to "see" radiant heat at wavelengths between 5 and 30 μm to a degree of accuracy such that a blind rattlesnake can target vulnerable body parts of the prey at which it strikes.[78] It was previously thought that the organs evolved primarily as prey detectors, but recent evidence suggests they may also be used in thermoregulatory decision making.[79] The facial pit underwent parallel evolution in pitvipers and some boas and pythons, having evolved once in pitvipers and multiple times in boas and pythons.[80] The electrophysiology of the structure is similar between the two lineages, but they differ in gross structural anatomy. Most superficially, pitvipers possess one large pit organ on either side of the head, between the eye and the nostril (the loreal pit), while boas and pythons have three or more comparatively smaller pits lining the upper and sometimes the lower lip, in or between the scales. Those of the pitvipers are the more advanced, having a suspended sensory membrane as opposed to a simple pit structure. Within the family Viperidae, the pit organ is seen only in the subfamily Crotalinae: the pitvipers. The organ is used extensively to detect and target endothermic prey such as rodents and birds. According to Krochmal et al., pitvipers can use their pits for thermoregulatory decision-making, while true vipers (vipers that lack heat-sensing pits) cannot.
172
+
173
+ In spite of its detection of IR light, the pits' IR detection mechanism is not similar to photoreceptors – while photoreceptors detect light via photochemical reactions, the protein in the pits of snakes is in fact a temperature-sensitive ion channel. It senses infrared signals through a mechanism involving warming of the pit organ, rather than a chemical reaction to light.[81] This is consistent with the thin pit membrane, which allows incoming IR radiation to quickly and precisely warm a given ion channel and trigger a nerve impulse, as well as with the vascularization of the pit membrane, which rapidly cools the ion channel back to its original "resting" or "inactive" temperature.[81]
174
+
175
+ Pressure detection uses the organ of Weber, a system consisting of three appendages of vertebrae transferring changes in shape of the gas bladder to the middle ear. It can be used to regulate the buoyancy of the fish. Fish like the weatherfish and other loaches are also known to respond to low-pressure areas, but they lack a swim bladder.
176
+
177
+ Current detection is a detection system of water currents, consisting mostly of vortices, found in the lateral line of fish and aquatic forms of amphibians. The lateral line is also sensitive to low-frequency vibrations. The mechanoreceptors are hair cells, the same mechanoreceptors for vestibular sense and hearing. It is used primarily for navigation, hunting, and schooling. The receptors of the electrical sense are modified hair cells of the lateral line system.
178
+
179
+ Polarized light direction/detection is used by bees to orient themselves, especially on cloudy days. Cuttlefish, some beetles, and mantis shrimp can also perceive the polarization of light. Most sighted humans can in fact learn to roughly detect large areas of polarization by an effect called Haidinger's brush; however, this is considered an entoptic phenomenon rather than a separate sense.
180
+
181
+ Slit sensillae of spiders detect mechanical strain in the exoskeleton, providing information on force and vibrations.
182
+
183
+ By using a variety of sense receptors, plants sense light, temperature, humidity, chemical substances, chemical gradients, reorientation, magnetic fields, infections, tissue damage and mechanical pressure. The absence of a nervous system notwithstanding, plants interpret and respond to these stimuli by a variety of hormonal and cell-to-cell communication pathways that result in movement, morphological changes and physiological state alterations at the organism level, that is, result in plant behavior. Such physiological and cognitive functions are generally not believed to give rise to mental phenomena or qualia, however, as these are typically considered the product of nervous system activity. The emergence of mental phenomena from the activity of systems functionally or computationally analogous to that of nervous systems is, however, a hypothetical possibility explored by some schools of thought in the philosophy of mind field, such as functionalism and computationalism.
184
+
185
+ However, plants can perceive the world around them,[15] and might be able to emit airborne sounds similar to "screaming" when stressed. Those noises cannot be detected by human ears, but organisms with a hearing range that extends into ultrasonic frequencies, such as mice, bats, or perhaps other plants, could hear the plants' cries from as far as 15 feet (4.6 m) away.[82]
186
+
187
+ Machine perception is the capability of a computer system to interpret data in a manner that is similar to the way humans use their senses to relate to the world around them.[16][17][83] Computers take in and respond to their environment through attached hardware. Until recently, input was limited to a keyboard, joystick or a mouse, but advances in technology, both in hardware and software, have allowed computers to take in sensory input in a way similar to humans.[16][17]
188
+
189
+ In the time of William Shakespeare, there were commonly reckoned to be five wits or five senses.[85] At that time, the words "sense" and "wit" were synonyms,[85] so the senses were known as the five outward wits.[86][87] This traditional concept of five senses is common today.
190
+
191
+ The traditional five senses are enumerated as the "five material faculties" (pañcannaṃ indriyānaṃ avakanti) in Hindu literature. They appear in allegorical representation as early as in the Katha Upanishad (roughly 6th century BC), as five horses drawing the "chariot" of the body, guided by the mind as "chariot driver".
192
+
193
+ Depictions of the five traditional senses as allegory became a popular subject for seventeenth-century artists, especially among Dutch and Flemish Baroque painters. A typical example is Gérard de Lairesse's Allegory of the Five Senses (1668), in which each of the figures in the main group alludes to a sense: Sight is the reclining boy with a convex mirror, hearing is the cupid-like boy with a triangle, smell is represented by the girl with flowers, taste is represented by the woman with the fruit, and touch is represented by the woman holding the bird.
194
+
195
+ In Buddhist philosophy, Ayatana or "sense-base" includes the mind as a sense organ, in addition to the traditional five. This addition to the commonly acknowledged senses may arise from the psychological orientation involved in Buddhist thought and practice. The mind considered by itself is seen as the principal gateway to a different spectrum of phenomena that differ from the physical sense data. This way of viewing the human sense system indicates the importance of internal sources of sensation and perception that complement our experience of the external world.
en/5352.html.txt ADDED
@@ -0,0 +1,181 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+ Seoul (/soʊl/, like soul; Korean: 서울 [sʌ.ul] (listen); lit. 'Capital'), officially the Seoul Special City, is the capital[7] and largest metropolis of South Korea.[8] Seoul has a population of 9.7 million people, and forms the heart of the Seoul Capital Area with the surrounding Incheon metropolis and Gyeonggi province. Ranked as an alpha world city, Seoul was the world's 4th largest metropolitan economy with a GDP of US$635 billion[9] in 2014 after Tokyo, New York City and Los Angeles. International visitors generally reach Seoul via AREX from the Incheon International Airport, notable for having been rated the best airport for nine consecutive years (2005–2013) by the Airports Council International. In 2015, it was rated Asia's most livable city with the second highest quality of life globally by Arcadis, with the GDP per capita (PPP) in Seoul being around $40,000. In 2017, the cost of living in Seoul was ranked 6th globally.[10][11] In 2020, Seoul's real estate market was ranked 3rd in the world for the price of apartments in the downtown center.[12] Seoul was one of the host cities for the official tournament of the 2002 FIFA World Cup, which was co-hosted by South Korea and Japan.
4
+
5
+ With major technology hubs centered in Gangnam and Digital Media City,[13] the Seoul Capital Area is home to the headquarters of 15 Fortune Global 500 companies, including Samsung,[14] LG, and Hyundai. Ranked seventh in the Global Power City Index and Global Financial Centres Index, the metropolis exerts a major influence in global affairs as one of the five leading hosts of global conferences.[15] Seoul has hosted the 1986 Asian Games, 1988 Summer Olympics, 2002 FIFA World Cup, and more recently the 2010 G-20 Seoul summit.
6
+
7
+ Seoul was the capital of various Korean states, including Baekje, Joseon, the Korean Empire, Goryeo (as a secondary capital), and presently South Korea. Strategically located along the Han River, Seoul's history stretches back over two thousand years, when it was founded in 18 BC by the people of Baekje, one of the Three Kingdoms of Korea. The city was later designated the capital of Korea under the Joseon dynasty. Seoul is surrounded by a mountainous and hilly landscape, with Bukhan Mountain located on the northern edge of the city. As with its long history, the Seoul Capital Area contains five UNESCO World Heritage Sites: Changdeok Palace, Hwaseong Fortress, Jongmyo Shrine, Namhansanseong and the Royal Tombs of the Joseon Dynasty.[16] More recently, Seoul has been a major site of modern architectural construction – major modern landmarks include the N Seoul Tower, the 63 Building, the Lotte World Tower, the Dongdaemun Design Plaza, Lotte World, Trade Tower, COEX, and the IFC Seoul. Seoul was named the 2010 World Design Capital. As the birthplace of K-pop and the Korean Wave, Seoul received over 10 million international visitors in 2014,[17] making it the world's 9th most visited city and 4th largest earner in tourism.[18]
8
+
9
+ The city has been known in the past by the names Wiryeseong (Korean: 위례성; Hanja: 慰禮城, during the Baekje era), Hanyang (한양; 漢陽, during the Goryeo era), Hanseong (한성; 漢城, during the Joseon era), and Keijō (京城) or Gyeongseong (경성) during the period of annexation to Japan.[19]
10
+
11
+ During Japan's annexation of Korea, Hanseong (漢城) was renamed Keijō (京城) by the Imperial authorities to prevent confusion with the hanja '漢' (a transliteration of an ancient Korean word Han (한) meaning "great"), which also refers to Han people or the Han dynasty in Chinese and in Japanese is a term for "China".[20]
12
+
13
+ After World War II and Korea's liberation, the city took its present name, which originated from the Korean word meaning "capital city", which is believed to have descended from an ancient word, Seorabeol (Korean: 서라벌; Hanja: 徐羅伐), which originally referred to Gyeongju, the capital of Silla.[21] Ancient Gyeongju was also known in documents by the Chinese-style name Geumseong (金城, literally "Gold Castle or City" or "Metal Castle or City"), but it is unclear whether the native Korean-style name Seorabeol had the same meaning as Geumseong.
14
+
15
+ Unlike most place names in Korea, "Seoul" has no corresponding hanja (Chinese characters used in the Korean language). On January 18, 2005, the Seoul government changed its official name in Chinese characters from the historic Hancheng (simplified Chinese: 汉城; traditional Chinese: 漢城; pinyin: Hànchéng) to Shou'er (simplified Chinese: 首尔; traditional Chinese: 首爾; pinyin: Shǒu'ěr).[22][23][24]
16
+
17
+ Settlement of the Han River area, where present-day Seoul is located, began around 4000 BC.[25]
18
+
19
+ Seoul is first recorded as Wiryeseong, the capital of Baekje (founded in 18 BC), in the northeastern area of modern Seoul.[25] There are several city walls remaining in the area that date from this time. Pungnaptoseong, an earthen wall located in southeast Seoul, is widely believed to have been the main Wiryeseong site.[26] As the Three Kingdoms competed for this strategic region, control passed from Baekje to Goguryeo in the 5th century, and from Goguryeo to Silla in the 6th century.[27]
20
+
21
+ In the 11th century Goryeo, which succeeded Unified Silla, built a summer palace in Seoul, which was referred to as the "Southern Capital". It was only from this period that Seoul became a larger settlement.[25] When Joseon replaced Goryeo, the capital was moved to Seoul (also known as Hanyang or Hanseong), where it remained until the fall of the dynasty. The Gyeongbok Palace, built in the 14th century, served as the royal residence until 1592. The other large palace, Changdeokgung, constructed in 1405, served as the main royal palace from 1611 to 1872.[25] After Joseon changed its name to the Korean Empire in 1897, the name Hwangseong also came to designate Seoul.
22
+
23
+ Originally, the city was entirely surrounded by a massive circular stone wall to provide its citizens security from wild animals, thieves and attacks. The city has grown beyond those walls and although the wall no longer stands (except along Bugaksan Mountain (Korean: 북악산; Hanja: 北岳山), north of the downtown area[28]), the gates remain near the downtown district of Seoul, including most notably Sungnyemun (commonly known as Namdaemun) and Heunginjimun (commonly known as Dongdaemun).[29] During the Joseon dynasty, the gates were opened and closed each day, accompanied by the ringing of large bells at the Bosingak belfry.[30] In the late 19th century, after hundreds of years of isolation, Seoul opened its gates to foreigners and began to modernize. Seoul became the first city in East Asia to introduce electricity into the royal palace, a system built by the Edison Illuminating Company,[31] and a decade later Seoul also implemented electric street lights.[32]
24
+
25
+ Much of the development was due to trade with foreign countries like France and the United States. For example, the Seoul Electric Company, Seoul Electric Trolley Company, and Seoul Fresh Spring Water Company were all joint Korean–U.S. owned enterprises.[33] In 1904, an American by the name of Angus Hamilton visited the city and said, "The streets of Seoul are magnificent, spacious, clean, admirably made and well-drained. The narrow, dirty lanes have been widened, gutters have been covered, roadways broadened. Seoul is within measurable distance of becoming the highest, most interesting and cleanest city in the East."[34]
26
+
27
+ After the annexation treaty in 1910, Japan annexed Korea and renamed the city Gyeongseong ("Kyongsong" in Korean and "Keijo" in Japanese). Japanese technology was imported, the city walls were removed, and some of the gates were demolished. Roads were paved and Western-style buildings were constructed. The city was liberated by U.S. forces at the end of World War II.[25]
28
+
29
+ In 1945, the city was officially named Seoul, and was designated as a special city in 1949.[25]
30
+
31
+ During the Korean War, Seoul changed hands between the Soviet/Chinese-backed North Korean forces and the American-backed South Korean forces several times, leaving the city heavily damaged after the war. The capital was temporarily relocated to Busan.[25] One estimate of the extensive damage states that after the war, at least 191,000 buildings, 55,000 houses, and 1,000 factories lay in ruins. In addition, a flood of refugees had entered Seoul during the war, swelling the population of the city and its metropolitan area to an estimated 1.5 million by 1955.[35]
32
+
33
+ Following the war, Seoul began to focus on reconstruction and modernization. As South Korea's economy started to grow rapidly from the 1960s, urbanization also accelerated and workers began to move to Seoul and other larger cities.[35] From the 1970s, the size of Seoul administrative area greatly expanded as it annexed a number of towns and villages from several surrounding counties.[36]
34
+
35
+ Until 1972, Seoul was claimed by North Korea as its de jure capital, being specified as such in Article 103 of the 1948 North Korean constitution.[37]
36
+
37
+ According to 2012 census data, the population of the Seoul area makes up around 20% of the total population of South Korea.[38] Seoul has become the economic, political and cultural hub of the country,[25] with several Fortune Global 500 companies, including Samsung, SK Holdings, Hyundai, POSCO and LG Group, headquartered there.[39]
38
+
39
+ Seoul was the host city of the 1986 Asian Games and 1988 Summer Olympics as well as one of the venues of the 2002 FIFA World Cup.
40
+
41
+ Gyeongbokgung, the main royal palace during the Joseon Dynasty.
42
+
43
+ Changdeok Palace, one of the five royal palaces during the Joseon Dynasty.
44
+
45
+ Seoul is in the northwest of South Korea. Seoul proper comprises 605.25 km2 (233.69 sq mi),[2] with a radius of approximately 15 km (9 mi), roughly bisected into northern and southern halves by the Han River. The Han River and its surrounding area played an important role in Korean history. The Three Kingdoms of Korea strove to take control of this land, where the river was used as a trade route to China (via the Yellow Sea).[40] The river is no longer actively used for navigation, because its estuary is located at the borders of the two Koreas, with civilian entry barred. Historically, during the Joseon dynasty, the city was bounded by the Seoul Fortress Wall, which stretched between the four main mountains in central Seoul: Namsan, Naksan, Bukhansan and Inwangsan. The city is bordered by eight mountains, as well as the more level lands of the Han River plain and western areas. Due to its geography and to economic development policies, Seoul is a very polycentric city. The area that was the old capital in the Joseon dynasty, mostly comprising Jongno District and Jung District, constitutes the historical and political center of the city. However, the city's financial capital is widely considered to be Yeouido, while its economic capital is Gangnam District.
46
+
47
+ Seoul has a humid subtropical climate influenced by the monsoons (Köppen: Cwa). Being in extreme East Asia, the climate can also be described as humid continental, with great variation in precipitation throughout the year and warm to hot summers (Dwa, by the 0 °C isotherm).[41][42] The suburbs of Seoul are generally cooler than the center of Seoul because of the urban heat island effect.[43] Summers are generally hot and humid, with the East Asian monsoon taking place from June until September. August, the hottest month, has average high and low temperatures of 32.6 and 23.4 °C (91 and 74 °F), with higher temperatures possible. Winters are usually cold to freezing, with average January high and low temperatures of 1.5 and −5.9 °C (34.7 and 21.4 °F), and are generally much drier than summers, with an average of 24.9 days of snow annually. Sometimes, temperatures drop dramatically to below −10 °C (14 °F), and on some occasions as low as −15 °C (5 °F), in the midwinter period of January and February. Temperatures below −20 °C (−4 °F) have been recorded.
48
+
49
+ Air pollution is a major issue in Seoul.[52][53][54][55] According to the 2016 World Health Organization Global Urban Ambient Air Pollution Database,[56] the annual average PM2.5 concentration in 2014 was 24 micrograms per cubic metre (1.0×10−5 gr/cu ft), which is 2.4 times higher than that recommended by the WHO Air Quality Guidelines[57] for the annual mean PM2.5. The Seoul Metropolitan Government monitors and publicly shares real-time air quality data.[58]
50
+
51
+ Since the early 1960s, the Ministry of Environment has implemented a range of policies and air pollutant standards to improve and manage air quality for its people.[59] The "Special Act on the Improvement of Air Quality in the Seoul Metropolitan Area" was passed in December 2003. Its 1st Seoul Metropolitan Air Quality Improvement Plan (2005–2014) focused on improving the concentrations of PM10 and nitrogen dioxide by reducing emissions.[60] As a result, the annual average PM10 concentrations decreased from 70.0 μg/m3 in 2001 to 44.4 μg/m3 in 2011[61] and 46 μg/m3 in 2014.[56] As of 2014, the annual average PM10 concentration was still at least twice that recommended by the WHO Air Quality Guidelines.[57] The 2nd Seoul Metropolitan Air Quality Improvement Plan (2015–2024) added PM2.5 and ozone to its list of managed pollutants.[62]
52
+
53
+ Asian dust, emissions from Seoul and in general from the rest of South Korea, as well as emissions from China, all contribute to Seoul's air quality.[53][63] A partnership between researchers in South Korea and the United States is conducting an international air quality field study in Korea (KORUS-AQ) to determine how much each source contributes.[64]
54
+
55
+ Besides air quality, greenhouse gas emissions are a pressing issue in South Korea, since the country is among the top ten strongest emitters in the world. Seoul is the strongest hotspot of greenhouse gas emissions in the country, and according to satellite data, the persistent carbon dioxide anomaly over the city is one of the strongest in the world.[65]
56
+
57
+ Seoul is divided into 25 gu (Korean: 구; Hanja: 區) (districts).[66] The gu vary greatly in area (from 10 to 47 km2 or 3.9 to 18.1 sq mi) and population (from fewer than 140,000 to 630,000). Songpa has the most people, while Seocho has the largest area. The government of each gu handles many of the functions that are handled by city governments in other jurisdictions. Each gu is divided into "dong" (동; 洞), or neighbourhoods. Some gu have only a few dong, while others like Jongno District have a very large number of distinct neighbourhoods. The gu of Seoul consist of 423 administrative dong (행정동) in total.[66] Dong are also sub-divided into 13,787 tong (통; 統), which are further divided into 102,796 ban in total.
58
+
59
+ Seoul proper is noted for its population density, which is almost twice that of New York City and eight times that of Rome. Its metropolitan area was the most densely populated among OECD countries in Asia in 2012, and second worldwide after that of Paris.[68] As of 2015, the population was 9.86 million;[69] in 2012, it was 10.44 million.
60
+
61
+ As of the end of June 2011, 10.29 million Republic of Korea citizens lived in the city.[70] This was a 0.24% decrease from the end of 2010. The population of Seoul has been dropping since the early 1990s, the reasons being the high costs of living, urban sprawl into Gyeonggi region's satellite commuter cities, and an aging population.[69]
62
+
63
+ As of 2016, the number of foreigners living in Seoul was 404,037, 22.9% of the total foreign population in South Korea.[71] As of June 2011, 186,631 foreigners were Chinese citizens of Korean ancestry. This was an 8.84% increase from the end of 2010 and a 12.85% increase from June 2010. The next largest group was Chinese citizens who are not of Korean ethnicity; 29,901 of them resided in Seoul. The next highest group consisted of the 9,999 United States citizens who were not of Korean ancestry. The next highest group were Taiwanese citizens, at 8,717.[72]
64
+
65
+ The two major religions in Seoul are Christianity and Buddhism. Other religions include Muism (indigenous religion) and Confucianism. Seoul is home to one of the world's largest Christian congregations, Yoido Full Gospel Church, which has around 830,000 members.[73]
66
+
67
+ Seoul is home to the world's largest modern university founded by a Buddhist Order, Dongguk University.[74] Native Seoulites tend to speak the Gyeonggi dialect of Korean.
68
+
69
+ Seoul is the business and financial hub of South Korea. Although it accounts for only 0.6 percent of the nation's land area, 48.3 percent of South Korea's bank deposits were held in Seoul in 2003,[76] and the city generated 23 percent of the country's GDP overall in 2012.[77] In 2008 the Worldwide Centers of Commerce Index ranked Seoul No.9.[78] The Global Financial Centres Index in 2015 listed Seoul as the 6th financially most competitive city in the world.[79] The Economist Intelligence Unit ranked Seoul 15th in the list of "Overall 2025 City Competitiveness" regarding future competitiveness of cities.[80]
70
+
71
+ The traditional, labour-intensive manufacturing industries have been continuously replaced by information technology, electronics and assembly-type industries;[81][82] however, food and beverage production, as well as printing and publishing, remained among the core industries.[81] Major manufacturers are headquartered in the city, including Samsung, LG, Hyundai, Kia and SK. Notable food and beverage companies include Jinro, whose soju is the most sold alcoholic drink in the world, beating out Smirnoff vodka,[83] and the top-selling beer producers Hite (merged with Jinro) and Oriental Brewery.[84] The city also hosts food giants like Seoul Dairy Cooperative, Nongshim Group, Ottogi, CJ, Orion, Maeil Holdings, Namyang Dairy Products and Lotte.
72
+
73
+ Seoul hosts a large concentration of headquarters of international companies and banks, including 15 companies on the Fortune Global 500 list, such as Samsung, LG and Hyundai.[85] Most bank headquarters and the Korea Exchange are located in Yeouido (Yeoui Island),[81] which is often called "South Korea's Wall Street" and has served as the financial center of the city since the 1980s.[86] Yeouido is also home to the Seoul International Finance Center and SIFC Mall, as well as the Hanwha 63 Building, the head office of the Hanwha insurance company. Hanwha is one of the three largest South Korean insurance companies, along with Samsung Life and Kyobo Life.
74
+
75
+ The largest wholesale and retail market in South Korea, the Dongdaemun Market, is located in Seoul.[87] Myeongdong is a shopping and entertainment area in downtown Seoul with mid- to high-end stores, fashion boutiques and international brand outlets.[88] The nearby Namdaemun Market, named after the Namdaemun Gate, is the oldest continually running market in Seoul.[89]
76
+
77
+ Insadong is the cultural art market of Seoul, where traditional and modern Korean artworks, such as paintings, sculptures and calligraphy are sold.[90] Hwanghak-dong Flea Market and Janganpyeong Antique Market also offer antique products.[91][92] Some shops for local designers have opened in Samcheong-dong, where numerous small art galleries are located. While Itaewon had catered mainly to foreign tourists and American soldiers based in the city, Koreans now comprise the majority of visitors to the area.[93] The Gangnam district is one of the most affluent areas in Seoul[93] and is noted for the fashionable and upscale Apgujeong-dong and Cheongdam-dong areas and the COEX Mall. Wholesale markets include Noryangjin Fisheries Wholesale Market and Garak Market.
78
+
79
+ The Yongsan Electronics Market is the largest electronics market in Asia. Other electronics markets include the Techno Mart at Gangbyeon Station on metro line 2, the ENTER6 mall, and the Technomart mall complex at Shindorim Station.[94]
80
+
81
+ Times Square is one of Seoul's largest shopping malls featuring the CGV Starium, the world's largest permanent 35 mm cinema screen.[95]
82
+
83
+ The Korea World Trade Center Complex, which comprises the COEX Mall, a congress center, three InterContinental hotels, a business tower (ASEM Tower), a residence hotel, a casino and a city airport terminal, was established in 1988 in time for the Seoul Olympics. A second world trade center is being planned by the city at the Seoul Olympic Stadium complex as a MICE hub. The former KEPCO head office building was purchased by Hyundai Motor Group for US$9 billion to build the 115-storey Hyundai GBC and hotel complex by 2021, and the former 25-storey KEPCO building is now under demolition.
84
+
85
+ Seoul has been described as the world's "most wired city",[96] ranked first in technology readiness by PwC's Cities of Opportunity report.[97] Seoul has a very technologically advanced infrastructure.[98][99]
86
+
87
+ Seoul is among the world leaders in Internet connectivity, being the capital of South Korea, which has the world's highest fibre-optic broadband penetration and highest global average internet speeds of 26.1 Mbit/s.[100][101] Since 2015, Seoul has provided free Wi-Fi access in outdoor spaces through a 47.7 billion won ($44 million) project with Internet access at 10,430 parks, streets and other public places.[102] Internet speeds in some apartment buildings reach up to 52.5Gbit/s with assistance from Nokia, and though the average standard consists of 100 Mbit/s services, providers nationwide are rapidly rolling out 1Gbit/s connections at the equivalent of US$20 per month.[103] In addition, the city is served by the KTX high-speed rail and the Seoul Subway, which provides 4G LTE, WiFi and DMB inside subway cars. 5G will be introduced commercially in March 2019 in Seoul.
88
+
89
+ The traditional heart of Seoul is the old Joseon dynasty city, now the downtown area, where most palaces, government offices, corporate headquarters, hotels, and traditional markets are located. Cheonggyecheon, a stream that runs from west to east through the valley before emptying into the Han River, was for many years covered with concrete, but was recently restored by an urban revival project in 2005.[104] Jongno street, meaning "Bell Street", has been a principal street and one of the earliest commercial streets of the city,[105][106] on which one can find Bosingak, a pavilion containing a large bell. The bell signaled the different times of the day and controlled the four major gates to the city. North of downtown is Bukhan Mountain, and to the south is the smaller Namsan. Further south are the old suburbs, Yongsan District and Mapo District. Across the Han River are the newer and wealthier areas of Gangnam District, Seocho District and surrounding neighborhoods.
90
+
91
+ Seoul has many historical and cultural landmarks. At the Amsa-dong Prehistoric Settlement Site, Gangdong District, neolithic remains were accidentally exposed by a flood in 1925 and subsequently excavated.[107]
92
+
93
+ Urban and civil planning was a key concept when Seoul was first designed to serve as a capital in the late 14th century. The Joseon dynasty built the "Five Grand Palaces" in Seoul – Changdeokgung, Changgyeonggung, Deoksugung, Gyeongbokgung and Gyeonghuigung – all of which are located in Jongno and Jung Districts. Among them, Changdeokgung was added to the UNESCO World Heritage List in 1997 as an "outstanding example of Far Eastern palace architecture and garden design". The main palace, Gyeongbokgung, underwent a large-scale restoration project.[108] The palaces are considered exemplary architecture of the Joseon period. Beside the palaces, Unhyeongung is known for being the royal residence of Regent Daewongun, the father of Emperor Gojong at the end of the Joseon Dynasty.
94
+
95
+ Seoul has been surrounded by walls that were built to regulate visitors from other regions and protect the city in case of an invasion. Pungnap Toseong is a flat earthen wall built at the edge of the Han River, which is widely believed to be the site of Wiryeseong. Mongchon Toseong (Korean: 몽촌토성; Hanja: 蒙村土城) is another earthen wall built during the Baekje period that is now located inside the Olympic Park.[26] The Fortress Wall of Seoul was built early in the Joseon dynasty for protection of the city. After many centuries of destruction and rebuilding, about ⅔ of the wall remains, as well as six of the original eight gates. These gates include Sungnyemun and Heunginjimun, commonly known as Namdaemun (South Great Gate) and Dongdaemun (East Great Gate). Namdaemun was the oldest wooden gate until a 2008 arson attack, and was re-opened after complete restoration in 2013.[109] Located near the gates are the traditional markets and largest shopping center, Namdaemun Market and Dongdaemun Market.
96
+
97
+ There are also many buildings constructed with international styles in the late 19th and early 20th centuries. The Independence Gate was built in 1897 to inspire an independent spirit. Seoul Station was opened in 1900 as Gyeongseong Station.
98
+
99
+ Dongdaemun Design Plaza, designed by the Iraqi-British architect Zaha Hadid.
100
+
101
+ Royal Throne in Geunjeongjeon, inside Gyeongbok Palace.
102
+
103
+ Bukchon Hanok Village, a traditional Seoul village built during the Joseon era.
104
+
105
+ Various high-rise office buildings and residential buildings, like the Gangnam Finance Center, the Tower Palace, Namsan Seoul Tower, and the Lotte World Tower, dominate the city's skyline. The tallest building is the Lotte World Tower, reaching a height of 555 m. It opened to the public in April 2017 and is the fifth-tallest building in the world.
106
+
107
+ The World Trade Center Seoul, located in Gangnam District, hosts various expositions and conferences. Also in Gangnam District is the COEX Mall, a large indoor shopping and entertainment complex. Downstream from Gangnam District is Yeouido, an island that is home to the National Assembly, major broadcasting studios, and a number of large office buildings, as well as the Korea Finance Building and the Yoido Full Gospel Church. The Olympic Stadium, Olympic Park, and Lotte World are located in Songpa District, on the south side of the Han River, upstream from Gangnam District. Three new modern landmarks of Seoul are the Dongdaemun Design Plaza & Park, designed by Zaha Hadid, the new wave-shaped Seoul City Hall, by Yoo Kerl of iArc, and the Lotte World Tower, the fifth-tallest building in the world, designed by Kohn Pedersen Fox.
108
+
109
+ In 2010 Seoul was designated the World Design Capital for the year.[110]
110
+
111
+ Seoul is home to 115 museums,[111] including four national and nine official municipal museums. Among the city's national museums, the National Museum of Korea is the most representative museum not only of Seoul but of all of South Korea. Since its establishment in 1945, the museum has built a collection of 220,000 artifacts.[112] In October 2005, the museum moved to a new building in Yongsan Family Park.
112
+
113
+ The National Folk Museum is located on the grounds of the Gyeongbokgung Palace in the district of Jongno District and uses replicas of historical objects to illustrate the folk history of the Korean people.[113] The National Palace Museum of Korea is also located on the grounds of the Gyeongbokgung Palace. Finally, the Seoul branch of the National Museum of Modern and Contemporary Art, whose main museum is located in Gwacheon, opened in 2013, in Sogyeok-dong.
114
+
115
+ Bukchon Hanok Village and Namsangol Hanok Village are old residential districts consisting of hanok (traditional Korean houses), parks, and museums that allow visitors to experience traditional Korean culture.[114][115]
116
+
117
+ The War Memorial, one of nine municipal museums in Seoul, offers visitors an educational and emotional experience of various wars in which Korea was involved, including Korean War themes.[116][117] The Seodaemun Prison is a former prison built during the Japanese occupation, and is used as a historic museum.[118]
118
+
119
+ The Seoul Museum of Art and Ilmin Museum of Art have preserved the appearance of their old buildings, which are visually unique among the neighboring tall, modern buildings. The former is operated by the Seoul City Council and sits adjacent to Gyeonghuigung Palace, a Joseon dynasty royal palace. Leeum, Samsung Museum of Art, is widely regarded as one of Seoul's largest private museums. For many Korean film lovers from all over the world, the Korean Film Archive runs the Korean Film Museum and Cinematheque KOFA at its main center located in Digital Media City (DMC), Sangam-dong. The Tteok & Kitchen Utensil Museum and Kimchi Field Museum provide information regarding Korean culinary history.
120
+
121
+ There are also religious buildings that take important roles in Korean society and politics. The Wongudan altar was a sacrificial place where Korean rulers held heavenly rituals since the Three Kingdoms period. Since the Joseon dynasty adopted Confucianism as its national ideology in the 14th century, the state built many Confucian shrines. The descendants of the Joseon royal family still continue to hold ceremonies to commemorate ancestors at Jongmyo. It is the oldest royal Confucian shrine preserved and the ritual ceremonies continue a tradition established in the 14th century. Sajikdan, Munmyo and Dongmyo were built during the same period. Although Buddhism was suppressed by the Joseon state, it has continued its existence. Jogyesa is the headquarters of the Jogye Order of Korean Buddhism. Hwagyesa and Bongeunsa are also major Buddhist temples in Seoul.
122
+
123
+ Myeongdong Cathedral, a landmark of Myeongdong, Jung District, was established in 1883 and is the biggest Catholic church in Seoul. It is a symbol of Catholicism in Korea, and was also a focus for political dissent in the 1980s; in this way the Roman Catholic Church has a very strong influence in Korean society. Yakhyeon Catholic Church in Jungnim-dong, Jung District, was the first Catholic parish in Korea, and the first Gothic church ever built in Korea.
124
+
125
+ There are many Protestant churches in Seoul. The most numerous are Presbyterian, but there are also many Methodist and Baptist churches. Yoido Full Gospel Church is a Pentecostal church affiliated with the Assemblies of God on Yeouido in Seoul. With approximately 830,000 members (2007), it is the largest Pentecostal Christian congregation in the world, which has been recognized by the Guinness Book of World Records.
126
+
127
+ The St. Nicholas Cathedral, but sometimes called bald church, is the only Byzantine-style church in Seoul. It is located in Ahyeon-dong, Mapo District, and is cathedral of the Orthodox Metropolis of Korea. In 2015, it was designated as a Seoul Future Heritage.
128
+
129
+ In October 2012 KBS Hall in Seoul hosted major international music festivals – First ABU TV and Radio Song Festivals within frameworks of Asia-Pacific Broadcasting Union 49th General Assembly.[119][120]
130
+ Hi! Seoul Festival is a seasonal cultural festival held four times a year every spring, summer, autumn, and winter in Seoul, South Korea since 2003. It is based on the "Seoul Citizens' Day" held on every October since 1994 to commemorate the 600 years history of Seoul as the capital of the country. The festival is arranged under the Seoul Metropolitan Government. As of 2012[update], Seoul has hosted Ultra Music Festival Korea, an annual dance music festival that takes place on the 2nd weekend of June.[121]
131
+
132
+ Despite the city's population density, Seoul has a large quantity of parks. One of the most famous parks is Namsan Park, which offers recreational hiking and views of the downtown Seoul skyline. The N Seoul Tower is located at Namsan Park. Seoul Olympic Park, located in Songpa District and built to host the 1988 Summer Olympics is Seoul's largest park. Among the other largest parks in the city are Seoul Forest, Dream Forest, Children's Grand Park and Haneul Park. The Wongaksa Pagoda 10 tier pagoda is located In Tapgol Park, a small public park with an area of 19,599 m2 (210,962 sq ft). Areas around streams serve as public places for relaxation and recreation. Tancheon stream and the nearby area serve as a large park with paths for both walkers and cyclists.
133
+ Cheonggyecheon, a stream that runs nearly 6 km (4 mi) through downtown Seoul, is popular among both Seoul residents and tourists. In 2017 the Seoullo 7017 Skypark opened, spanning diagonally overtop Seoul Station.
134
+
135
+ There are also many parks along the Han River, such as Ichon Hangang Park, Yeouido Hangang Park, Mangwon Hangang Park, Nanji Hangang Park, Banpo Hangang Park, Ttukseom Hangang Park and Jamsil Hangang Park.
136
+ The Seoul National Capital Area also contains a green belt aimed at preventing the city from sprawling out into neighboring Gyeonggi Province. These areas are frequently sought after by people looking to escape from urban life on weekends and during vacations.
137
+ There are also various parks under construction or in project, such as the Gyeongui Line Forest Trail, Seoul Station 7017, Seosomun Memorial Park and Yongsan Park.
138
+
139
+ Seoul is also home to the world's largest indoor amusement park, Lotte World. Other recreation centers include the former Olympic and World Cup stadiums and the City Hall public lawn.
140
+
141
+ Seoul is home of the major South Korean networks KBS, SBS, and MBC. The city is also home to the major South Korean newspapers Chosun Ilbo, Donga Ilbo, Joongang Ilbo, and Hankook Ilbo.
142
+
143
+ Seoul is a major center for sports in South Korea. Seoul has the largest number of professional sports teams and facilities in South Korea.
144
+
145
+ In the history of South Korean major professional sports league championships, which include the K League, KBO League, KBL, V-League, Seoul had multiple championships in a season two times, 1990 K League Classi Lucky-Goldstar FC (currently FC Seoul) and KBO League LG Twins in 1990, K League Classic FC Seoul and KBO League Doosan Bears in 2016.[122]
146
+
147
+ Seoul hosted the 1986 Asian Games, also known as Asiad, 1988 Olympic Games, and Paralympic Games. It also served as one of the host cities of the 2002 FIFA World Cup. Seoul World Cup Stadium hosted the opening ceremony and first game of the tournament.
148
+
149
+ Taekwondo is South Korea's national sport and Seoul is the location of the Kukkiwon, the world headquarters of taekwondo, as well as the World Taekwondo Federation.
150
+
151
+ Seoul's most well-known football club is FC Seoul.
152
+
153
+ Seoul has a well developed transportation network. Its system dates back to the era of the Korean Empire, when the first streetcar lines were laid and a railroad linking Seoul and Incheon was completed.[123] Seoul's most important streetcar line ran along Jongno until it was replaced by Line 1 of the subway system in the early 1970s. Other notable streets in downtown Seoul include Euljiro, Teheranno, Sejongno, Chungmuro, Yulgongno, and Toegyero. There are nine major subway lines stretching for more than 250 km (155 mi), with one additional line planned. As of 2010[update], 25% of the population has a commute time of an hour or more.
154
+
155
+ Seoul's bus system is operated by the Seoul Metropolitan Government (S.M.G.), with four primary bus configurations available servicing most of the city. Seoul has many large intercity/express bus terminals. These buses connect Seoul with cities throughout South Korea. The Seoul Express Bus Terminal, Central City Terminal and Seoul Nambu Terminal are located in the district of Seocho District. In addition, East Seoul Bus Terminal in Gwangjin District and Sangbong Terminal in Jungnang District handles traffics mainly from Gangwon and Chungcheong provinces.
156
+
157
+ Seoul has a comprehensive urban railway network of 21 rapid transit, light metro and commuter lines that interconnects every district of the city and the surrounding areas of Incheon, Gyeonggi province, western Gangwon province, and northern Chungnam province. With more than 8 million passengers per day, the subway has one of the busiest subway systems in the world and the largest in the world, with a total track length of 940 km (580 mi). In addition, in order to cope with the various modes of transport, Seoul's metropolitan government employs several mathematicians to coordinate the subway, bus, and traffic schedules into one timetable. The various lines are run by Korail, Seoul Metro, NeoTrans Co. Ltd., AREX, and Seoul Metro Line 9 Corporation.
158
+
159
+ Seoul is connected to every major city in South Korea by rail. Seoul is also linked to most major South Korean cities by the KTX high-speed train, which has a normal operation speed of more than 300 km/h (186 mph). Another train that stops at all major stops are the Mugunghwa and Saemaeul trains. Major railroad stations include:
160
+
161
+ Two international airports, Incheon International and Gimpo International, serve Seoul.
162
+
163
+ Gimpo International Airport opened in 1939 as Japanese Imperial Army airfield, and opened for civil aircraft in 1957. Since opening of Incheon International, Gimpo International handles scheduled domestic flights along with selected short haul international shuttle flights to Tokyo Haneda, Osaka Kansai, Taipei Songshan, Shanghai Hongqiao, and Beijing Capital.
164
+
165
+ Incheon International Airport, opened in March 2001 in Yeongjong island, is now responsible for major international flights. Incheon International Airport is Asia's eighth busiest airport in terms of passengers, the world's fourth busiest airport by cargo traffic, and the world's eighth busiest airport in terms of international passengers in 2014. In 2016, 57,765,397 passengers used the airport. Incheon International Airport expanded its size by opening terminal 2 on January 18, 2018.
166
+
167
+ Incheon and Gimpo are linked to Seoul by expressway, and to each other by the AREX to Seoul Station. Intercity bus services are available to various destinations around the country.
168
+
169
+ Cycling is becoming increasingly popular in Seoul and in the entire country. Both banks of the Han River have cycling paths that run all the way across the city along the river. In addition, Seoul introduced in 2015 a bicycle-sharing system named Ddareungi (and named Seoul Bike in English).[124]
170
+
171
+ Seoul is home to the majority of South Korea's most prestigious universities, including Seoul National University, Yonsei University, Korea University.
172
+
173
+ Seoul ranked 10th on the QS Best Student Cities 2019.[125]
174
+
175
+ Compulsory education lasts from grade 1–9 (six years of elementary school and 3 years of middle school).[126] Students spend six years in elementary school, three years in middle school, and three years in high school. Secondary schools generally require students to wear uniforms. There is an exit exam for graduating from high school and many students proceeding to the university level are required to take the College Scholastic Ability Test that is held every November. Although there is a test for non-high school graduates, called school qualification exam, most Koreans take the test.
176
+
177
+ Seoul is home to various specialized schools, including three science high schools, and six foreign language High Schools. Seoul Metropolitan Office of Education comprises 235 College-Preparatory High Schools, 80 Vocational Schools, 377 Middle Schools, and 33 Special Education Schools as of 2009[update].
178
+
179
+ Seoul is a member of the Asian Network of Major Cities 21 and the C40 Cities Climate Leadership Group. In addition, Seoul hosts many embassies of countries it has diplomatic ties with.
180
+
181
+ Seoul has 23 sister cities:[127]
en/5353.html.txt ADDED
@@ -0,0 +1,181 @@
1
+
2
+
3
+ Seoul (/soʊl/, like soul; Korean: 서울 [sʌ.ul] (listen); lit. 'Capital'), officially the Seoul Special City, is the capital[7] and largest metropolis of South Korea.[8] Seoul has a population of 9.7 million people, and forms the heart of the Seoul Capital Area with the surrounding Incheon metropolis and Gyeonggi province. Ranked as an alpha world city, Seoul was the world's 4th largest metropolitan economy with a GDP of US$635 billion[9] in 2014 after Tokyo, New York City and Los Angeles. International visitors generally reach Seoul via AREX from the Incheon International Airport, notable for having been rated the best airport for nine consecutive years (2005–2013) by the Airports Council International. In 2015, it was rated Asia's most livable city with the second highest quality of life globally by Arcadis, with the GDP per capita (PPP) in Seoul being around $40,000. In 2017, the cost of living in Seoul was ranked 6th globally.[10][11] In 2020, Seoul's real estate market was ranked 3rd in the world for the price of apartments in the downtown center.[12] Seoul was one of the host cities for the official tournament of the 2002 FIFA World Cup, which was co-hosted by South Korea and Japan.
4
+
5
+ With major technology hubs centered in Gangnam and Digital Media City,[13] the Seoul Capital Area is home to the headquarters of 15 Fortune Global 500 companies, including Samsung,[14] LG, and Hyundai. Ranked seventh in the Global Power City Index and Global Financial Centres Index, the metropolis exerts a major influence in global affairs as one of the five leading hosts of global conferences.[15] Seoul has hosted the 1986 Asian Games, 1988 Summer Olympics, 2002 FIFA World Cup, and more recently the 2010 G-20 Seoul summit.
6
+
7
+ Seoul was the capital of various Korean states, including Baekje, Joseon, the Korean Empire, Goryeo (as a secondary capital), and presently South Korea. Strategically located along the Han River, Seoul's history stretches back over two thousand years, when it was founded in 18 BC by the people of Baekje, one of the Three Kingdoms of Korea. The city was later designated the capital of Korea under the Joseon dynasty. Seoul is surrounded by a mountainous and hilly landscape, with Bukhan Mountain located on the northern edge of the city. As with its long history, the Seoul Capital Area contains five UNESCO World Heritage Sites: Changdeok Palace, Hwaseong Fortress, Jongmyo Shrine, Namhansanseong and the Royal Tombs of the Joseon Dynasty.[16] More recently, Seoul has been a major site of modern architectural construction – major modern landmarks include the N Seoul Tower, the 63 Building, the Lotte World Tower, the Dongdaemun Design Plaza, Lotte World, Trade Tower, COEX, and the IFC Seoul. Seoul was named the 2010 World Design Capital. As the birthplace of K-pop and the Korean Wave, Seoul received over 10 million international visitors in 2014,[17] making it the world's 9th most visited city and 4th largest earner in tourism.[18]
8
+
9
+ The city has been known in the past by the names Wiryeseong (Korean: 위례성; Hanja: 慰禮城, during the Baekje era), Hanyang (한양; 漢陽, during the Goryeo era), Hanseong (한성; 漢城, during the Joseon era), and Keijō (京城) or Gyeongseong (경성) during the period of annexation to Japan.[19]
10
+
11
+ During Japan's annexation of Korea, Hanseong (漢城) was renamed Keijō (京城) by the Imperial authorities to prevent confusion with the hanja '漢' (a transliteration of an ancient Korean word Han (한) meaning "great"), which also refers to Han people or the Han dynasty in Chinese and in Japanese is a term for "China".[20]
12
+
13
+ After World War II and Korea's liberation, the city took its present name, which originated from the Korean word meaning "capital city", which is believed to have descended from an ancient word, Seorabeol (Korean: 서라벌; Hanja: 徐羅伐), which originally referred to Gyeongju, the capital of Silla.[21] Ancient Gyeongju was also known in documents by the Chinese-style name Geumseong (金城, literally "Gold Castle or City" or "Metal Castle or City"), but it is unclear whether the native Korean-style name Seorabeol had the same meaning as Geumseong.
14
+
15
+ Unlike most place names in Korea, "Seoul" has no corresponding hanja (Chinese characters used in the Korean language). On January 18, 2005, the Seoul government changed its official name in Chinese characters from the historic Hancheng (simplified Chinese: 汉城; traditional Chinese: 漢城; pinyin: Hànchéng) to Shou'er (simplified Chinese: 首尔; traditional Chinese: 首爾; pinyin: Shǒu'ěr).[22][23][24]
16
+
17
+ Settlement of the Han River area, where present-day Seoul is located, began around 4000 BC.[25]
18
+
19
+ Seoul is first recorded as Wiryeseong, the capital of Baekje (founded in 18 BC), in the northeastern area of modern Seoul.[25] Several city walls dating from this time remain in the area. Pungnaptoseong, an earthen wall located in southeast Seoul, is widely believed to have been the main Wiryeseong site.[26] As the Three Kingdoms competed for this strategic region, control passed from Baekje to Goguryeo in the 5th century, and from Goguryeo to Silla in the 6th century.[27]
20
+
21
+ In the 11th century Goryeo, which succeeded Unified Silla, built a summer palace in Seoul, which was referred to as the "Southern Capital". It was only from this period that Seoul became a larger settlement.[25] When Joseon replaced Goryeo, the capital was moved to Seoul (also known as Hanyang or Hanseong), where it remained until the fall of the dynasty. Gyeongbok Palace, built in the 14th century, served as the royal residence until 1592. The other large palace, Changdeokgung, constructed in 1405, served as the main royal palace from 1611 to 1872.[25] After Joseon was renamed the Korean Empire in 1897, Seoul was also designated Hwangseong.
22
+
23
+ Originally, the city was entirely surrounded by a massive circular stone wall to provide its citizens security from wild animals, thieves and attacks. The city has grown beyond those walls and although the wall no longer stands (except along Bugaksan Mountain (Korean: 북악산; Hanja: 北岳山), north of the downtown area[28]), the gates remain near the downtown district of Seoul, including most notably Sungnyemun (commonly known as Namdaemun) and Heunginjimun (commonly known as Dongdaemun).[29] During the Joseon dynasty, the gates were opened and closed each day, accompanied by the ringing of large bells at the Bosingak belfry.[30] In the late 19th century, after hundreds of years of isolation, Seoul opened its gates to foreigners and began to modernize. Seoul became the first city in East Asia to introduce electricity in the royal palace, built by the Edison Illuminating Company[31] and a decade later Seoul also implemented electrical street lights.[32]
24
+
25
+ Much of the development was due to trade with foreign countries like France and the United States. For example, the Seoul Electric Company, Seoul Electric Trolley Company, and Seoul Fresh Spring Water Company were all joint Korean–U.S. owned enterprises.[33] In 1904, an American by the name of Angus Hamilton visited the city and said, "The streets of Seoul are magnificent, spacious, clean, admirably made and well-drained. The narrow, dirty lanes have been widened, gutters have been covered, roadways broadened. Seoul is within measurable distance of becoming the highest, most interesting and cleanest city in the East."[34]
26
+
27
+ After the annexation treaty in 1910, Japan annexed Korea and renamed the city Gyeongseong ("Kyongsong" in Korean and "Keijo" in Japanese). Japanese technology was imported, the city walls were removed, and some of the gates were demolished. Roads were paved and Western-style buildings were constructed. The city was liberated by U.S. forces at the end of World War II.[25]
28
+
29
+ In 1945, the city was officially named Seoul, and was designated as a special city in 1949.[25]
30
+
31
+ During the Korean War, Seoul changed hands between the Soviet/Chinese-backed North Korean forces and the American-backed South Korean forces several times, leaving the city heavily damaged after the war. The capital was temporarily relocated to Busan.[25] One estimate of the extensive damage states that after the war, at least 191,000 buildings, 55,000 houses, and 1,000 factories lay in ruins. In addition, a flood of refugees had entered Seoul during the war, swelling the population of the city and its metropolitan area to an estimated 1.5 million by 1955.[35]
32
+
33
+ Following the war, Seoul began to focus on reconstruction and modernization. As South Korea's economy started to grow rapidly from the 1960s, urbanization also accelerated and workers began to move to Seoul and other large cities.[35] From the 1970s, the size of Seoul's administrative area greatly expanded as it annexed a number of towns and villages from several surrounding counties.[36]
34
+
35
+ Until 1972, Seoul was claimed by North Korea as its de jure capital, being specified as such in Article 103 of the 1948 North Korean constitution.[37]
36
+
37
+ According to 2012 census data, the population of the Seoul area makes up around 20% of the total population of South Korea.[38] Seoul has become the economic, political and cultural hub of the country,[25] with several Fortune Global 500 companies, including Samsung, SK Holdings, Hyundai, POSCO and LG Group, headquartered there.[39]
38
+
39
+ Seoul was the host city of the 1986 Asian Games and 1988 Summer Olympics as well as one of the venues of the 2002 FIFA World Cup.
40
+
41
+ Gyeongbokgung, the main royal palace during the Joseon dynasty.
42
+
43
+ Changdeok Palace, one of the five royal palaces during the Joseon dynasty.
44
+
45
+ Seoul is in the northwest of South Korea. Seoul proper comprises 605.25 km2 (233.69 sq mi),[2] with a radius of approximately 15 km (9 mi), roughly bisected into northern and southern halves by the Han River. The Han River and its surrounding area played an important role in Korean history. The Three Kingdoms of Korea strove to take control of this land, where the river was used as a trade route to China (via the Yellow Sea).[40] The river is no longer actively used for navigation, because its estuary is located at the border between the two Koreas, with civilian entry barred. Historically, during the Joseon dynasty, the city was bounded by the Seoul Fortress Wall, which stretched between the four main mountains in central Seoul: Namsan, Naksan, Bukhansan and Inwangsan. The city is bordered by eight mountains, as well as the more level lands of the Han River plain and western areas. Due to its geography and to economic development policies, Seoul is a very polycentric city. The area that was the old capital in the Joseon dynasty, mostly comprising Jongno District and Jung District, constitutes the historical and political center of the city. However, the city's financial capital is widely considered to be Yeouido, while its economic capital is Gangnam District.
46
+
47
+ Seoul has a humid subtropical climate influenced by the monsoons (Köppen: Cwa). Located in the far east of Asia, its climate can also be described as humid continental, with great variation in precipitation throughout the year and warm to hot summers (Dwa, by the 0 °C isotherm).[41][42] The suburbs of Seoul are generally cooler than the center of Seoul because of the urban heat island effect.[43] Summers are generally hot and humid, with the East Asian monsoon taking place from June until September. August, the hottest month, has average high and low temperatures of 32.6 and 23.4 °C (91 and 74 °F), with higher temperatures possible. Winters are usually cold to freezing, with average January high and low temperatures of 1.5 and −5.9 °C (34.7 and 21.4 °F), and are generally much drier than summers, with an average of 24.9 days of snow annually. Sometimes, temperatures drop dramatically to below −10 °C (14 °F), and on some occasions as low as −15 °C (5 °F) in the midwinter period of January and February. Temperatures below −20 °C (−4 °F) have been recorded.
48
+
49
+ Air pollution is a major issue in Seoul.[52][53][54][55] According to the 2016 World Health Organization Global Urban Ambient Air Pollution Database,[56] the annual average PM2.5 concentration in 2014 was 24 micrograms per cubic metre (1.0×10−5 gr/cu ft), 2.4 times the level recommended by the WHO Air Quality Guidelines[57] for annual mean PM2.5. The Seoul Metropolitan Government monitors and publicly shares real-time air quality data.[58]
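The 2.4× figure is straightforward arithmetic on the numbers above. A minimal sketch in Python, assuming the WHO annual-mean PM2.5 guideline value of 10 µg/m³ that the quoted ratio implies (the guideline value itself is not stated in the text):

    # Ratio of Seoul's 2014 annual average PM2.5 to the WHO guideline.
    seoul_pm25 = 24.0      # µg/m³, 2014 annual average (WHO database)
    who_guideline = 10.0   # µg/m³, assumed WHO annual-mean guideline
    ratio = seoul_pm25 / who_guideline
    print(f"Seoul's PM2.5 was {ratio:.1f}x the guideline")  # -> 2.4x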
50
+
51
+ Since the early 1960s, the Ministry of Environment has implemented a range of policies and air pollutant standards to improve and manage air quality for its people.[59] The "Special Act on the Improvement of Air Quality in the Seoul Metropolitan Area" was passed in December 2003. Its 1st Seoul Metropolitan Air Quality Improvement Plan (2005–2014) focused on improving the concentrations of PM10 and nitrogen dioxide by reducing emissions.[60] As a result, the annual average PM10 concentration decreased from 70.0 μg/m3 in 2001 to 44.4 μg/m3 in 2011[61] and 46 μg/m3 in 2014.[56] As of 2014, the annual average PM10 concentration was still at least twice the level recommended by the WHO Air Quality Guidelines.[57] The 2nd Seoul Metropolitan Air Quality Improvement Plan (2015–2024) added PM2.5 and ozone to its list of managed pollutants.[62]
52
+
53
+ Asian dust, emissions from Seoul and in general from the rest of South Korea, as well as emissions from China, all contribute to Seoul's air quality.[53][63] A partnership between researchers in South Korea and the United States is conducting an international air quality field study in Korea (KORUS-AQ) to determine how much each source contributes.[64]
54
+
55
+ Besides air quality, greenhouse gas emissions are a pressing issue in South Korea, since the country is among the ten largest emitters in the world. Seoul is the largest hotspot of greenhouse gas emissions in the country, and according to satellite data, the persistent carbon dioxide anomaly over the city is one of the strongest in the world.[65]
56
+
57
+ Seoul is divided into 25 gu (Korean: 구; Hanja: 區) (districts).[66] The gu vary greatly in area (from 10 to 47 km2 or 3.9 to 18.1 sq mi) and population (from fewer than 140,000 to 630,000). Songpa has the most people, while Seocho has the largest area. The government of each gu handles many of the functions that are handled by city governments in other jurisdictions. Each gu is divided into "dong" (동; 洞), or neighbourhoods. Some gu have only a few dong, while others like Jongno District have a very large number of distinct neighbourhoods. The gu of Seoul consist of 423 administrative dong (행정동) in total.[66] Dong are further subdivided into 13,787 tong (통; 統), which are in turn divided into 102,796 ban in total.
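The four administrative levels nest strictly (gu → dong → tong → ban), so the citywide totals above imply average subdivision counts at each level. A minimal sketch using only the figures quoted in this paragraph:

    # Average number of subdivisions per parent unit, from the article's totals.
    gu, dong, tong, ban = 25, 423, 13_787, 102_796
    print(f"dong per gu:   {dong / gu:.1f}")    # -> about 16.9
    print(f"tong per dong: {tong / dong:.1f}")  # -> about 32.6
    print(f"ban per tong:  {ban / tong:.1f}")   # -> about 7.5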
58
+
59
+ Seoul proper is noted for its population density, which is almost twice that of New York City and eight times that of Rome. Its metropolitan area was the most densely populated among OECD countries in Asia in 2012, and second worldwide after that of Paris.[68] As of 2015, the population was 9.86 million;[69] in 2012, it was 10.44 million.
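As a rough check on the density claim, Seoul proper's density can be computed from figures given elsewhere in the article (9.86 million residents over 605.25 km2):

    # Population density of Seoul proper, from the article's own figures.
    population = 9_860_000   # 2015 population
    area_km2 = 605.25        # area of Seoul proper
    density = population / area_km2
    print(f"{density:,.0f} people per km^2")  # -> about 16,291 per km^2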
60
+
61
+ As of the end of June 2011, 10.29 million Republic of Korea citizens lived in the city, a 0.24% decrease from the end of 2010.[70] The population of Seoul has been dropping since the early 1990s, owing to the high cost of living, urban sprawl into satellite commuter cities in the Gyeonggi region, and an aging population.[69]
62
+
63
+ As of 2016, the number of foreigners living in Seoul was 404,037, 22.9% of the total foreign population in South Korea.[71] As of June 2011, 186,631 foreigners were Chinese citizens of Korean ancestry. This was an 8.84% increase from the end of 2010 and a 12.85% increase from June 2010. The next largest group was Chinese citizens who are not of Korean ethnicity; 29,901 of them resided in Seoul. The next highest group consisted of the 9,999 United States citizens who were not of Korean ancestry. The next highest group was Taiwanese citizens, at 8,717.[72]
64
+
65
+ The two major religions in Seoul are Christianity and Buddhism. Other religions include Muism (indigenous religion) and Confucianism. Seoul is home to one of the world's largest Christian congregations, Yoido Full Gospel Church, which has around 830,000 members.[73]
66
+
67
+ Seoul is home to the world's largest modern university founded by a Buddhist Order, Dongguk University.[74] Native Seoulites tend to speak the Gyeonggi dialect of Korean.[citation needed]
68
+
69
+ Seoul is the business and financial hub of South Korea. Although it accounts for only 0.6 percent of the nation's land area, 48.3 percent of South Korea's bank deposits were held in Seoul in 2003,[76] and the city generated 23 percent of the country's GDP overall in 2012.[77] In 2008 the Worldwide Centers of Commerce Index ranked Seoul No.9.[78] The Global Financial Centres Index in 2015 listed Seoul as the 6th financially most competitive city in the world.[79] The Economist Intelligence Unit ranked Seoul 15th in the list of "Overall 2025 City Competitiveness" regarding future competitiveness of cities.[80]
70
+
71
+ The traditional, labour-intensive manufacturing industries have been continuously replaced by information technology, electronics and assembly-type industries;[81][82] however, food and beverage production, as well as printing and publishing, remained among the core industries.[81] Major manufacturers are headquartered in the city, including Samsung, LG, Hyundai, Kia and SK. Notable food and beverage companies include Jinro, whose soju is the most sold alcoholic drink in the world, beating out Smirnoff vodka,[83] and the top-selling beer producers Hite (merged with Jinro) and Oriental Brewery.[84] The city also hosts food giants like Seoul Dairy Cooperative, Nongshim Group, Ottogi, CJ, Orion, Maeil Holdings, Namyang Dairy Products and Lotte.
72
+
73
+ Seoul hosts a large concentration of headquarters of international companies and banks, including 15 companies on the Fortune Global 500 list, such as Samsung, LG and Hyundai.[85] Most bank headquarters and the Korea Exchange are located in Yeouido (Yeoui Island),[81] which is often called "South Korea's Wall Street" and has served as the financial center of the city since the 1980s.[86] Yeouido is also home to the Seoul International Finance Center and SIFC Mall, as well as the Hanhwa 63 Building, the head office of the Hanhwa insurance company. Hanhwa is one of the three largest South Korean insurance companies, along with Samsung Life and Kyobo Life.
74
+
75
+ The largest wholesale and retail market in South Korea, the Dongdaemun Market, is located in Seoul.[87] Myeongdong is a shopping and entertainment area in downtown Seoul with mid- to high-end stores, fashion boutiques and international brand outlets.[88] The nearby Namdaemun Market, named after the Namdaemun Gate, is the oldest continually running market in Seoul.[89]
76
+
77
+ Insadong is the cultural art market of Seoul, where traditional and modern Korean artworks, such as paintings, sculptures and calligraphy are sold.[90] Hwanghak-dong Flea Market and Janganpyeong Antique Market also offer antique products.[91][92] Some shops for local designers have opened in Samcheong-dong, where numerous small art galleries are located. While Itaewon had catered mainly to foreign tourists and American soldiers based in the city, Koreans now comprise the majority of visitors to the area.[93] The Gangnam district is one of the most affluent areas in Seoul[93] and is noted for the fashionable and upscale Apgujeong-dong and Cheongdam-dong areas and the COEX Mall. Wholesale markets include Noryangjin Fisheries Wholesale Market and Garak Market.
78
+
79
+ The Yongsan Electronics Market is the largest electronics market in Asia. Other electronics markets include the Techno Mart complex at Gangbyeon Station on metro line 2, the ENTER6 mall, and the Technomart mall complex at Shindorim Station.[94]
80
+
81
+ Times Square is one of Seoul's largest shopping malls featuring the CGV Starium, the world's largest permanent 35 mm cinema screen.[95]
82
+
83
+ The Korea World Trade Center Complex, which comprises the COEX mall, a congress center, three InterContinental hotels, a business tower (ASEM Tower), a residence hotel, a casino and a city airport terminal, was established in 1988 in time for the Seoul Olympics. A second world trade center is being planned by the city at the Seoul Olympic Stadium complex as a MICE hub. The former KEPCO head office building was purchased by Hyundai Motor Group for US$9 billion to build the 115-storey Hyundai GBC and hotel complex by 2021, and the 25-storey former KEPCO building is now under demolition.
84
+
85
+ Seoul has been described as the world's "most wired city",[96] ranked first in technology readiness by PwC's Cities of Opportunity report.[97] Seoul has a very technologically advanced infrastructure.[98][99]
86
+
87
+ Seoul is among the world leaders in Internet connectivity, being the capital of South Korea, which has the world's highest fibre-optic broadband penetration and the highest global average internet speeds, at 26.1 Mbit/s.[100][101] Since 2015, Seoul has provided free Wi-Fi access in outdoor spaces through a 47.7 billion won ($44 million) project with Internet access at 10,430 parks, streets and other public places.[102] Internet speeds in some apartment buildings reach up to 52.5 Gbit/s with assistance from Nokia, and though the average standard consists of 100 Mbit/s services, providers nationwide are rapidly rolling out 1 Gbit/s connections at the equivalent of US$20 per month.[103] In addition, the city is served by the KTX high-speed rail and the Seoul Subway, which provides 4G LTE, WiFi and DMB inside subway cars. 5G will be introduced commercially in March 2019 in Seoul.
88
+
89
+ The traditional heart of Seoul is the old Joseon dynasty city, now the downtown area, where most palaces, government offices, corporate headquarters, hotels, and traditional markets are located. Cheonggyecheon, a stream that runs from west to east through the valley before emptying into the Han River, was for many years covered with concrete, but was restored by an urban revival project in 2005.[104] Jongno street, meaning "Bell Street", has been a principal street and one of the earliest commercial streets of the city,[105][106] on which one can find Bosingak, a pavilion containing a large bell. The bell signaled the different times of the day and controlled the four major gates to the city. North of downtown is Bukhan Mountain, and to the south is the smaller Namsan. Further south are the old suburbs, Yongsan District and Mapo District. Across the Han River are the newer and wealthier areas of Gangnam District, Seocho District and surrounding neighborhoods.
90
+
91
+ Seoul has many historical and cultural landmarks. At the Amsa-dong Prehistoric Settlement Site in Gangdong District, neolithic remains were accidentally exposed by a flood in 1925 and subsequently excavated.[107]
92
+
93
+ Urban and civil planning was a key concept when Seoul was first designed to serve as a capital in the late 14th century. The Joseon dynasty built the "Five Grand Palaces" in Seoul – Changdeokgung, Changgyeonggung, Deoksugung, Gyeongbokgung and Gyeonghuigung – all of which are located in Jongno and Jung Districts. Among them, Changdeokgung was added to the UNESCO World Heritage List in 1997 as an "outstanding example of Far Eastern palace architecture and garden design". The main palace, Gyeongbokgung, underwent a large-scale restoration project.[108] The palaces are considered exemplary architecture of the Joseon period. Besides the palaces, Unhyeongung is known for being the royal residence of Regent Daewongun, the father of Emperor Gojong at the end of the Joseon dynasty.
94
+
95
+ Seoul has been surrounded by walls that were built to regulate visitors from other regions and protect the city in case of an invasion. Pungnap Toseong is a flat earthen wall built at the edge of the Han River, which is widely believed to be the site of Wiryeseong. Mongchon Toseong (Korean: 몽촌토성; Hanja: 蒙村土城) is another earthen wall built during the Baekje period that is now located inside the Olympic Park.[26] The Fortress Wall of Seoul was built early in the Joseon dynasty for protection of the city. After many centuries of destruction and rebuilding, about ⅔ of the wall remains, as well as six of the original eight gates. These gates include Sungnyemun and Heunginjimun, commonly known as Namdaemun (South Great Gate) and Dongdaemun (East Great Gate). Namdaemun was the oldest wooden gate until a 2008 arson attack, and was re-opened after complete restoration in 2013.[109] Located near the gates are the traditional markets and largest shopping center, Namdaemun Market and Dongdaemun Market.
96
+
97
+ There are also many buildings constructed with international styles in the late 19th and early 20th centuries. The Independence Gate was built in 1897 to inspire an independent spirit. Seoul Station was opened in 1900 as Gyeongseong Station.
98
+
99
+ Dongdaemun Design Plaza, designed by the Iraqi-British architect Zaha Hadid.
100
+
101
+ Royal Throne in Geunjeongjeon, inside Gyeongbok Palace.
102
+
103
+ Bukchon Hanok Village, a traditional Seoul village built during the Joseon era.
104
+
105
+ Various high-rise office and residential buildings, like the Gangnam Finance Center, the Tower Palace, Namsan Seoul Tower, and the Lotte World Tower, dominate the city's skyline. The tallest is the Lotte World Tower, at a height of 555 m; it opened to the public in April 2017 and is the fifth-tallest building in the world.
106
+
107
+ The World Trade Center Seoul, located in Gangnam District, hosts various expositions and conferences. Also in Gangnam District is the COEX Mall, a large indoor shopping and entertainment complex. Downstream from Gangnam District is Yeouido, an island that is home to the National Assembly, major broadcasting studios, and a number of large office buildings, as well as the Korea Finance Building and the Yoido Full Gospel Church. The Olympic Stadium, Olympic Park, and Lotte World are located in Songpa District, on the south side of the Han River, upstream from Gangnam District. Three new modern landmarks of Seoul are the Dongdaemun Design Plaza & Park, designed by Zaha Hadid, the new wave-shaped Seoul City Hall, by Yoo Kerl of iArc, and the Lotte World Tower, the fifth-tallest building in the world, designed by Kohn Pedersen Fox.
108
+
109
+ In 2010 Seoul was designated the World Design Capital for the year.[110]
110
+
111
+ Seoul is home to 115 museums,[111] including four national and nine official municipal museums. Among the city's national museums, the National Museum of Korea is the most representative, not only of Seoul but of all of South Korea. Since its establishment in 1945, the museum has built a collection of 220,000 artifacts.[112] In October 2005, the museum moved to a new building in Yongsan Family Park.
112
+
113
+ The National Folk Museum is located on the grounds of Gyeongbokgung Palace in Jongno District and uses replicas of historical objects to illustrate the folk history of the Korean people.[113] The National Palace Museum of Korea is also located on the grounds of Gyeongbokgung Palace. Finally, the Seoul branch of the National Museum of Modern and Contemporary Art, whose main museum is located in Gwacheon, opened in 2013 in Sogyeok-dong.
114
+
115
+ Bukchon Hanok Village and Namsangol Hanok Village are old residential districts consisting of hanok, Korean traditional houses, as well as parks and museums that allow visitors to experience traditional Korean culture.[114][115]
116
+
117
+ The War Memorial, one of nine municipal museums in Seoul, offers visitors an educational and emotional experience of the various wars in which Korea was involved, including Korean War themes.[116][117] Seodaemun Prison, a former prison built during the Japanese occupation, is now used as a historical museum.[118]
118
+
119
+ The Seoul Museum of Art and the Ilmin Museum of Art have preserved the appearance of their old buildings, which stand out visually from the neighboring tall, modern buildings. The former is operated by the Seoul City Council and sits adjacent to Gyeonghuigung Palace, a Joseon dynasty royal palace. Leeum, Samsung Museum of Art, is widely regarded as one of Seoul's largest private museums. For Korean film lovers from all over the world, the Korean Film Archive runs the Korean Film Museum and Cinematheque KOFA at its main center in Digital Media City (DMC), Sangam-dong. The Tteok & Kitchen Utensil Museum and the Kimchi Field Museum provide information about Korean culinary history.
120
+
121
+ There are also religious buildings that play important roles in Korean society and politics. The Wongudan altar was a sacrificial place where Korean rulers held heavenly rituals from the Three Kingdoms period onward. Since the Joseon dynasty adopted Confucianism as its national ideology in the 14th century, the state built many Confucian shrines. The descendants of the Joseon royal family still hold ceremonies at Jongmyo to commemorate their ancestors. It is the oldest royal Confucian shrine preserved, and its ritual ceremonies continue a tradition established in the 14th century. Sajikdan, Munmyo and Dongmyo were built during the same period. Although Buddhism was suppressed by the Joseon state, it has endured. Jogyesa is the headquarters of the Jogye Order of Korean Buddhism. Hwagyesa and Bongeunsa are also major Buddhist temples in Seoul.
122
+
123
+ Myeongdong Cathedral, a landmark of Myeongdong in Jung District, is the biggest Catholic church in Seoul and was established in 1883. It is a symbol of Catholicism in Korea, and it was also a focus for political dissent in the 1980s; the Roman Catholic Church retains a strong influence in Korean society. Yakhyeon Catholic Church in Jungnim-dong, Jung District, was the first Catholic parish in Korea and the first Gothic church built in Korea.
124
+
125
+ There are many Protestant churches in Seoul. The most numerous are Presbyterian, but there are also many Methodist and Baptist churches. Yoido Full Gospel Church is a Pentecostal church affiliated with the Assemblies of God on Yeouido in Seoul. With approximately 830,000 members (2007), it is the largest Pentecostal Christian congregation in the world, a status recognized by the Guinness Book of World Records.[citation needed]
126
+
127
+ St. Nicholas Cathedral, sometimes called the "bald church", is the only Byzantine-style church in Seoul. It is located in Ahyeon-dong, Mapo District, and is the cathedral of the Orthodox Metropolis of Korea. In 2015, it was designated a Seoul Future Heritage.
128
+
129
+ In October 2012, KBS Hall in Seoul hosted major international music festivals: the first ABU TV Song Festival and ABU Radio Song Festival, held within the framework of the Asia-Pacific Broadcasting Union's 49th General Assembly.[119][120]
130
+ The Hi! Seoul Festival is a seasonal cultural festival held four times a year, in spring, summer, autumn and winter, in Seoul since 2003. It is based on "Seoul Citizens' Day", held every October since 1994 to commemorate Seoul's 600-year history as the capital of the country. The festival is arranged by the Seoul Metropolitan Government. As of 2012[update], Seoul has also hosted Ultra Music Festival Korea, an annual dance music festival that takes place on the second weekend of June.[121]
131
+
132
+ Despite the city's population density, Seoul has a large number of parks. One of the most famous is Namsan Park, which offers recreational hiking and views of the downtown Seoul skyline. The N Seoul Tower is located in Namsan Park. Seoul Olympic Park, located in Songpa District and built to host the 1988 Summer Olympics, is Seoul's largest park. Other large parks in the city include Seoul Forest, Dream Forest, Children's Grand Park and Haneul Park. The ten-tier Wongaksa Pagoda stands in Tapgol Park, a small public park with an area of 19,599 m2 (210,962 sq ft). Areas around streams serve as public places for relaxation and recreation. Tancheon stream and the nearby area serve as a large park with paths for both walkers and cyclists.
133
+ Cheonggyecheon, a stream that runs nearly 6 km (4 mi) through downtown Seoul, is popular among both Seoul residents and tourists. In 2017 the Seoullo 7017 Skypark opened, spanning diagonally overtop Seoul Station.
134
+
135
+ There are also many parks along the Han River, such as Ichon Hangang Park, Yeouido Hangang Park, Mangwon Hangang Park, Nanji Hangang Park, Banpo Hangang Park, Ttukseom Hangang Park and Jamsil Hangang Park.
136
+ The Seoul National Capital Area also contains a green belt aimed at preventing the city from sprawling out into neighboring Gyeonggi Province. These areas are frequently sought after by people looking to escape from urban life on weekends and during vacations.
137
+ Various other parks are under construction or planned, such as the Gyeongui Line Forest Trail, Seoul Station 7017, Seosomun Memorial Park and Yongsan Park.
138
+
139
+ Seoul is also home to the world's largest indoor amusement park, Lotte World. Other recreation centers include the former Olympic and World Cup stadiums and the City Hall public lawn.
140
+
141
+ Seoul is home to the major South Korean networks KBS, SBS, and MBC. The city is also home to the major South Korean newspapers Chosun Ilbo, Donga Ilbo, Joongang Ilbo, and Hankook Ilbo.
142
+
143
+ Seoul is a major center for sports in South Korea, with the largest number of professional sports teams and facilities in the country.
144
+
145
+ In the history of South Korean major professional sports league championships, which include the K League, KBO League, KBL and V-League, Seoul has twice had multiple championships in a single season: in 1990, when the K League's Lucky-Goldstar FC (currently FC Seoul) and the KBO League's LG Twins both won titles, and in 2016, when the K League Classic's FC Seoul and the KBO League's Doosan Bears did so.[122]
146
+
147
+ Seoul hosted the 1986 Asian Games, also known as the Asiad, as well as the 1988 Olympic Games and Paralympic Games. It also served as one of the host cities of the 2002 FIFA World Cup; Seoul World Cup Stadium hosted the opening ceremony and the first game of the tournament.
148
+
149
+ Taekwondo is South Korea's national sport and Seoul is the location of the Kukkiwon, the world headquarters of taekwondo, as well as the World Taekwondo Federation.
150
+
151
+ Seoul's most well-known football club is FC Seoul.
152
+
153
+ Seoul has a well-developed transportation network. Its system dates back to the era of the Korean Empire, when the first streetcar lines were laid and a railroad linking Seoul and Incheon was completed.[123] Seoul's most important streetcar line ran along Jongno until it was replaced by Line 1 of the subway system in the early 1970s. Other notable streets in downtown Seoul include Euljiro, Teheranno, Sejongno, Chungmuro, Yulgongno, and Toegyero. There are nine major subway lines stretching for more than 250 km (155 mi), with one additional line planned. As of 2010[update], 25% of the population had a commute time of an hour or more.
154
+
155
+ Seoul's bus system is operated by the Seoul Metropolitan Government (S.M.G.), with four primary bus configurations servicing most of the city. Seoul has many large intercity/express bus terminals, whose buses connect Seoul with cities throughout South Korea. The Seoul Express Bus Terminal, Central City Terminal and Seoul Nambu Terminal are located in Seocho District. In addition, East Seoul Bus Terminal in Gwangjin District and Sangbong Terminal in Jungnang District handle traffic mainly from Gangwon and Chungcheong provinces.
156
+
157
+ Seoul has a comprehensive urban railway network of 21 rapid transit, light metro and commuter lines that interconnects every district of the city and the surrounding areas of Incheon, Gyeonggi province, western Gangwon province, and northern Chungnam province. With more than 8 million passengers per day, Seoul has one of the busiest subway systems in the world, and the largest by total track length, at 940 km (580 mi). In addition, in order to coordinate the various modes of transport, Seoul's metropolitan government employs several mathematicians to integrate the subway, bus, and traffic schedules into one timetable. The various lines are run by Korail, Seoul Metro, NeoTrans Co. Ltd., AREX, and Seoul Metro Line 9 Corporation.
158
+
159
+ Seoul is connected to every major city in South Korea by rail. It is linked to most major South Korean cities by the KTX high-speed train, which has a normal operating speed of more than 300 km/h (186 mph). The Mugunghwa and Saemaeul trains, which stop at all major stations, also serve Seoul. Major railroad stations include:
160
+
161
+ Two international airports, Incheon International and Gimpo International, serve Seoul.
162
+
163
+ Gimpo International Airport opened in 1939 as a Japanese Imperial Army airfield, and opened for civil aircraft in 1957. Since the opening of Incheon International, Gimpo International has handled scheduled domestic flights along with selected short-haul international shuttle flights to Tokyo Haneda, Osaka Kansai, Taipei Songshan, Shanghai Hongqiao, and Beijing Capital.
164
+
165
+ Incheon International Airport, opened in March 2001 on Yeongjong Island, is now responsible for major international flights. In 2014, Incheon International Airport was Asia's eighth-busiest airport in terms of passengers, the world's fourth-busiest airport by cargo traffic, and the world's eighth-busiest airport in terms of international passengers. In 2016, 57,765,397 passengers used the airport. Incheon International Airport expanded with the opening of Terminal 2 on January 18, 2018.
166
+
167
+ Incheon and Gimpo are linked to Seoul by expressway, and to each other by the AREX to Seoul Station. Intercity bus services are available to various destinations around the country.
168
+
169
+ Cycling is becoming increasingly popular in Seoul and across the country. Both banks of the Han River have cycling paths that run all the way across the city along the river. In addition, Seoul introduced a bicycle-sharing system named Ddareungi (Seoul Bike in English) in 2015.[124]
170
+
171
+ Seoul is home to the majority of South Korea's most prestigious universities, including Seoul National University, Yonsei University, and Korea University.
172
+
173
+ Seoul ranked 10th on the QS Best Student Cities 2019.[125]
174
+
175
+ Compulsory education lasts from grades 1–9 (six years of elementary school and three years of middle school).[126] Students spend six years in elementary school, three years in middle school, and three years in high school. Secondary schools generally require students to wear uniforms. There is an exit exam for graduating from high school, and many students proceeding to the university level are required to take the College Scholastic Ability Test, which is held every November. Although there is a test for non-high-school graduates, called the school qualification exam, most Koreans take the College Scholastic Ability Test.
176
+
177
+ Seoul is home to various specialized schools, including three science high schools and six foreign-language high schools. The Seoul Metropolitan Office of Education comprised 235 college-preparatory high schools, 80 vocational schools, 377 middle schools, and 33 special-education schools as of 2009[update].
178
+
179
+ Seoul is a member of the Asian Network of Major Cities 21 and the C40 Cities Climate Leadership Group. In addition, Seoul hosts many embassies of countries it has diplomatic ties with.
180
+
181
+ Seoul has 23 sister cities:[127]
en/5354.html.txt ADDED
@@ -0,0 +1,159 @@
1
+
2
+
3
+ The separation of powers is a model for the governance of a state. Under this model, a state's government is divided into branches, each with separate, independent powers and responsibilities, so that the powers of one branch are not in conflict with those of the other branches. The typical division is into three branches: a legislature, an executive, and a judiciary, which is the trias politica model. It can be contrasted with the fusion of powers in parliamentary and semi-presidential systems, where the executive and legislative branches overlap.
4
+
5
+ Separation of powers, therefore, refers to the division of responsibilities into distinct branches of government by limiting any one branch from exercising the core functions of another. The intent of separation of powers is to prevent the concentration of power by providing for checks and balances.
6
+
7
+ The separation of powers model is often imprecisely and metonymically used interchangeably with the trias politica principle. While the trias politica model is a common type of separation, there are governments that have greater or fewer than three branches, as mentioned later in the article.
8
+
9
+ Aristotle first mentioned the idea of a "mixed government" or hybrid government in his work Politics, where he drew upon many of the constitutional forms in the city-states of Ancient Greece. In the Roman Republic, the Roman Senate, Consuls and the Assemblies showed an example of a mixed government according to Polybius (Histories, Book 6, 11–13).
10
+
11
+ John Calvin (1509–1564) favoured a system of government that divided political power between democracy and aristocracy (mixed government). Calvin appreciated the advantages of democracy, stating: "It is an invaluable gift if God allows a people to elect its own government and magistrates."[1] In order to reduce the danger of misuse of political power, Calvin suggested setting up several political institutions that should complement and control each other in a system of checks and balances.[2]
12
+
13
+ In this way, Calvin and his followers resisted political absolutism and furthered the growth of democracy. Calvin aimed to protect the rights and the well-being of ordinary people.[3][need quotation to verify] In 1620 a group of English separatist Congregationalists and Anglicans (later known as the Pilgrim Fathers) founded Plymouth Colony in North America. Enjoying self-rule, they established a bipartite democratic system of government. The "freemen" elected the General Court, which functioned as legislature and judiciary and which in turn elected a governor, who together with his seven "assistants" served in the functional role of providing executive power.[4] Massachusetts Bay Colony (founded 1628), Rhode Island (1636), Connecticut (1636), New Jersey, and Pennsylvania had similar constitutions – they all separated political powers. (Except for Plymouth Colony and Massachusetts Bay Colony, these English outposts added religious freedom to their democratic systems, an important step towards the development of human rights.[5][6]) Books like William Bradford's Of Plymouth Plantation (written between 1630 and 1651) were widely read in England.[citation needed] So the form of government in the colonies was well known in the mother country, including to the philosopher John Locke (1632–1704). He deduced from a study of the English constitutional system the advantages of dividing political power into the legislative (which should be distributed among several bodies, for example, the House of Lords and the House of Commons), on the one hand, and the executive and federative power, responsible for the protection of the country and prerogative of the monarch, on the other hand. (The Kingdom of England had no written constitution.)[7][need quotation to verify][8]
14
+
15
+ During the English Civil War, the parliamentarians viewed the English system of government as composed of three branches – the King, the House of Lords and the House of Commons – where the first should have executive powers only, and the latter two legislative powers. A few years later, one of the first documents proposing a tripartite system of separation of powers was the Instrument of Government, written by the English general John Lambert in 1653, and soon adopted as the constitution of England for a few years during the Protectorate. The system comprised a legislative branch (the Parliament) and two executive branches, the English Council of State and the Lord Protector, all being elected (though the Lord Protector was elected for life) and having checks upon each other.[9]
16
+
17
+ A further development in English thought was the idea that the judicial power should be separated from the executive branch. This followed the Crown's use of the judicial system to prosecute opposition leaders following the Restoration, in the late years of Charles II and during the short reign of James II (namely, during the 1680s).[10]
18
+
19
+ The term "tripartite system" is commonly ascribed to French Enlightenment political philosopher Baron de Montesquieu, although he did not use such a term but referred to "distribution" of powers. In The Spirit of the Laws (1748),[11] Montesquieu described the various forms of distribution of political power among a legislature, an executive, and a judiciary. Montesquieu's approach was to present and defend a form of government whose powers were not excessively centralized in a single monarch or similar ruler (a form known then as "aristocracy"). He based this model on the Constitution of the Roman Republic and the British constitutional system. Montesquieu took the view that the Roman Republic had powers separated so that no one could usurp complete power.[12][13][14] In the British constitutional system, Montesquieu discerned a separation of powers among the monarch, Parliament, and the courts of law.[15]
20
+
21
+ In every government there are three sorts of power: the legislative; the executive in respect to things dependent on the law of nations; and the executive in regard to matters that depend on the civil law.
22
+
23
+ By virtue of the first, the prince or magistrate enacts temporary or perpetual laws, and amends or abrogates those that have been already enacted. By the second, he makes peace or war, sends or receives embassies, establishes the public security, and provides against invasions. By the third, he punishes criminals, or determines the disputes that arise between individuals. The latter we shall call the judiciary power, and the other simply the executive power of the state.
24
+
25
+ Montesquieu argued that each power should exercise only its own functions. He was quite explicit here:[16]
26
+
27
+ When the legislative and executive powers are united in the same person, or in the same body of magistrates, there can be no liberty; because apprehensions may arise, lest the same monarch or senate should enact tyrannical laws, to execute them in a tyrannical manner.
28
+
29
+ Again, there is no liberty, if the judiciary power be not separated from the legislative and executive. Were it joined with the legislative, the life and liberty of the subject would be exposed to arbitrary control; for the judge would be then the legislator. Were it joined to the executive power, the judge might behave with violence and oppression.
30
+
31
+ There would be an end of everything, were the same man or the same body, whether of the nobles or of the people, to exercise those three powers, that of enacting laws, that of executing the public resolutions, and of trying the causes of individuals.
32
+
33
+ Separation of powers requires a different source of legitimization, or a different act of legitimization from the same source, for each of the separate powers. If the legislative branch appoints the executive and judicial powers, as Montesquieu indicated, there will be no separation or division of its powers, since the power to appoint carries with it the power to revoke.[17]
34
+
35
+ The executive power ought to be in the hands of a monarch, because this branch of government, having need of despatch, is better administered by one than by many: on the other hand, whatever depends on the legislative power is oftentimes better regulated by many than by a single person.
36
+
37
+ But if there were no monarch, and the executive power should be committed to a certain number of persons selected from the legislative body, there would be an end then of liberty; by reason the two powers would be united, as the same persons would sometimes possess, and would be always able to possess, a share in both.
38
+
39
+ Montesquieu actually specified that the independence of the judiciary has to be real, and not merely apparent.[18] The judiciary was generally seen as the most important of the three powers, independent and unchecked.[19]
40
+
41
+ The principle of checks and balances is that each branch has power to limit or check the other two, which creates a balance between the three separate branches of the state. This principle enables each branch to prevent either of the other branches from becoming supreme, thereby securing political liberty.
42
+
43
+ Immanuel Kant was an advocate of this, noting that "the problem of setting up a state can be solved even by a nation of devils" so long as they possess an appropriate constitution to pit opposing factions against each other.[20]
44
+ Checks and balances are designed to maintain the system of separation of powers, keeping each branch in its place. The idea is that it is not enough to separate the powers and guarantee their independence; the branches also need the constitutional means to defend their own legitimate powers from the encroachments of the other branches.[21] Checks and balances guarantee that the branches are co-equal, that is, balanced, so that they can limit each other and avoid the abuse of power. The origin of checks and balances, like separation of powers itself, is specifically credited to Montesquieu in the Enlightenment (in The Spirit of the Laws, 1748); under this influence it was implemented in 1787 in the Constitution of the United States.
45
+
46
+ The following example of the separation of powers and their mutual checks and balances from the experience of the United States Constitution is presented as illustrative of the general principles applied in similar forms of government as well:[22]
47
+
48
+ But the great security against a gradual concentration of the several powers in the same department, consists in giving to those who administer each department the necessary constitutional means and personal motives to resist encroachments of the others. The provision for defense must in this, as in all other cases, be made commensurate to the danger of attack. Ambition must be made to counteract ambition. The interest of the man must be connected with the constitutional rights of the place. It may be a reflection on human nature, that such devices should be necessary to control the abuses of government. But what is government itself, but the greatest of all reflections on human nature? If men were angels, no government would be necessary. If angels were to govern men, neither external nor internal controls on government would be necessary. In framing a government which is to be administered by men over men, the great difficulty lies in this: you must first enable the government to control the governed; and in the next place oblige it to control itself.
49
+
50
+ A dependence on the people is, no doubt, the primary control on the government; but experience has taught mankind the necessity of auxiliary precautions. This policy of supplying, by opposite and rival interests, the defect of better motives, might be traced through the whole system of human affairs, private as well as public. We see it particularly displayed in all the subordinate distributions of power, where the constant aim is to divide and arrange the several offices in such a manner as that each may be a check on the other, that the private interest of every individual may be a sentinel over the public rights. These inventions of prudence cannot be less requisite in the distribution of the supreme powers of the State.
51
+
52
+ Constitutions with a high degree of separation of powers are found worldwide. A number of Latin American countries have electoral branches of government.
53
+
54
+ The Westminster system is distinguished by a particular entwining of powers,[23] such as in New Zealand and Canada. Canada makes limited use of separation of powers in practice, although in theory it distinguishes between branches of government. New Zealand's constitution is based on the principle of separation of powers through a series of constitutional safeguards, many of which are tacit. The Executive's ability to carry out decisions often depends on the Legislature, which is elected under the mixed member proportional system. This means the government is rarely a single party but a coalition of parties. The Judiciary is also free of government interference. If a series of judicial decisions results in an interpretation of the law which the Executive considers does not reflect the intention of the policy, the Executive can initiate changes to the legislation in question through the Legislature. The Executive cannot direct or request a judicial officer to revise or reconsider a decision; decisions are final. Should there be a dispute between the Executive and Judiciary, the Executive has no authority to direct the Judiciary or its individual members, and vice versa.
55
+
56
+ Complete separation-of-powers systems are almost always presidential, although theoretically this need not be the case. There are a few historical exceptions, such as the Directoire system of revolutionary France. Switzerland offers an example of non-presidential separation of powers today: it is run by a seven-member executive branch, the Federal Council. However, some argue that Switzerland does not have a strong separation-of-powers system, as the Federal Council is appointed by parliament (though not dependent on parliament) and the judiciary has no power of review, even though the judiciary remains separate from the other branches.
57
+
58
+ Australia does not maintain a strict separation between the legislative and executive branches of government (indeed, government ministers are required to be members of parliament), but the federal judiciary strictly guards its independence from the other two branches. However, under influence from the U.S. constitution, the Australian constitution does define the three branches of government separately, which the judiciary has interpreted as establishing an implicit separation of powers.[24] State governments have a similar level of separation of powers, but this is generally on the basis of convention rather than constitution.
59
+
60
+ The Constitution of Austria was originally written by Hans Kelsen, a prominent constitutional scholar in Europe at that time. Kelsen himself served on Austria's constitutional court of review, part of its tripartite government.
61
+
62
+ The Constitution of the Czech Republic, adopted in 1992 immediately before the dissolution of Czechoslovakia, establishes the traditional tripartite division of powers[25] and continues the tradition of its predecessor constitutions. The Czechoslovak Constitution of 1920, which replaced the provisional constitution adopted by the newly independent state in 1918, was modelled after the constitutions of established democracies such as those of the United Kingdom, United States and France, and maintained this division,[26] as have subsequent changes to the constitution that followed in 1948 with the Ninth-of-May Constitution, the 1960 Constitution of Czechoslovakia as well as the Constitutional Act on the Czechoslovak Federation of 1968.
63
+
64
+ According to the Constitution of the Fifth Republic, the government of France[27] is divided into three branches: the executive, the legislature, and the judiciary.
65
+
66
+ Hong Kong is a Special Administrative Region established in 1997 pursuant to the Sino-British Joint Declaration, an international treaty made between Britain and China in 1984, registered with the United Nations. The Hong Kong Basic Law, a national law of China that serves as the de facto constitution, divides the government into Executive, Legislative, and Judicial bodies.[29]
67
+
68
+ However, according to the former Secretary for Security, Regina Ip, also a current member of the Executive Council (ExCo) and Legislative Council of Hong Kong, Hong Kong has never practised separation of powers since the handover of Hong Kong back to China.[30]
69
+
70
+ India is a constitutional democracy that offers a clear separation of powers. The judiciary is independent of the other two branches, with the power to interpret the constitution. Parliament has the legislative powers. Executive powers are vested in the President, who is advised by the Union Council of Ministers headed by the Prime Minister. The constitution of India vested the duty of protecting, preserving and defending the constitution in the President as common head of the executive, parliament, armed forces, etc., not only for the union government but also for the various state governments in a federal structure. All three branches have "checks and balances" over each other to maintain the balance of power and not to exceed the constitutional limits.[31]
71
+
72
+ In Italy the powers are separated, even though the Council of Ministers needs a vote of confidence from both chambers of Parliament, which together comprise a large number of members (almost 1,000).[32]
73
+
74
+ As in every parliamentary form of government, there is no complete separation between Legislature and Executive, but rather a continuum between them owing to the confidence link. The balance between these two branches is protected by the Constitution,[33] as is that between them and the judiciary, which is genuinely independent.
75
+
76
+ A note follows on the status of separation of powers, checks and balances, and the balance of power in Norway today.[34]
77
+
78
+ In the original constitution of 1814 the Montesquieu concept was enshrined, and the people at the time had the same skepticism about political parties as the American founding fathers and the revolutionaries in France. Nor did people really want to get rid of the king and the Council of State (privy council). King and council were a known concept that people had lived with for a long time and were, for the most part, comfortable with. The 1814 constitution came about as a reaction to external events, most notably the Treaty of Kiel (see 1814 in Norway). There was no revolution against the current powers, as had been the case in the U.S. and France.
79
+
80
+ As there was no election of the executive, the king reigned supremely independent in selecting the members of the Council of State, and no formal political parties formed until the 1880s. A conflict between the executive and legislature started developing in the 1870s and climaxed with the legislature impeaching the entire Council of State in 1884 (see Statsrådssaken [Norwegian Wikipedia page]). With this came a switch to a parliamentary system of government. While the full process took decades, it has led to a system of parliamentary sovereignty, in which the Montesquieu idea of separation of powers is technically dead even though the three branches remain important institutions.
81
+
82
+ This does not mean that there are no checks and balances. With the introduction of a parliamentary system, political parties started to form quickly, which led to a call for electoral reform that saw the introduction of party-list proportional representation in 1918. The peculiarities of the Norwegian election system generate six to eight parties and make it extremely difficult for a single party to gain an absolute majority. This has occurred only for a brief period in the aftermath of World War II, when the Labour Party held an absolute majority.
83
+
84
+ A multi-party parliament that must form either a minority executive or a coalition executive functions as a perfectly good system of checks and balances, even if this was never a stated goal of introducing the multiparty system. The multiparty system came about in response to a public outcry over having too few parties and a general feeling of a lack of representation. For this reason, very little on the topic of separation of powers or checks and balances can be found in the works of Norwegian political science today.
85
+
86
+ The development of the British constitution, which is not a codified document, is based on fusion in the person of the Monarch, who has a formal role to play in the legislature (Parliament, which is where legal and political sovereignty lies, is the Crown-in-Parliament, and is summoned and dissolved by the Sovereign, who must give his or her Royal Assent to all Bills so that they become Acts), the executive (the Sovereign appoints all ministers of His/Her Majesty's Government, who govern in the name of the Crown) and the judiciary (the Sovereign, as the fount of justice, appoints all senior judges, and all public prosecutions are brought in his or her name).
87
+
88
+ Although the doctrine of separation of powers plays a role in the United Kingdom's constitutional life, the constitution is often described as having "a weak separation of powers" (A. V. Dicey), despite its being the one to which Montesquieu originally referred. For example, the executive forms a subset of the legislature, as did, to a lesser extent, the judiciary until the establishment of the Supreme Court of the United Kingdom. The Prime Minister, the chief executive, sits as a member of the Parliament of the United Kingdom, either as a peer in the House of Lords or as an elected member of the House of Commons (by convention, and as a result of the supremacy of the Lower House, the Prime Minister now sits in the House of Commons). Furthermore, while the courts in the United Kingdom are amongst the most independent in the world, the Law Lords, who were the final arbiters of most judicial disputes in the U.K., sat simultaneously in the House of Lords, the upper house of the legislature, although this arrangement ceased in 2009 when the Supreme Court of the United Kingdom came into existence. Finally, because of the existence of parliamentary sovereignty, while the theory of separation of powers may be studied there, a system such as that of the U.K. is more accurately described as a "fusion of powers".
89
+
90
+ Until 2005, the Lord Chancellor fused the Legislature, Executive and Judiciary in his person. He was the ex officio Speaker of the House of Lords; a Government Minister who sat in Cabinet and headed the Lord Chancellor's Department, which administered the courts and the justice system and appointed judges; and the head of the Judiciary in England and Wales, sitting as a judge on the Judicial Committee of the House of Lords, the highest domestic court in the entire United Kingdom, and on the Judicial Committee of the Privy Council, the senior tribunal court for parts of the Commonwealth. The Lord Chancellor also held certain other judicial positions, including being a judge in the Court of Appeal and President of the Chancery Division. The Lord Chancellor combined other aspects of the constitution as well, including certain ecclesiastical functions of the established state church, making certain church appointments and nominations, and sitting as one of the thirty-three Church Commissioners. These functions remain intact and unaffected by the Constitutional Reform Act. In 2005, the Constitutional Reform Act separated the powers, with the legislative functions going to an elected Lord Speaker and the judicial functions going to the Lord Chief Justice. The Lord Chancellor's Department was replaced with a Ministry of Justice, and the Lord Chancellor currently serves as Secretary of State for Justice.
91
+
92
+ The judiciary has no power to strike down primary legislation, and can only rule, where necessary, that secondary legislation is invalid with regard to the primary legislation.
93
+
94
+ Under the concept of parliamentary sovereignty, Parliament can enact any primary legislation it chooses. However, the concept immediately becomes problematic when the question is asked, "If parliament can do anything, can it bind its successors?" It is generally held that parliament can do no such thing.
95
+
96
+ Equally, while statute takes precedence over precedent-derived common law and the judiciary has no power to strike down primary legislation, there are certain cases where the supreme judicature has effected an injunction against the application of an act or reliance on its authority by the civil service. The seminal example of this is the Factortame case, where the House of Lords granted such an injunction preventing the operation of the Merchant Shipping Act 1988 until litigation in the European Court of Justice had been resolved.
97
+
98
+ The House of Lords ruling in Factortame (No. 1), approving the European Court of Justice formulation that "a national court which, in a case before it concerning Community law, considers that the sole obstacle which precludes it from granting interim relief is a rule of national law, must disapply that rule", has created an implicit tiering of legislative reviewability; the only way for parliament to prevent the supreme judicature from injunctively striking out a law on the basis of incompatibility with Community law is to pass an act specifically removing that power from the court, or to repeal the European Communities Act 1972.
99
+
100
+ The British legal systems are based on common law traditions, which require:
101
+
102
+ Separation of powers was first established in the United States Constitution, wherein the founding fathers included features of many new concepts, including hard-learned historical lessons about the checks and balances of power. Similar concepts were also prominent in the state governments of the United States. The founding fathers considered that, as colonies of Great Britain, the American states had suffered an abuse of the broad power of parliamentarism and monarchy. As a remedy, the United States Constitution limits the powers of the federal government through various means; in particular, the three branches of the federal government are divided by exercising different functions. The executive and legislative powers are separated in origin by separate elections, and the judiciary is kept independent. Each branch controls the actions of others and balances its powers in some way.
103
+
104
+ In the Constitution, Article I, Section 1 grants Congress only those "legislative powers herein granted" and proceeds to list those permissible actions in Article I, Section 8, while Section 9 lists actions that are prohibited for Congress. The vesting clause in Article II places no limits on the Executive branch, simply stating that "The Executive Power shall be vested in a President of the United States of America."[35] The Supreme Court holds "The judicial Power" according to Article III, and judicial review was established in Marbury v. Madison under the Marshall Court.[36]
105
+
106
+ The presidential system adopted by the Constitution of the United States achieves the balance of powers that constitutional monarchy sought but did not find. The people appoint their representatives to meet periodically in a legislative body, and, since they do not have a king, the people themselves elect a preeminent citizen to perform, also periodically, the executive functions of the State.
107
+
108
+ The direct election of the head of state or of the executive power is an inevitable consequence of the political freedom of the people, understood as the capacity to appoint and depose their leaders. Only this separate election of the person who is to fulfill the functions that the Constitution attributes to the president, so different in nature and in function from the election of representatives by the electors, allows the executive power to be controlled by the legislative and submitted to the demands of political responsibility.[37]
109
+
110
+ Judicial independence is maintained by appointments for life, which remove any dependence on the Executive, with voluntary retirement and a high threshold for dismissal by the Legislature, in addition to a salary that cannot be diminished during their service.
111
+
112
+ The federal government refers to the branches as "branches of government", while some systems use "government" exclusively to describe the executive. The Executive branch has attempted[38] to claim power by arguing that the separation of powers includes being the Commander-in-Chief of a standing army since the American Civil War, executive orders, emergency powers, security classifications since World War II, national security, signing statements, and the scope of the unitary executive.[22]
113
+
114
+ In order to lay a due foundation for that separate and distinct exercise of the different powers of government, which to a certain extent is admitted on all hands to be essential to the preservation of liberty, it is evident that each department should have a will of its own; and consequently should be so constituted that the members of each should have as little agency as possible in the appointment of the members of the others. Were this principle rigorously adhered to, it would require that all the appointments for the supreme executive, legislative, and judiciary magistracies should be drawn from the same fountain of authority, the people, through channels having no communication whatever with one another. Perhaps such a plan of constructing the several departments would be less difficult in practice than it may in contemplation appear. Some difficulties, however, and some additional expense would attend the execution of it. Some deviations, therefore, from the principle must be admitted. In the constitution of the judiciary department in particular, it might be inexpedient to insist rigorously on the principle: first, because peculiar qualifications being essential in the members, the primary consideration ought to be to select that mode of choice which best secures these qualifications; secondly, because the permanent tenure by which the appointments are held in that department, must soon destroy all sense of dependence on the authority conferring them.
115
+
116
+ It is equally evident, that the members of each department should be as little dependent as possible on those of the others, for the emoluments annexed to their offices. Were the executive magistrate, or the judges, not independent of the legislature in this particular, their independence in every other would be merely nominal.
117
+
118
+ Belgium is currently a federated state that has imposed the trias politica on different governmental levels. The constitution of 1831, considered one of the most liberal of its time for limiting the powers of its monarch and imposing a rigorous system of separation of powers, is based on three principles (represented in the Schematic overview of Belgian institutions).
119
+
120
+ Trias politica (horizontal separation of powers):
121
+
122
+ Subsidiarity (vertical separation of powers):
123
+
124
+ Secularism (separation of state and religion):
125
+
126
+ Three Lords:
127
+
128
+ Nine Ministers / Nine Courts, etc.
129
+
130
+ Three Judicial Offices [zh]:
131
+
132
+ According to Sun Yat-sen's idea of "separation of the five powers", the government of the Republic of China has five branches:
133
+
134
+ The president and vice president as well as the defunct National Assembly are constitutionally not part of the above five branches. Before being abolished in 2005, the National Assembly was a standing constituent assembly and electoral college for the president and vice president. Its constitutional amending powers were passed to the legislative yuan and its electoral powers were passed to the electorate.
135
+
136
+ The relationship between the executive and legislative branches is poorly defined. An example of the problems this causes is the near-complete political paralysis that results when the president, who has neither the power to veto nor the ability to dissolve the legislature and call new elections, cannot negotiate with the legislature when his party is in the minority.[39] The examination and control yuans are marginal branches; their leaders, as well as the leaders of the executive and judicial yuans, are appointed by the president and confirmed by the legislative yuan. The legislature is the only branch that chooses its own leadership. The vice president has practically no responsibilities.
137
+
138
+ The central government of the People's Republic of China is divided among several state organs:
139
+
140
+ In the aftermath of the 43-day civil war in 1948 (after former President and incumbent candidate Rafael Ángel Calderón Guardia tried to take power through fraud, by not recognising the results of the presidential election that he had lost), the question of which transformational model the Costa Rican State would follow was the main issue that confronted the victors. A Constituent Assembly was elected by popular vote to draw up a new constitution, which was enacted in 1949 and remains in force. This document was a revision of the constitution of 1871, as the constituent assembly rejected more radical corporatist ideas proposed by the ruling Junta Fundadora de la Segunda República (which, although having come to power by military force, abolished the armed forces). Nonetheless, the new constitution increased the centralization of power at the expense of municipalities, eliminated provincial government altogether, and at the time increased the powers of congress and the judiciary.
141
+
142
+ It established the three supreme powers as the legislative, executive, and judicial branches, but also created two other autonomous state organs that have equivalent power, but not equivalent rank. The first is the Tribunal Supremo de Elecciones de Costa Rica (electoral branch), which controls elections and makes unique, unappealable decisions on their outcomes.
143
+
144
+ The second is the office of the Comptroller General (audit branch), an autonomous and independent organ nominally subordinate to the unicameral legislative assembly. All budgets of ministries and municipalities must pass through this agency, including the execution of budget items such as contracting for routine operations. The Comptroller also provides financial vigilance over government offices and office holders, and routinely brings actions to remove mayors for malfeasance, firmly establishing this organization as the fifth branch of the Republic.
145
+
146
+ The European Union is a supranational polity, and is neither a country nor a federation; but as the EU wields political power it complies with the principle of separation of powers. There are seven institutions of the European Union. In intergovernmental matters, most power is concentrated in the Council of the European Union, giving it the characteristics of a normal international organization; here, all power at the EU level is in one branch. In supranational (Community) matters, by contrast, there are four main actors. The European Commission acts as an independent executive which is appointed by the Council in conjunction with the European Parliament; but the Commission also has a legislative role as the sole initiator of EU legislation.[40][41][42]
147
+ An early maxim was: "The Commission proposes and the Council disposes"; and although the EU's lawmaking procedure is now much more complicated, this simple maxim still holds some truth. As well as both executive and legislative functions, the Commission arguably exercises a third, quasi-judicial, function under Articles 101 & 102 TFEU (competition law), although the ECJ remains the final arbiter. The European Parliament is one half of the legislative branch and is directly elected. The Council itself acts as the second half of the legislative branch and also holds some executive functions (some of which are exercised by the related European Council in practice). The European Court of Justice acts as the independent judicial branch, interpreting EU law and treaties. The remaining institution, the European Court of Auditors, is an independent audit authority (due to the sensitive nature of fraud in the EU).
148
+
149
+ The three branches of the German government are further divided into six main bodies enshrined in the Basic Law for the Federal Republic of Germany:
150
+
151
+ Besides the constitutional court, the judicial branch at the federal level is made up of five supreme courts—one for civil and criminal cases (Bundesgerichtshof), and one each for administrative, tax, labour, and social security issues. There are also state-based (Länder / Bundesländer) courts beneath them, and a rarely used senate of the supreme courts.
152
+
153
+ The four independent branches of power in Hungary (the parliament, the government, the court system, and the office of the public accuser) are divided into six bodies:
154
+
155
+ The independent pillar status of the Hungarian public accuser's office is a unique construction, loosely modelled on the system Portugal introduced after the 1974 victory of the Carnation Revolution. The public accuser (attorney general) body became the fourth pillar of Hungarian democracy only in recent times: after communism fell in 1989, the office was made independent by a new clause (XI) of the Constitution. The change was meant to prevent abuse of state power, especially with regard to the use of false accusations against opposition politicians, who might be excluded from elections if locked in protracted or excessively severe court cases.
156
+
157
+ To prevent the Hungarian accuser's office from neglecting its duties, private individuals can submit investigation requests, called "pótmagánvád", directly to the courts if the accuser's office refuses to act. Courts will decide whether the allegations have merit and will order the police to act in lieu of the accuser's office if warranted. In its decision No. 42/2005, the Hungarian constitutional court declared that the government does not enjoy such a privilege and that the state is powerless to pursue cases further if the public accuser refuses to do so.
158
+
159
+ Notable examples of states after Montesquieu that had more than three powers include:
en/5355.html.txt ADDED
@@ -0,0 +1,20 @@
1
+
2
+
3
+ September is the ninth month of the year in the Julian and Gregorian calendars, the third of four months to have a length of 30 days, and the fourth of five months to have a length of less than 31 days. In the Northern Hemisphere September is the seasonal equivalent of March in the Southern Hemisphere.
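The month-length claim above is easy to check programmatically. What follows is a minimal sketch using Python's standard calendar module (the year passed to monthrange is arbitrary for every month except February, whose length depends on leap years):

    import calendar

    # calendar.monthrange(year, month) returns a pair:
    # (weekday of the month's first day, number of days in the month).
    thirty_day_months = [
        calendar.month_name[m]
        for m in range(1, 13)
        if calendar.monthrange(2021, m)[1] == 30
    ]

    print(thirty_day_months)
    # ['April', 'June', 'September', 'November'] -> September is the third of the four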
4
+
5
+ In the Northern Hemisphere, the beginning of meteorological autumn is on 1 September. In the Southern Hemisphere, the beginning of meteorological spring is on 1 September.[1]
6
+
7
+ September marks the beginning of the ecclesiastical year in the Eastern Orthodox Church. It is the start of the academic year in many countries of the northern hemisphere, in which children go back to school after the summer break, sometimes on the first day of the month.
8
+
9
+ September (from Latin septem, "seven") was originally the seventh of ten months in the oldest known Roman calendar, the calendar of Romulus c. 750 BC, with March (Latin Martius) the first month of the year until perhaps as late as 451 BC.[2] After the calendar reform that added January and February to the beginning of the year, September became the ninth month but retained its name. It had 29 days until the Julian reform, which added a day.
10
+
11
+ Ancient Roman observances for September include Ludi Romani, originally celebrated from September 12 to September 14, later extended to September 5 to September 19. In the 1st century BC, an extra day was added in honor of the deified Julius Caesar on September 4. Epulum Jovis was held on September 13. Ludi Triumphales were held from September 18 to 22. The Septimontium was celebrated in September, and on December 11 on later calendars. These dates do not correspond to the modern Gregorian calendar. In 1752, the British Empire adopted the Gregorian calendar. In the British Empire that year, September 2 was immediately followed by September 14.
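The eleven days dropped in 1752 can be reproduced with a short back-of-the-envelope calculation; this is a sketch, assuming the conventional alignment of the Julian and Gregorian calendars between AD 200 and 300. The Julian calendar gains one day on the Gregorian in every century year whose leap day the Gregorian rules reject (century years not divisible by 400):

    # Century years from 300 to 1752 that are Julian leap years but not
    # Gregorian ones; each contributes one day of drift between the calendars.
    drift = sum(1 for year in range(300, 1753, 100) if year % 400 != 0)
    print(drift)  # 11 -> hence September 2, 1752 was followed by September 14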
12
+
13
+ September was called "harvest month" in Charlemagne's calendar.[3] September corresponds partly to the Fructidor and partly to the Vendémiaire of the first French republic.[3]
14
+ September is called Herbstmonat, harvest month, in Switzerland.[3] The Anglo-Saxons called the month Gerstmonath, barley month, that crop being then usually harvested.[3] On Usenet, it is said that September 1993 (Eternal September) never ended.
15
+
16
+ The September equinox takes place in this month, and certain observances are organized around it. It is the autumnal equinox in the Northern Hemisphere, and the vernal equinox in the Southern Hemisphere. The dates can vary from 21 September to 24 September (in UTC).
17
+
18
+ September is mostly in the sixth month of the astrological calendar (and the first part of the seventh), which begins at the end of March/Mars/Aries.
19
+
20
+ This list does not necessarily imply either official status or general observance.
en/5356.html.txt ADDED
@@ -0,0 +1,3 @@
1
+ 7 is a number, numeral, and glyph.
2
+
3
+ 7 or seven may also refer to:
en/5357.html.txt ADDED
@@ -0,0 +1,31 @@
1
+
2
+
3
+ The Seven Wonders of the World or the Seven Wonders of the Ancient World is a list of remarkable constructions of classical antiquity given by various authors in guidebooks or poems popular among ancient Hellenic tourists. Although the list, in its current form, did not stabilise until the Renaissance, the first such lists of seven wonders date from the 2nd and 1st centuries BC. The original list inspired innumerable versions through the ages, often listing seven entries. Of the original Seven Wonders, only one—the Great Pyramid of Giza, oldest of the ancient wonders—remains relatively intact. The Colossus of Rhodes, the Lighthouse of Alexandria, the Mausoleum at Halicarnassus, the Temple of Artemis and the Statue of Zeus were all destroyed. The location and ultimate fate of the Hanging Gardens are unknown, and there is speculation that they may not have existed at all.
4
+
5
+ The Greek conquest of much of the known western world in the 4th century BC gave Hellenistic travellers access to the civilizations of the Egyptians, Persians, and Babylonians.[1] Impressed and captivated by the landmarks and marvels of the various lands, these travellers began to list what they saw to remember them.[2][3]
6
+
7
+ Instead of "wonders", the ancient Greeks spoke of "theamata" (θεάματα), which means "sights", in other words "things to be seen" (Τὰ ἑπτὰ θεάματα τῆς οἰκουμένης [γῆς] Tà heptà theámata tēs oikoumenēs [gēs]). Later, the word for "wonder" ("thaumata" θαύματα, "wonders") was used.[4] Hence, the list was meant to be the Ancient World's counterpart of a travel guidebook.[1]
8
+
9
+ The first reference to a list of seven such monuments was given by Diodorus Siculus.[5][6] The epigrammist Antipater of Sidon,[7] who lived around or before 100 BC,[8] gave a list of seven such monuments, including six of the present list (substituting the walls of Babylon for the lighthouse):[9]
10
+
11
+ I have gazed on the walls of impregnable Babylon along which chariots may race, and on the Zeus by the banks of the Alpheus, I have seen the hanging gardens, and the Colossus of the Helios, the great man-made mountains of the lofty pyramids, and the gigantic tomb of Mausolus; but when I saw the sacred house of Artemis that towers to the clouds, the others were placed in the shade, for the sun himself has never looked upon its equal outside Olympus.
12
+
13
+ Another 2nd century BC observer, who claimed to be the mathematician Philo of Byzantium,[10] wrote a short account entitled The Seven Sights of the World. However, the incomplete surviving manuscript only covered six of the supposedly seven places, which agreed with Antipater's list.[3]
14
+
15
+ Earlier and later lists by the historian Herodotus (484 BC–ca. 425 BC) and the poet Callimachus of Cyrene (ca. 305–240 BC), housed at the Museum of Alexandria, survived only as references.
16
+
17
+ The Colossus of Rhodes was the last of the seven to be completed, after 280 BC, and the first to be destroyed, by an earthquake in 226/225 BC. Hence, all seven existed at the same time for a period of less than 60 years.
18
+
19
+ The list covered only the sculptural and architectural monuments of the Mediterranean and Middle Eastern regions,[10] which then comprised the known world for the Greeks. Hence, extant sites beyond this realm were not considered as part of contemporary accounts.[1]
20
+
21
+ The primary accounts, coming from Hellenistic writers, also heavily influenced the places included in the wonders list. Five of the seven entries are a celebration of Greek accomplishments in the arts and architecture (the exceptions being the Pyramids of Giza and the Hanging Gardens of Babylon).
22
+
23
+ The seven wonders on Antipater's list won praises for their notable features, ranging from superlatives of the highest or largest of their types, to the artistry with which they were executed. Their architectural and artistic features were imitated throughout the Hellenistic world and beyond.
24
+
25
+ The Greek influence in Roman culture and the revival of Greco-Roman artistic styles during the Renaissance caught the imagination of European artists and travellers.[15] Paintings and sculptures alluding to Antipater's list were made, while adventurers flocked to the actual sites to personally witness the wonders. Legends circulated to further complement the superlatives of the wonders.
26
+
27
+ Of Antipater's wonders, the only one that has survived to the present day is the Great Pyramid of Giza. Its brilliant white stone facing had survived intact until around 1300 AD, when local communities removed most of the stonework for building materials. The existence of the Hanging Gardens has not been proven, although theories abound.[16] Records and archaeology confirm the existence of the other five wonders. The Temple of Artemis and the Statue of Zeus were destroyed by fire, while the Lighthouse of Alexandria, Colossus, and tomb of Mausolus were destroyed by earthquakes. Among the artifacts to have survived are sculptures from the tomb of Mausolus and the Temple of Artemis in the British Museum in London.
28
+
29
+ Still, the listing of seven of the most marvellous architectural and artistic human achievements continued beyond Ancient Greek times to the Roman Empire, the Middle Ages, the Renaissance and the modern age. The Roman poet Martial and the Christian bishop Gregory of Tours had their versions.[1] Reflecting the rise of Christianity, and the toll that time, nature and the hand of man had taken on Antipater's seven wonders, Roman and Christian sites began to figure on the list, including the Colosseum, Noah's Ark and Solomon's Temple.[1][3] In the 6th century, a list of seven wonders was compiled by St. Gregory of Tours: the list[17] included the Temple of Solomon, the Pharos of Alexandria and Noah's Ark.
30
+
31
+ Modern historians, working on the premise that the original Seven Ancient Wonders List was limited in its geographic scope, also had their versions to encompass sites beyond the Hellenistic realm—from the Seven Wonders of the Ancient World to the Seven Wonders of the World. Indeed, the "seven wonders" label has spawned innumerable versions among international organizations, publications and individuals based on different themes—works of nature, engineering masterpieces, constructions of the Middle Ages, etc. Its purpose has also changed from just a simple travel guidebook or a compendium of curious places, to lists of sites to defend or to preserve.
en/536.html.txt ADDED
@@ -0,0 +1,198 @@
1
+
2
+
3
+ The Baltic Sea is a mediterranean sea of the Atlantic Ocean, enclosed by Denmark, Estonia, Finland, Latvia, Lithuania, Sweden, northeast Germany, Poland, Russia and the North and Central European Plain.
4
+
5
+ The sea stretches from 53°N to 66°N latitude and from 10°E to 30°E longitude. A marginal sea of the Atlantic, with limited water exchange between the two water bodies, the Baltic Sea drains through the Danish Straits into the Kattegat by way of the Øresund, Great Belt and Little Belt. It includes the Gulf of Bothnia, the Bay of Bothnia, the Gulf of Finland, the Gulf of Riga and the Bay of Gdańsk.
6
+
7
+ The Baltic Proper is bordered on its northern edge, at the latitude 60°N, by the Åland islands and the Gulf of Bothnia, on its northeastern edge by the Gulf of Finland, on its eastern edge by the Gulf of Riga, and in the west by the Swedish part of the southern Scandinavian Peninsula.
8
+
9
+ The Baltic Sea is connected by artificial waterways to the White Sea via the White Sea–Baltic Canal and to the German Bight of the North Sea via the Kiel Canal.
10
+
11
+ Administration
12
+
13
+ The Helsinki Convention on the Protection of the Marine Environment of the Baltic Sea Area includes the Baltic Sea and the Kattegat, without calling Kattegat a part of the Baltic Sea, "For the purposes of this Convention the 'Baltic Sea Area' shall be the Baltic Sea and the Entrance to the Baltic Sea, bounded by the parallel of the Skaw in the Skagerrak at 57°44.43'N."[3]
14
+
15
+ Traffic history
16
+
17
+ Historically, the Kingdom of Denmark collected Sound Dues from ships at the border between the ocean and the land-locked Baltic Sea, in tandem: in the Øresund at Kronborg castle near Helsingør; in the Great Belt at Nyborg; and in the Little Belt at its narrowest part, later at Fredericia after that stronghold was built. The narrowest part of the Little Belt is the "Middelfart Sund" near Middelfart.[4]
18
+
19
+ Oceanography
20
+
21
+ Geographers widely agree that the preferred physical border of the Baltic is a line drawn through the southern Danish islands, the Drogden Sill and Langeland.[5] The Drogden Sill is situated north of Køge Bugt and connects Dragør, in the south of Copenhagen, to Malmö; it is crossed by the Øresund Bridge link, including the Drogden Tunnel. By this definition, the Danish Straits are part of the entrance, but the Bay of Mecklenburg and the Bay of Kiel are parts of the Baltic Sea.
22
+ Another common border is the line between Falsterbo, Sweden, and Stevns Klint, Denmark, as this is the southern border of the Øresund. It is also the border between the shallow southern Øresund (with a typical depth of only 5–10 meters) and the notably deeper water beyond.
23
+
24
+ Hydrography and biology
25
+
26
+ The Drogden Sill (depth of 7 m (23 ft)) sets a limit to the Øresund, and the Darss Sill (depth of 18 m (59 ft)) sets a limit to the Belt Sea.[6] The shallow sills are obstacles to the flow of heavy salt water from the Kattegat into the basins around Bornholm and Gotland.
27
+
28
+ The Kattegat and the southwestern Baltic Sea are well oxygenated and have a rich biology. The remainder of the sea is brackish, poor in oxygen and in species. Thus, statistically, the more of the entrance that is included in its definition, the healthier the Baltic appears; conversely, the more narrowly it is defined, the more endangered its biology appears.
29
+
30
+ Tacitus called it Mare Suebicum after the Germanic people of the Suebi,[7] and Ptolemy Sarmatian Ocean after the Sarmatians,[8] but the first to name it the Baltic Sea (Mare Balticum) was the eleventh-century German chronicler Adam of Bremen. The origin of the latter name is speculative and it was adopted into Slavic and Finnic languages spoken around the sea, very likely due to the role of Medieval Latin in cartography. It might be connected to the Germanic word belt, a name used for two of the Danish straits, the Belts, while others claim it to be directly derived from the source of the Germanic word, Latin balteus "belt".[9] Adam of Bremen himself compared the sea with a belt, stating that it is so named because it stretches through the land as a belt (Balticus, eo quod in modum baltei longo tractu per Scithicas regiones tendatur usque in Greciam).
31
+
32
+ He might also have been influenced by the name of a legendary island mentioned in the Natural History of Pliny the Elder. Pliny mentions an island named Baltia (or Balcia) with reference to accounts of Pytheas and Xenophon. It is possible that Pliny refers to an island named Basilia ("the royal") in On the Ocean by Pytheas. Baltia also might be derived from belt and mean "near belt of sea, strait".
33
+
34
+ Meanwhile, others have suggested that the name of the island originates from the Proto-Indo-European root *bhel meaning "white, fair".[10] This root and its basic meaning were retained in Lithuanian (as baltas), Latvian (as balts) and Slavic (as bely). On this basis, a related hypothesis holds that the name originated from this Indo-European root via a Baltic language such as Lithuanian.[11] Another explanation is that, while derived from the aforementioned root, the name of the sea is related to names for various forms of water and related substances in several European languages, that might have been originally associated with colors found in swamps (compare Proto-Slavic *bolto "swamp"). Yet another explanation is that the name originally meant "enclosed sea, bay" as opposed to open sea.[12] Some Swedish historians believe the name derives from the god Baldr of Nordic mythology.
35
+
36
+ In the Middle Ages the sea was known by a variety of names. The name Baltic Sea became dominant only after 1600. Usage of Baltic and similar terms to denote the region east of the sea started only in the 19th century.
37
+
38
+ The Baltic Sea was known in ancient Latin-language sources as Mare Suebicum or even Mare Germanicum.[13] Older native names in languages that used to be spoken on the shores of the sea or near it usually indicate the geographical location of the sea (in Germanic languages), or its size in relation to smaller gulfs (in Old Latvian), or tribes associated with it (in Old Russian the sea was known as the Varanghian Sea). In modern languages it is known by equivalents of "East Sea", "West Sea", or "Baltic Sea".
39
+
40
+ At the time of the Roman Empire, the Baltic Sea was known as the Mare Suebicum or Mare Sarmaticum. Tacitus, in his AD 98 works Agricola and Germania, described the Mare Suebicum, named for the Suebi tribe, as a brackish sea where, during the spring months, the ice broke apart and chunks floated about. The Suebi eventually migrated southwest to temporarily reside in the Rhineland area of modern Germany, where their name survives in the historic region known as Swabia. Jordanes called it the Germanic Sea in his work, the Getica.
41
+
42
+ In the early Middle Ages, Norse (Scandinavian) merchants built a trade empire all around the Baltic. Later, the Norse fought for control of the Baltic against Wendish tribes dwelling on the southern shore. The Norse also used the rivers of Russia for trade routes, finding their way eventually to the Black Sea and southern Russia. This Norse-dominated period is referred to as the Viking Age.
43
+
44
+ Since the Viking Age, the Scandinavians have referred to the Baltic Sea as Austmarr ("Eastern Lake"); "Eastern Sea" appears in the Heimskringla, and Eystra salt appears in Sörla þáttr. Saxo Grammaticus recorded in Gesta Danorum an older name, Gandvik, -vik being Old Norse for "bay", which implies that the Vikings correctly regarded it as an inlet of the sea. Another form of the name, "Grandvik", attested in at least one English translation of Gesta Danorum, is likely to be a misspelling.
45
+
46
+ In addition to fish, the sea also provides amber, especially from its southern shores within today's borders of Poland, Russia and Lithuania. The first mentions of amber deposits on the south coast of the Baltic Sea date back to the 12th century.[14] The bordering countries have also traditionally exported lumber, wood tar, flax, hemp and furs by ship across the Baltic. From early medieval times, Sweden exported the iron and silver mined there, while Poland had, and still has, extensive salt mines. Thus the Baltic Sea has long been crossed by much merchant shipping.
47
+
48
+ The lands on the Baltic's eastern shore were among the last in Europe to be converted to Christianity. This finally happened during the Northern Crusades: Finland in the twelfth century by Swedes, and what are now Estonia and Latvia in the early thirteenth century by Danes and Germans (Livonian Brothers of the Sword). The Teutonic Order gained control over parts of the southern and eastern shore of the Baltic Sea, where they set up their monastic state. Lithuania was the last European state to convert to Christianity.
49
+
50
+ In the period between the 8th and 14th centuries, there was much piracy in the Baltic from the coasts of Pomerania and Prussia, and the Victual Brothers held Gotland.
51
+
52
+ Starting in the 11th century, the southern and eastern shores of the Baltic were settled by migrants mainly from Germany, a movement called the Ostsiedlung ("east settling"). Other settlers came from the Netherlands, Denmark, and Scotland. The Polabian Slavs were gradually assimilated by the Germans.[15] Denmark gradually gained control over most of the Baltic coast, until it lost much of its possessions after being defeated in the 1227 Battle of Bornhöved.
53
+
54
+ In the 13th to 16th centuries, the strongest economic force in Northern Europe was the Hanseatic League, a federation of merchant cities around the Baltic Sea and the North Sea. In the sixteenth and early seventeenth centuries, Poland, Denmark, and Sweden fought wars for Dominium maris baltici ("Lordship over the Baltic Sea"). Eventually, Sweden's realm came virtually to encircle the Baltic Sea. In Sweden the sea was then referred to as Mare Nostrum Balticum ("Our Baltic Sea"). The goal of Swedish warfare during the 17th century was to make the Baltic Sea an all-Swedish sea (Ett Svenskt innanhav), something that was accomplished except for the stretch between Riga in Latvia and Stettin in Pomerania. However, the Dutch dominated Baltic trade in the seventeenth century.
55
+
56
+ In the eighteenth century, Russia and Prussia became the leading powers over the sea. Sweden's defeat in the Great Northern War brought Russia to the eastern coast. Russia became and remained a dominating power in the Baltic. Russia's Peter the Great saw the strategic importance of the Baltic and decided to found his new capital, Saint Petersburg, at the mouth of the Neva river at the east end of the Gulf of Finland. There was much trading not just within the Baltic region but also with the North Sea region, especially eastern England and the Netherlands: their fleets needed the Baltic timber, tar, flax and hemp.
57
+
58
+ During the Crimean War, a joint British and French fleet attacked the Russian fortresses in the Baltic. They bombarded Sveaborg, which guards Helsinki; and Kronstadt, which guards Saint Petersburg; and they destroyed Bomarsund in the Åland Islands. After the unification of Germany in 1871, the whole southern coast became German. World War I was partly fought in the Baltic Sea. After 1920 Poland was connected to the Baltic Sea by the Polish Corridor and enlarged the port of Gdynia in rivalry with the port of the Free City of Danzig.
59
+
60
+ During World War II, Germany reclaimed all of the southern and much of the eastern shore by occupying Poland and the Baltic states. In 1945, the Baltic Sea became a mass grave for retreating soldiers and refugees on torpedoed troop transports. The sinking of the Wilhelm Gustloff remains the worst maritime disaster in history, killing (very roughly) 9,000 people. In 2005, a Russian group of scientists found over five thousand airplane wrecks, sunken warships, and other material, mainly from World War II, on the bottom of the sea.
61
+
62
+ Since the end of World War II, various nations, including the Soviet Union, the United Kingdom and the United States, have disposed of chemical weapons in the Baltic Sea, raising concerns of environmental contamination.[16] Today, fishermen occasionally find some of these materials: the most recent available report from the Helsinki Commission notes that four small-scale catches of chemical munitions representing approximately 105 kg (231 lb) of material were reported in 2005. This is a reduction from the 25 incidents representing 1,110 kg (2,450 lb) of material in 2003.[17] To date, the U.S. Government has refused to disclose the exact coordinates of the wreck sites. Deteriorating bottles leak mustard gas and other substances, slowly poisoning a substantial part of the Baltic Sea.
63
+
64
+ After 1945, the German population was expelled from all areas east of the Oder-Neisse line, making room for displaced Poles and Russians. Poland gained most of the southern shore. The Soviet Union gained another access to the Baltic with the Kaliningrad Oblast. The Baltic states on the eastern shore were annexed by the Soviet Union. The Baltic then separated opposing military blocs: NATO and the Warsaw Pact. Had war broken out, the Polish navy was prepared to invade the Danish isles. Neutral Sweden developed incident weapons to defend its territorial waters after the Swedish submarine incidents.[18] This border status restricted trade and travel. It ended only after the collapse of the Communist regimes in Central and Eastern Europe in the late 1980s.
65
+
66
+ Since May 2004, with the accession of the Baltic states and Poland, the Baltic Sea has been almost entirely surrounded by countries of the European Union (EU). The remaining non-EU shore areas are Russian: the Saint Petersburg area and the Kaliningrad Oblast exclave.
67
+
68
+ Winter storms begin arriving in the region during October. These have caused numerous shipwrecks, and contributed to the extreme difficulties of rescuing passengers of the ferry M/S Estonia en route from Tallinn, Estonia, to Stockholm, Sweden, in September 1994, which claimed the lives of 852 people. Older, wood-based shipwrecks such as the Vasa tend to remain well-preserved, as the Baltic's cold and brackish water does not suit the shipworm.
69
+
70
+ Storm surge flooding is generally taken to occur when the water level is more than one metre above normal. In Warnemünde about 110 floods occurred from 1950 to 2000, an average of just over two per year.[19]
71
+
72
+ Historic flood events were the All Saints' Flood of 1304 and other floods in the years 1320, 1449, 1625, 1694, 1784 and 1825. Little is known of their extent.[20] From 1872, there exist regular and reliable records of water levels in the Baltic Sea. The highest was the flood of 1872, when the water reached an average of 2.43 m (8 ft 0 in) and a maximum of 2.83 m (9 ft 3 in) above sea level at Warnemünde. In the most recent very heavy floods the average water levels reached 1.88 m (6 ft 2 in) above sea level in 1904, 1.89 m (6 ft 2 in) in 1913, 1.73 m (5 ft 8 in) in January 1954, 1.68 m (5 ft 6 in) on 2–4 November 1995 and 1.65 m (5 ft 5 in) on 21 February 2002.[21]
73
+
74
+ An arm of the North Atlantic Ocean, the Baltic Sea is enclosed by Sweden and Denmark to the west, Finland to the northeast, the Baltic countries to the southeast, and the North European Plain to the southwest.
75
+
76
+ It is about 1,600 km (990 mi) long, an average of 193 km (120 mi) wide, and an average of 55 metres (180 ft) deep. The maximum depth is 459 m (1,506 ft), on the Swedish side of the sea's center. The surface area is about 349,644 km2 (134,998 sq mi)[22] and the volume is about 20,000 km3 (4,800 cu mi). The periphery amounts to about 8,000 km (5,000 mi) of coastline.[23]
77
+
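+ As a quick plausibility check (ours, not the source's), the quoted surface area and average depth should roughly reproduce the quoted volume, since volume ≈ area × mean depth. A minimal Python sketch, with variable names of our own choosing:
+
+ # Rough consistency check of the figures quoted above (illustrative only).
+ area_km2 = 349_644             # quoted surface area
+ mean_depth_km = 55 / 1000      # quoted average depth, metres converted to km
+ print(round(area_km2 * mean_depth_km))  # 19230, close to the quoted ~20,000 km3
+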
78
+ The Baltic Sea is one of the largest brackish inland seas by area, and occupies a basin (a zungenbecken) formed by glacial erosion during the last few ice ages.
79
+
80
+ Physical characteristics of the Baltic Sea, its main sub-regions, and the transition zone to the Skagerrak/North Sea area[24]
81
+
82
+ The International Hydrographic Organization defines the limits of the Baltic Sea as follows:[25]
83
+
84
+ The northern part of the Baltic Sea is known as the Gulf of Bothnia, of which the northernmost part is the Bay of Bothnia or Bothnian Bay. The more rounded southern basin of the gulf is called Bothnian Sea and immediately to the south of it lies the Sea of Åland. The Gulf of Finland connects the Baltic Sea with Saint Petersburg. The Gulf of Riga lies between the Latvian capital city of Riga and the Estonian island of Saaremaa.
85
+
86
+ The Northern Baltic Sea lies between the Stockholm area, southwestern Finland and Estonia. The Western and Eastern Gotland basins form the major parts of the Central Baltic Sea or Baltic proper. The Bornholm Basin is the area east of Bornholm, and the shallower Arkona Basin extends from Bornholm to the Danish isles of Falster and Zealand.
87
+
88
+ In the south, the Bay of Gdańsk lies east of the Hel Peninsula on the Polish coast and west of the Sambia Peninsula in Kaliningrad Oblast. The Bay of Pomerania lies north of the islands of Usedom and Wolin, east of Rügen. Between Falster and the German coast lie the Bay of Mecklenburg and Bay of Lübeck. The westernmost part of the Baltic Sea is the Bay of Kiel. The three Danish straits, the Great Belt, the Little Belt and The Sound (Öresund/Øresund), connect the Baltic Sea with the Kattegat and Skagerrak strait in the North Sea.
89
+
90
+ The water temperature of the Baltic Sea varies significantly depending on exact location, season and depth. At the Bornholm Basin, which is located directly east of the island of the same name, the surface temperature typically falls to 0–5 °C (32–41 °F) during the peak of the winter and rises to 15–20 °C (59–68 °F) during the peak of the summer, with an annual average of around 9–10 °C (48–50 °F).[27] A similar pattern can be seen in the Gotland Basin, which is located between the island of Gotland and Latvia. In the deep of these basins the temperature variations are smaller. At the bottom of the Bornholm Basin, deeper than 80 m (260 ft), the temperature typically is 1–10 °C (34–50 °F), and at the bottom of the Gotland Basin, at depths greater than 225 m (738 ft), the temperature typically is 4–7 °C (39–45 °F).[27]
91
+
92
+ On the long-term average, at its annual maximum the ice covers about 45% of the surface area of the Baltic Sea. The ice-covered area during such a typical winter includes the Gulf of Bothnia, the Gulf of Finland, the Gulf of Riga, the archipelago west of Estonia, the Stockholm archipelago, and the Archipelago Sea southwest of Finland. The remainder of the Baltic does not freeze during a normal winter, except for sheltered bays and shallow lagoons such as the Curonian Lagoon. The ice reaches its maximum extent in February or March; typical ice thickness in the northernmost areas, in the Bothnian Bay, the northern basin of the Gulf of Bothnia, is about 70 cm (28 in) for landfast sea ice. The thickness decreases farther south.
93
+
94
+ Freezing begins in the northern extremities of the Gulf of Bothnia typically in the middle of November, reaching the open waters of the Bothnian Bay in early January. The Bothnian Sea, the basin south of Kvarken, freezes on average in late February. The Gulf of Finland and the Gulf of Riga freeze typically in late January. In 2011, the Gulf of Finland was completely frozen on 15 February.[28]
95
+
96
+ The ice extent depends on whether the winter is mild, moderate, or severe. In severe winters ice can form around southern Sweden and even in the Danish straits. According to the 18th-century natural historian William Derham, during the severe winters of 1703 and 1708 the ice cover reached as far as the Danish straits,[29] a description implying that the whole of the Baltic Sea was covered with ice. Frequently, parts of the Gulf of Bothnia and the Gulf of Finland are frozen, in addition to coastal fringes in more southerly locations such as the Gulf of Riga.
97
+
98
+ Since 1720, the Baltic Sea has frozen over entirely 20 times, most recently in early 1987, the most severe winter in Scandinavia since 1720. The ice then covered 400,000 km2 (150,000 sq mi). During the winter of 2010–11, which was quite severe compared to those of the last decades, the maximum ice cover was 315,000 km2 (122,000 sq mi), reached on 25 February 2011. The ice then extended from the north down to the northern tip of Gotland, with small ice-free areas on either side, and the east coast of the Baltic Sea was covered by an ice sheet about 25 to 100 km (16 to 62 mi) wide all the way to Gdańsk. This was brought about by a stagnant high-pressure area that lingered over central and northern Scandinavia from around 10 to 24 February. After this, strong southern winds pushed the ice further north, leaving much of the waters north of Gotland again free of ice; the ice had by then packed against the shores of southern Finland.[30] The effects of the aforementioned high-pressure area did not reach the southern parts of the Baltic Sea, and thus the entire sea did not freeze over. However, floating ice was additionally observed near Świnoujście harbour in January 2010.
99
+
100
+ In recent years before 2011, the Bothnian Bay and the Bothnian Sea were frozen with solid ice near the Baltic coast and dense floating ice far from it. In 2008, almost no ice formed except for a short period in March.[31]
101
+
102
+ During winter, fast ice, which is attached to the shoreline, develops first, rendering ports unusable without the services of icebreakers. Level ice, ice sludge, pancake ice, and rafter ice form in the more open regions. The gleaming expanse of ice is similar to the Arctic, with wind-driven pack ice and ridges up to 15 m (49 ft). Offshore of the landfast ice, the ice remains very dynamic all year, and it is relatively easily moved around by winds and therefore forms pack ice, made up of large piles and ridges pushed against the landfast ice and shores.
103
+
104
+ In spring, the Gulf of Finland and the Gulf of Bothnia normally thaw in late April, with some ice ridges persisting until May in the eastern extremities of the Gulf of Finland. In the northernmost reaches of the Bothnian Bay, ice usually stays until late May; by early June it is practically always gone. However, in the famine year of 1867 remnants of ice were observed as late as 17 July near Uddskär.[32] Even as far south as Øresund, remnants of ice have been observed in May on several occasions; near Taarbaek on 15 May 1942 and near Copenhagen on 11 May 1771. Drift ice was also observed on 11 May 1799.[33][34][35]
105
+
106
+ The ice cover is the main habitat for two large mammals, the grey seal (Halichoerus grypus) and the Baltic ringed seal (Pusa hispida botnica), both of which feed underneath the ice and breed on its surface. Of these two seals, only the Baltic ringed seal suffers when there is not adequate ice in the Baltic Sea, as it feeds its young only while on ice. The grey seal is adapted to reproducing even with no ice in the sea. The sea ice also harbours several species of algae that live on the underside of the ice and inside its unfrozen brine pockets.
107
+
108
+ The Baltic Sea flows out through the Danish straits; however, the flow is complex. A surface layer of brackish water discharges 940 km3 (230 cu mi) per year into the North Sea. Due to the difference in salinity (and hence density), a sub-surface layer of more saline water moving in the opposite direction brings in 475 km3 (114 cu mi) per year. It mixes very slowly with the upper waters, resulting in a salinity gradient from top to bottom, with most of the salt water remaining below 40 to 70 m (130 to 230 ft) deep. The general circulation is anti-clockwise: northwards along its eastern boundary, and south along the western one.[36]
109
+
110
+ The difference between the outflow and the inflow comes entirely from fresh water. More than 250 streams drain a basin of about 1,600,000 km2 (620,000 sq mi), contributing a volume of 660 km3 (160 cu mi) per year to the Baltic. They include the major rivers of north Europe, such as the Oder, the Vistula, the Neman, the Daugava and the Neva. Additional fresh water comes from precipitation less evaporation, which is positive.
111
+
112
+ An important source of salty water is the infrequent inflow of North Sea water into the Baltic. Such inflows, important to the Baltic ecosystem because of the oxygen they transport into the Baltic deeps, used to happen regularly until the 1980s. In recent decades they have become less frequent. The latest four occurred in 1983, 1993, 2003 and 2014, suggesting a new inter-inflow period of about ten years.
113
+
114
+ The water level is generally far more dependent on the regional wind situation than on tidal effects. However, tidal currents occur in narrow passages in the western parts of the Baltic Sea.
115
+
116
+ The significant wave height is generally much lower than that of the North Sea. Quite violent, sudden storms sweep the surface ten or more times a year, due to large transient temperature differences and a long reach of wind. Seasonal winds also cause small changes in sea level, of the order of 0.5 m (1 ft 8 in).[36]
117
+
118
+ The Baltic Sea is the world's largest inland brackish sea.[37] Only two other brackish waters are larger on some measurements: the Black Sea is larger in both surface area and water volume, but most of it is located outside the continental shelf (only a small percentage is inland); the Caspian Sea is larger in water volume, but, despite its name, it is a lake rather than a sea.[37]
119
+
120
+ The Baltic Sea's salinity is much lower than that of ocean water (which averages 3.5%), as a result of abundant freshwater runoff from the surrounding land (rivers, streams and the like), combined with the shallowness of the sea itself; runoff contributes roughly one-fortieth of its total volume per year, as the volume of the basin is about 21,000 km3 (5,000 cu mi) and yearly runoff is about 500 km3 (120 cu mi).[citation needed]
121
+
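+ The "one-fortieth" figure follows directly from the two quoted volumes; a quick arithmetic check (ours, not the source's) in Python:
+
+ # Fraction of the Baltic's volume contributed by runoff each year (illustrative only).
+ basin_volume_km3 = 21_000    # quoted basin volume
+ annual_runoff_km3 = 500      # quoted yearly runoff
+ print(basin_volume_km3 / annual_runoff_km3)  # 42.0, i.e. roughly one-fortieth per year
+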
122
+ The open surface waters of the Baltic Sea "proper" generally have a salinity of 0.3 to 0.9%, which is borderline freshwater. The flow of fresh water into the sea from approximately two hundred rivers and the introduction of salt from the southwest build up a gradient of salinity in the Baltic Sea. The highest surface salinities, generally 0.7–0.9%, are found in the southwesternmost part of the Baltic, in the Arkona and Bornholm basins (the former located roughly between southeast Zealand and Bornholm, and the latter directly east of Bornholm). Salinity gradually falls further east and north, reaching its lowest, around 0.3%, in the Bothnian Bay.[38] Drinking the surface water of the Baltic as a means of survival would actually hydrate the body instead of dehydrating it, as is the case with ocean water.[note 1][citation needed]
123
+
124
+ As salt water is denser than fresh water, the bottom of the Baltic Sea is saltier than the surface. This creates a vertical stratification of the water column, a halocline, that represents a barrier to the exchange of oxygen and nutrients, and fosters completely separate maritime environments.[39] The difference between the bottom and surface salinities varies by location. Overall it follows the same southwest-to-east-and-north pattern as the surface. At the bottom of the Arkona Basin (at depths greater than 40 m or 130 ft) and of the Bornholm Basin (at depths greater than 80 m or 260 ft) it is typically 1.4–1.8%. Further east and north the salinity at the bottom is consistently lower, being lowest in the Bothnian Bay (at depths greater than 120 m or 390 ft), where it is slightly below 0.4%, only marginally higher than at the surface in the same region.[38]
125
+
126
+ In contrast, the salinity of the Danish straits, which connect the Baltic Sea and the Kattegat, tends to be significantly higher, but with major variations from year to year. For example, the surface and bottom salinity in the Great Belt is typically around 2.0% and 2.8% respectively, only somewhat below that of the Kattegat.[38] The water surplus caused by the continuous inflow of rivers and streams to the Baltic Sea means that there generally is a flow of brackish water out through the Danish straits to the Kattegat (and eventually the Atlantic).[40] Significant flows in the opposite direction, of salt water from the Kattegat through the Danish straits into the Baltic Sea, are less regular. From 1880 to 1980 such inflows occurred on average six to seven times per decade. Since 1980 they have been much less frequent, although a very large inflow occurred in 2014.[27]
127
+
128
+ The ranking of rivers by mean discharge differs from the ranking by hydrological length (from the most distant source to the sea) and from the ranking by nominal length. The Göta älv, a tributary of the Kattegat, is not listed, because, due to the northward low-salinity surface flow in the sea, its water hardly reaches the Baltic proper.
129
+
130
+ Countries that border the sea:
131
+
132
+ Denmark, Estonia, Finland, Germany, Latvia, Lithuania, Poland, Russia, Sweden.
133
+
134
+ Countries with land in the outer drainage basin:
135
+
136
+ Belarus, Czech Republic, Norway, Slovakia, Ukraine.
137
+
138
+ The Baltic Sea drainage basin is roughly four times the surface area of the sea itself. About 48% of the region is forested, with Sweden and Finland containing the majority of the forest, especially around the Gulfs of Bothnia and Finland.
139
+
140
+ About 20% of the land is used for agriculture and pasture, mainly in Poland and around the edge of the Baltic Proper, in Germany, Denmark and Sweden. About 17% of the basin is unused open land, and another 8% is wetland. Most of the wetlands lie around the Gulfs of Bothnia and Finland.
141
+
142
+ The rest of the land is heavily populated. About 85 million people live in the Baltic drainage basin, 15 million within 10 km (6 mi) of the coast and 29 million within 50 km (31 mi) of the coast. Around 22 million live in population centers of over 250,000. 90% of these are concentrated in the 10 km (6 mi) band around the coast. Of the nations containing all or part of the basin, Poland includes 45% of the 85 million, Russia 12%, Sweden 10% and the others less than 6% each.[41]
143
+
144
+ The biggest coastal cities (by population):
145
+
146
+ Other important ports:
147
+
148
+ The Baltic Sea somewhat resembles a riverbed, with two tributaries, the Gulf of Finland and the Gulf of Bothnia. Geological surveys show that before the Pleistocene, instead of the Baltic Sea there was a wide plain around a great river that paleontologists call the Eridanos. Several Pleistocene glacial episodes scooped out the river bed into the sea basin. By the time of the last, or Eemian Stage (MIS 5e), the Eemian Sea was in place. Even today, rather than a true sea, the Baltic can be understood as the common estuary of all the rivers flowing into it.
149
+
150
+ From that time the waters underwent a geologic history summarized under the names listed below. Many of the stages are named after marine animals (e.g. the Littorina mollusk) that are clear markers of changing water temperatures and salinity.
151
+
152
+ The factors that determined the sea's characteristics were the submergence or emergence of the region due to the weight of ice and subsequent isostatic readjustment, and the connecting channels it found to the North Sea-Atlantic, either through the straits of Denmark or at what are now the large lakes of Sweden, and the White Sea-Arctic Sea.
153
+
154
+ The land is still emerging isostatically from its depressed state, which was caused by the weight of ice during the last glaciation. The phenomenon is known as post-glacial rebound. Consequently, the surface area and the depth of the sea are diminishing. The uplift is about eight millimetres per year on the Finnish coast of the northernmost Gulf of Bothnia. In the area, the former seabed is only gently sloping, leading to large areas of land being reclaimed in what are, geologically speaking, relatively short periods (decades and centuries).
155
+
156
+ The "Baltic Sea anomaly" refers to interpretations of an indistinct sonar image taken by Swedish salvage divers on the floor of the northern Baltic Sea in June 2011. The treasure hunters suggested the image showed an object with unusual features of seemingly extraordinary origin. Speculation published in tabloid newspapers claimed that the object was a sunken UFO. A consensus of experts and scientists say that the image most likely shows a natural geological formation.[43][44][45][46][47]
157
+
158
+ The fauna of the Baltic Sea is a mixture of marine and freshwater species. Among marine fishes are Atlantic cod, Atlantic herring, European hake, European plaice, European flounder, shorthorn sculpin and turbot, and examples of freshwater species include European perch, northern pike, whitefish and common roach. Freshwater species may occur at outflows of rivers or streams in all coastal sections of the Baltic Sea. Otherwise marine species dominate in most sections of the Baltic, at least as far north as Gävle, where less than one-tenth are freshwater species. Further north the pattern is inverted. In the Bothnian Bay, roughly two-thirds of the species are freshwater. In the far north of this bay, saltwater species are almost entirely absent.[27] For example, the common starfish and shore crab, two species that are very widespread along European coasts, are both unable to cope with the significantly lower salinity. Their range limit is west of Bornholm, meaning that they are absent from the vast majority of the Baltic Sea.[27] Some marine species, like the Atlantic cod and European flounder, can survive at relatively low salinities, but need higher salinities to breed, which therefore occurs in deeper parts of the Baltic Sea.[48][49]
159
+
160
+ There is a decrease in species richness from the Danish belts to the Gulf of Bothnia. The decreasing salinity along this path causes restrictions in both physiology and habitats.[50] With more than 600 species of invertebrates, fish, aquatic mammals, aquatic birds and macrophytes, the Arkona Basin (roughly between southeast Zealand and Bornholm) is far richer than the other, more eastern and northern basins of the Baltic Sea, which all have fewer than 400 species from these groups, with the exception of the Gulf of Finland, which has more than 750 species. However, even the most diverse sections of the Baltic Sea have far fewer species than the almost fully saline Kattegat, which is home to more than 1600 species from these groups.[27] The lack of tides has affected the marine species as compared with the Atlantic.
161
+
162
+ Since the Baltic Sea is so young, there are only two or three known endemic species: the brown alga Fucus radicans and the flounder Platichthys solemdali. Both appear to have evolved in the Baltic basin and were only recognized as species in 2005 and 2018 respectively, having formerly been confused with more widespread relatives.[49][51] The tiny Copenhagen cockle (Parvicardium hauniense), a rare mussel, is sometimes considered endemic, but has now been recorded in the Mediterranean.[52] However, some consider non-Baltic records to be misidentifications of juvenile lagoon cockles (Cerastoderma glaucum).[53] Several widespread marine species have distinctive subpopulations in the Baltic Sea adapted to the low salinity, such as the Baltic Sea forms of the Atlantic herring and the lumpsucker, which are smaller than the widespread forms in the North Atlantic.[40]
163
+
164
+ A peculiar feature of the fauna is that it contains a number of glacial relict species, isolated populations of arctic species which have remained in the Baltic Sea since the last glaciation, such as the large isopod Saduria entomon, the Baltic subspecies of ringed seal, and the fourhorn sculpin. Some of these relicts are derived from glacial lakes, such as Monoporeia affinis, which is a main element in the benthic fauna of the low-salinity Bothnian Bay.
165
+
166
+ Cetaceans in the Baltic Sea are monitored under ASCOBANS. Critically endangered populations of Atlantic white-sided dolphins and harbor porpoises inhabit the sea, where white-colored porpoises have been recorded,[54] and occasionally oceanic and out-of-range species such as minke whales,[55] bottlenose dolphins,[56] beluga whales,[57] orcas,[58] and beaked whales[59] visit the waters. In recent years fin whales[60][61][62][63] and humpback whales, including a mother and calf pair,[64] have migrated into the Baltic Sea in very small but increasing numbers. The now-extinct Atlantic grey whale (remains found from Gräsö along the Bothnian Sea/southern Gulf of Bothnia[65] and at Ystad[66]) and the eastern population of the North Atlantic right whale, which is facing functional extinction,[67] once migrated into the Baltic Sea.[68]
167
+
168
+ Other notable megafauna include the basking shark.[69]
169
+
170
+ Satellite images taken in July 2010 revealed a massive algal bloom covering 377,000 square kilometres (146,000 sq mi) in the Baltic Sea. The area of the bloom extended from Germany and Poland to Finland. Researchers of the phenomenon have indicated that algal blooms have occurred every summer for decades. Fertilizer runoff from surrounding agricultural land has exacerbated the problem and led to increased eutrophication.[70]
171
+
172
+ Approximately 100,000 km2 (38,610 sq mi) of the Baltic's seafloor (a quarter of its total area) is a variable dead zone. The more saline (and therefore denser) water remains on the bottom, isolating it from surface waters and the atmosphere. This leads to decreased oxygen concentrations within the zone. It is mainly bacteria that grow in it, digesting organic material and releasing hydrogen sulfide. Because of this large anaerobic zone, the seafloor ecology differs from that of the neighbouring Atlantic.
173
+
174
+ Plans to artificially oxygenate areas of the Baltic that have experienced eutrophication have been proposed by the University of Gothenburg and Inocean AB. The proposal intends to use wind-driven pumps to inject oxygen (air) into waters at or around 130 m below sea level.[71]
175
+
176
+ After World War II, Germany had to be disarmed, and large quantities of ammunition stockpiles were disposed of directly into the Baltic Sea and the North Sea. Environmental experts and marine biologists warn that these ammunition dumps pose a major environmental threat with potentially life-threatening consequences for the health and safety of humans on the coastlines of these seas.[72]
177
+
178
+ Construction of the Great Belt Bridge in Denmark (completed 1997) and the Øresund Bridge-Tunnel (completed 1999), linking Denmark with Sweden, provided a highway and railroad connection between Sweden and the Danish mainland via the island of Zealand. The undersea tunnel of the Øresund Bridge-Tunnel provides for navigation of large ships into and out of the Baltic Sea. The Baltic Sea is the main trade route for the export of Russian petroleum. Many of the countries neighboring the Baltic Sea have been concerned about this, since a major oil leak in a seagoing tanker would be disastrous for the Baltic, given the slow exchange of water. The tourism industry surrounding the Baltic Sea is naturally concerned about oil pollution.
179
+
180
+ Much shipbuilding is carried out in the shipyards around the Baltic Sea. The largest shipyards are at Gdańsk, Gdynia, and Szczecin, Poland; Kiel, Germany; Karlskrona and Malmö, Sweden; Rauma, Turku, and Helsinki, Finland; Riga, Ventspils, and Liepāja, Latvia; Klaipėda, Lithuania; and Saint Petersburg, Russia.
181
+
182
+ There are several cargo and passenger ferries that operate on the Baltic Sea, such as Scandlines, Silja Line, Polferries, the Viking Line, Tallink, and Superfast Ferries.
183
+
184
+ Piers
185
+
186
+
187
+
188
+ Resort towns
189
+
190
+
191
+
192
+ For the first time ever, all the sources of pollution around an entire sea were made subject to a single convention, signed in 1974 by the then seven Baltic coastal states. The 1974 Convention entered into force on 3 May 1980.
193
+
194
+ In the light of political changes and developments in international environmental and maritime law, a new convention was signed in 1992 by all the states bordering the Baltic Sea, and by the European Community. After ratification, the Convention on the Protection of the Marine Environment of the Baltic Sea Area, 1992, entered into force on 17 January 2000. The Convention covers the whole of the Baltic Sea area, including inland waters and the water of the sea itself, as well as the seabed. Measures are also taken in the whole catchment area of the Baltic Sea to reduce land-based pollution.
195
+
196
+ The governing body of the convention is the Helsinki Commission,[73] also known as HELCOM, or Baltic Marine Environment Protection Commission. The present contracting parties are Denmark, Estonia, the European Community, Finland, Germany, Latvia, Lithuania, Poland, Russia and Sweden.
197
+
198
+ The ratification instruments were deposited by the European Community, Germany, Latvia and Sweden in 1994, by Estonia and Finland in 1995, by Denmark in 1996, by Lithuania in 1997, and by Poland and Russia in November 1999.
en/5360.html.txt ADDED
The diff for this file is too large to render. See raw diff
 
en/5361.html.txt ADDED
The diff for this file is too large to render. See raw diff
 
en/5362.html.txt ADDED
@@ -0,0 +1,53 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Sexual reproduction is a type of reproduction that involves a complex life cycle in which a gamete (such as a sperm or egg cell) with a single set of chromosomes (haploid) combines with another to produce an organism composed of cells with two sets of chromosomes (diploid).[1] Sexual reproduction is the most common life cycle in multicellular eukaryotes, such as animals, fungi and plants. Sexual reproduction does not occur in prokaryotes (organisms without cell nuclei), but they have processes with similar effects such as bacterial conjugation, transformation and transduction, which may have been precursors to sexual reproduction in early eukaryotes.
2
+
3
+ In the production of sex cells in eukaryotes, diploid mother cells divide to produce haploid cells known as gametes in a process called meiosis that involves genetic recombination. The homologous chromosomes pair up so that their DNA sequences are aligned with each other, and this is followed by exchange of genetic information between them. Two rounds of cell division then produce four haploid gametes, each with half the number of chromosomes of the original mother cell, but with the genetic information of the parental chromosomes recombined. Two haploid gametes combine into one diploid cell known as a zygote in a process called fertilisation. The zygote incorporates genetic material from both gametes. Multiple cell divisions, without change of the number of chromosomes, then form a multicellular diploid phase or generation.
4
+
5
+ In human reproduction, each cell contains 46 chromosomes in 23 pairs. Meiosis in the parents' gonads produces gametes that each contain only 23 chromosomes that are genetic recombinants of the DNA sequences contained in the parental chromosomes. When the nuclei of the gametes come together to form a fertilized egg or zygote, each cell of the resulting child will have 23 chromosomes from each parent, or 46 in total.[2][3]
6
+
7
+ In plants only, the diploid phase, known as the sporophyte, produces spores by meiosis that germinate and then divide by mitosis to form a haploid multicellular phase, the gametophyte, that produces gametes directly by mitosis. This type of life cycle, involving alternation between two multicellular phases, the sexual haploid gametophyte and asexual diploid sporophyte, is known as alternation of generations.
8
+
9
+ The evolution of sexual reproduction is considered paradoxical,[3] because asexual reproduction should be able to outperform it, as every young organism created can bear its own young. This implies that an asexual population has an intrinsic capacity to grow more rapidly with each generation, since in a sexual population roughly half the individuals are males, which cannot themselves bear young.[4] This 50% cost is a fitness disadvantage of sexual reproduction.[5] The two-fold cost of sex includes this cost and the fact that any organism can only pass on 50% of its own genes to its offspring. One definite advantage of sexual reproduction is that it impedes the accumulation of genetic mutations.[6]
10
+
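+ The growth gap behind this cost can be made concrete with a small sketch (our illustration, with invented starting numbers, not a model from the source): in a sexual population with a 1:1 sex ratio only half the individuals bear young, so an otherwise identical asexual population doubles its per-generation output.
+
+ # Illustrative sketch of the two-fold cost of sex (all parameters invented).
+ def population_after(generations: int, offspring_per_bearer: int, sexual: bool) -> int:
+     population = 2  # start from a single pair
+     for _ in range(generations):
+         bearers = population // 2 if sexual else population  # sexual: only females bear young
+         population = bearers * offspring_per_bearer
+     return population
+
+ print(population_after(4, 2, sexual=True))   # 2: each pair merely replaces itself
+ print(population_after(4, 2, sexual=False))  # 32: the population doubles every generation
+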
11
+ Sexual selection is a mode of natural selection in which some individuals out-reproduce others of a population because they are better at securing mates for sexual reproduction.[7][8] It has been described as "a powerful evolutionary force that does not exist in asexual populations."[9]
12
+
13
+ The first fossilized evidence of sexual reproduction in eukaryotes is from the Stenian period, about 1 to 1.2 billion years ago.[10]
14
+
15
+ Biologists studying evolution propose several explanations for the development of sexual reproduction and its maintenance. These reasons include reducing the likelihood of the accumulation of deleterious mutations, increasing the rate of adaptation to changing environments,[11] dealing with competition, DNA repair and masking deleterious mutations.[12][13][14] All of these ideas about why sexual reproduction has been maintained are generally supported, but ultimately the size of the population determines whether sexual reproduction is entirely beneficial. Larger populations appear to respond more quickly to some of the benefits obtained through sexual reproduction than do smaller populations.[15]
16
+
17
+ Maintenance of sexual reproduction has been explained by theories that work at several levels of selection, though some of these models remain controversial.[citation needed] However, newer models presented in recent years suggest a basic advantage for sexual reproduction in slowly reproducing complex organisms.
18
+
19
+ Sexual reproduction allows species to exhibit characteristics that depend on the specific environment they inhabit, and on the particular survival strategies they employ.[16]
20
+
21
+ In order to reproduce sexually, both males and females need to find a mate. Generally in animals mate choice is made by females, while males compete to be chosen. This can lead organisms to extreme efforts in order to reproduce, such as combat and display, or to produce extreme features caused by a positive feedback known as a Fisherian runaway. Thus sexual reproduction, as a form of natural selection, has an effect on evolution. Sexual dimorphism is the condition in which basic phenotypic traits vary between males and females of the same species. Dimorphism is found in both sex organs and in secondary sex characteristics, body size, physical strength and morphology, biological ornamentation, behavior and other bodily traits. However, sexual dimorphism arises only when sexual selection acts over an extended period of time.[17]
22
+
23
+ Apart from some eusocial wasps, organisms which reproduce sexually have a 1:1 sex ratio of male and female births. The English statistician and biologist Ronald Fisher outlined why this is so in what has come to be known as Fisher's principle.[18] In essence, the argument runs as follows: if one sex becomes rarer, individuals of that sex enjoy better mating prospects, so parents genetically disposed to produce the rarer sex have, on average, more grandchildren; genes for producing the rarer sex therefore spread until the ratio returns to 1:1, at which point the advantage disappears.
24
+
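+ A minimal numerical sketch of that argument (our illustration; the 300:700 split is invented): since every offspring has exactly one father and one mother, the two sexes account for equal totals of offspring, so the rarer sex does better per capita.
+
+ # Per-capita reproductive success of each sex under a skewed sex ratio (illustrative only).
+ def per_capita_success(males: int, females: int, total_offspring: int = 1000):
+     return total_offspring / males, total_offspring / females
+
+ male_payoff, female_payoff = per_capita_success(males=300, females=700)
+ print(male_payoff > female_payoff)  # True: when males are rare, producing sons pays more
+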
25
+ Insect species make up more than two-thirds of all extant animal species. Most insect species reproduce sexually, though some species are facultatively parthenogenetic. Many insect species have sexual dimorphism, while in others the sexes look nearly identical. Typically they have two sexes, with males producing spermatozoa and females ova. The ova develop into eggs that have a covering called the chorion, which forms before internal fertilization. Insects have very diverse mating and reproductive strategies, most often resulting in the male depositing a spermatophore within the female, which she stores until she is ready for egg fertilization. After fertilization, and the formation of a zygote, and varying degrees of development, in many species the eggs are deposited outside the female; while in others, they develop further within the female and are born live.
26
+
27
+ There are three extant kinds of mammals: monotremes, placentals and marsupials, all with internal fertilization. In placental mammals, offspring are born as juveniles: complete animals with the sex organs present although not reproductively functional. After several months or years, depending on the species, the sex organs develop further to maturity and the animal becomes sexually mature. Most female mammals are only fertile during certain periods during their estrous cycle, at which point they are ready to mate. Individual male and female mammals meet and carry out copulation.[citation needed] For most mammals, males and females exchange sexual partners throughout their adult lives.[19][20][21]
28
+
29
+ The vast majority of fish species lay eggs that are then fertilized by the male.[22] Some species lay their eggs on a substrate like a rock or on plants, while others scatter their eggs and the eggs are fertilized as they drift or sink in the water column.
30
+
31
+ Some fish species use internal fertilization and then disperse the developing eggs or give birth to live offspring. Fish that have live-bearing offspring include the guppy and mollies, or Poecilia. Fishes that give birth to live young can be ovoviviparous, where the eggs are fertilized within the female and simply hatch within her body, or, as in seahorses, the male may carry the developing young within a pouch and give birth to live young.[23] Fishes can also be viviparous, where the female supplies nourishment to the internally growing offspring. Some fish are hermaphrodites, where a single fish is both male and female and can produce eggs and sperm. In hermaphroditic fish, some are male and female at the same time, while other fish are serially hermaphroditic, starting as one sex and changing to the other. In at least one hermaphroditic species, self-fertilization occurs when the eggs and sperm are released together. Internal self-fertilization may occur in some other species.[24] One fish species does not reproduce by sexual reproduction but uses sex to produce offspring: Poecilia formosa is a unisex species that uses a form of parthenogenesis called gynogenesis, where unfertilized eggs develop into embryos that produce female offspring. Poecilia formosa mate with males of other fish species that use internal fertilization; the sperm does not fertilize the eggs but stimulates their growth into embryos.[25]
32
+
33
+ Animals have life cycles with a single diploid multicellular phase that produces haploid gametes directly by meiosis. Male gametes are called sperm, and female gametes are called eggs or ova. In animals, fertilization of the ovum by a sperm results in the formation of a diploid zygote that develops by repeated mitotic divisions into a diploid adult. Plants have two multicellular life-cycle phases, resulting in an alternation of generations. Plant zygotes germinate and divide repeatedly by mitosis to produce a diploid multicellular organism known as the sporophyte. The mature sporophyte produces haploid spores by meiosis that germinate and divide by mitosis to form a multicellular gametophyte phase that produces gametes at maturity. The gametophytes of different groups of plants vary in size. Mosses and ferns may have gametophytes consisting of several million cells, while angiosperms have as few as three cells in each pollen grain.
34
+
35
+ Flowering plants are the dominant plant form on land and they reproduce either sexually or asexually. Often their most distinguishing feature is their reproductive organs, commonly called flowers. The anther produces pollen grains which contain the male gametophytes that produce sperm nuclei. For pollination to occur, pollen grains must attach to the stigma of the female reproductive structure (carpel), where the female gametophytes (ovules) are located inside the ovary. After the pollen tube grows through the carpel's style, the sex cell nuclei from the pollen grain migrate into the ovule to fertilize the egg cell and endosperm nuclei within the female gametophyte in a process termed double fertilization. The resulting zygote develops into an embryo, while the triploid endosperm (one sperm cell plus two female cells) and female tissues of the ovule give rise to the surrounding tissues in the developing seed. The ovary, which produced the female gametophyte(s), then grows into a fruit, which surrounds the seed(s). Plants may either self-pollinate or cross-pollinate.
36
+
37
+ In 2013, flowers dating from the Cretaceous (100 million years before present) were found encased in amber, the oldest evidence of sexual reproduction in a flowering plant. Microscopic images showed tubes growing out of pollen and penetrating the flower's stigma. The pollen was sticky, suggesting it was carried by insects.[26]
38
+
39
+ Nonflowering plants like ferns, mosses and liverworts use other means of sexual reproduction.
40
+
41
+ Ferns mostly produce large diploid sporophytes with rhizomes, roots and leaves. Fertile leaves produce sporangia that contain haploid spores. The spores are released and germinate to produce short, thin gametophytes that are typically heart shaped, small and green in color. The gametophyte thalli produce both motile sperm in the antheridia and egg cells in archegonia on the same or different plants. After rains, or when dew deposits a film of water, the motile sperm are splashed away from the antheridia, which are normally produced on the top side of the thallus, and swim in the film of water to the archegonia, where they fertilize the egg. To promote outcrossing or cross-fertilization, the sperm are released before the eggs are receptive to them, making it more likely that the sperm will fertilize the eggs of a different thallus. After fertilization, a zygote is formed, which grows into a new sporophytic plant. The condition of having separate sporophyte and gametophyte plants is called alternation of generations. Other plants with similar life cycles include Psilotum, Lycopodium, Selaginella and Equisetum.
42
+
43
+ The bryophytes, which include liverworts, hornworts and mosses, reproduce both sexually and vegetatively. They are small plants found growing in moist locations and like ferns, have motile sperm with flagella and need water to facilitate sexual reproduction. These plants start as a haploid spore that grows into the dominant gametophyte form, which is a multicellular haploid body with leaf-like structures that photosynthesize. Haploid gametes are produced in antheridia (male) and archegonia (female) by mitosis. The sperm released from the antheridia respond to chemicals released by ripe archegonia and swim to them in a film of water and fertilize the egg cells thus producing a zygote. The zygote divides by mitotic division and grows into a multicellular, diploid sporophyte. The sporophyte produces spore capsules (sporangia), which are connected by stalks (setae) to the archegonia. The spore capsules produce spores by meiosis and when ripe the capsules burst open to release the spores. Bryophytes show considerable variation in their reproductive structures and the above is a basic outline. Also in some species each plant is one sex (dioicous) while other species produce both sexes on the same plant (monoicous).[27]
44
+
45
+ Fungi are classified by the methods of sexual reproduction they employ. The outcome of sexual reproduction most often is the production of resting spores that are used to survive inclement times and to spread. There are typically three phases in the sexual reproduction of fungi: plasmogamy, karyogamy and meiosis. The cytoplasm of two parent cells fuse during plasmogamy and the nuclei fuse during karyogamy. New haploid gametes are formed during meiosis and develop into spores. The adaptive basis for the maintenance of sexual reproduction in the Ascomycota and Basidiomycota (dikaryon) fungi was reviewed by Wallen and Perlin.[28] They concluded that the most plausible reason for maintaining this capability is the benefit of repairing DNA damage, caused by a variety of stresses, through recombination that occurs during meiosis.[28]
46
+
47
+ Three distinct processes in prokaryotes are regarded as similar to eukaryotic sex: bacterial transformation, which involves the incorporation of foreign DNA into the bacterial chromosome; bacterial conjugation, which is a transfer of plasmid DNA between bacteria, but the plasmids are rarely incorporated into the bacterial chromosome; and gene transfer and genetic exchange in archaea.
48
+
49
+ Bacterial transformation involves the recombination of genetic material and its function is mainly associated with DNA repair. Bacterial transformation is a complex process encoded by numerous bacterial genes, and is a bacterial adaptation for DNA transfer.[12][13] This process occurs naturally in at least 40 bacterial species.[29] For a bacterium to bind, take up, and recombine exogenous DNA into its chromosome, it must enter a special physiological state referred to as competence (see Natural competence). Sexual reproduction in early single-celled eukaryotes may have evolved from bacterial transformation,[14] or from a similar process in archaea (see below).
50
+
51
+ On the other hand, bacterial conjugation is a type of direct transfer of DNA between two bacteria mediated by an external appendage called the conjugation pilus.[30] Bacterial conjugation is controlled by plasmid genes that are adapted for spreading copies of the plasmid between bacteria. The infrequent integration of a plasmid into a host bacterial chromosome, and the subsequent transfer of a part of the host chromosome to another cell do not appear to be bacterial adaptations.[12][31]
52
+
53
+ Exposure of hyperthermophilic archaeal Sulfolobus species to DNA damaging conditions induces cellular aggregation accompanied by high frequency genetic marker exchange.[32][33] Ajon et al.[33] hypothesized that this cellular aggregation enhances species-specific DNA repair by homologous recombination. DNA transfer in Sulfolobus may be an early form of sexual interaction similar to the more well-studied bacterial transformation systems that also involve species-specific DNA transfer leading to homologous recombinational repair of DNA damage.
en/5363.html.txt ADDED
@@ -0,0 +1,273 @@
1
+
2
+
3
+ Snakes are elongated, legless, carnivorous reptiles of the suborder Serpentes.[2] Like all other squamates, snakes are ectothermic, amniote vertebrates covered in overlapping scales. Many species of snakes have skulls with several more joints than their lizard ancestors, enabling them to swallow prey much larger than their heads with their highly mobile jaws. To accommodate their narrow bodies, snakes' paired organs (such as kidneys) appear one in front of the other instead of side by side, and most have only one functional lung. Some species retain a pelvic girdle with a pair of vestigial claws on either side of the cloaca. Lizards have evolved elongate bodies without limbs or with greatly reduced limbs about twenty-five times independently via convergent evolution, leading to many lineages of legless lizards.[3] Legless lizards resemble snakes, but several common groups of legless lizards have eyelids and external ears, which snakes lack, although this rule is not universal (see Amphisbaenia, Dibamidae, and Pygopodidae).
4
+
5
+ Living snakes are found on every continent except Antarctica, and on most smaller land masses; exceptions include some large islands, such as Ireland, Iceland, Greenland, the Hawaiian archipelago, and the islands of New Zealand, and many small islands of the Atlantic and central Pacific oceans.[4] Additionally, sea snakes are widespread throughout the Indian and Pacific Oceans. More than 20 families are currently recognized, comprising about 520 genera and about 3,600 species.[5][6] They range in size from the tiny, 10.4 cm (4.1 in)-long Barbados thread snake[7] to the reticulated python of 6.95 meters (22.8 ft) in length.[8] The fossil species Titanoboa cerrejonensis was 12.8 meters (42 ft) long.[9] Snakes are thought to have evolved from either burrowing or aquatic lizards, perhaps during the Jurassic period, with the earliest known fossils dating to between 143 and 167 Ma ago.[10] The diversity of modern snakes appeared during the Paleocene epoch (c. 66 to 56 Ma ago, after the Cretaceous–Paleogene extinction event). The oldest preserved descriptions of snakes can be found in the Brooklyn Papyrus.
6
+
7
+ Most species are nonvenomous and those that have venom use it primarily to kill and subdue prey rather than for self-defense. Some possess venom potent enough to cause painful injury or death to humans. Nonvenomous snakes either swallow prey alive or kill by constriction.
8
+
9
+ The English word snake comes from Old English snaca, itself from Proto-Germanic *snak-an- (cf. German Schnake "ring snake", Swedish snok "grass snake"), from the Proto-Indo-European root *(s)nēg-o- "to crawl", "to creep", which also gave sneak, as well as Sanskrit nāgá "snake".[11] The word ousted adder as the general term, as adder went on to narrow in meaning, though in Old English næddre was the general word for snake.[12] The other term, serpent, is from French, ultimately from Indo-European *serp- "to creep",[13] which also gave Ancient Greek hérpō (ἕρπω) "I crawl".
10
+
11
+ Leptotyphlopidae
12
+
13
+ Anomalepididae
14
+
15
+ Typhlopidae
16
+
17
+ Anilius
18
+
19
+ Tropidophiidae
20
+
21
+ Uropeltidae
22
+
23
+ Anomochilus
24
+
25
+ Cylindrophis
26
+
27
+ Pythonidae
28
+
29
+ Xenopeltis
30
+
31
+ Loxocemus
32
+
33
+ Acrochordidae
34
+
35
+ Xenodermidae
36
+
37
+ Pareidae
38
+
39
+ Viperidae
40
+
41
+ Homalopsidae
42
+
43
+ Lamprophiidae
44
+
45
+ Elapidae
46
+
47
+ Colubridae
48
+
49
+ Boidae
50
+
51
+ Erycinae
52
+
53
+ Calabaria
54
+
55
+ Ungaliophiinae
56
+
57
+ Sanzinia
58
+
59
+ Candoia
60
+
61
+ The fossil record of snakes is relatively poor because snake skeletons are typically small and fragile, making fossilization uncommon. Fossils readily identifiable as snakes (though often retaining hind limbs) first appear in the fossil record during the Cretaceous period.[15] The earliest known true snake fossils (members of the crown group Serpentes) come from the marine simoliophiids, the oldest of which is the Late Cretaceous (Cenomanian age) Haasiophis terrasanctus,[1] dated to between 112 and 94 million years old.[16]
62
+
63
+ Based on comparative anatomy, there is consensus that snakes descended from lizards.[17]:11[18] Pythons and boas—primitive groups among modern snakes—have vestigial hind limbs: tiny, clawed digits known as anal spurs, which are used to grasp during mating.[17]:11[19] The families Leptotyphlopidae and Typhlopidae also possess remnants of the pelvic girdle, appearing as horny projections when visible.
64
+
65
+ Front limbs are nonexistent in all known snakes. This is due to the evolution of their Hox genes, which control limb morphogenesis. The axial skeleton of the snakes’ common ancestor, like that of most other tetrapods, had regional specializations consisting of cervical (neck), thoracic (chest), lumbar (lower back), sacral (pelvic), and caudal (tail) vertebrae. Early in snake evolution, the Hox gene expression in the axial skeleton responsible for the development of the thorax became dominant. As a result, the vertebrae anterior to the hindlimb buds (when present) all have the same thoracic-like identity (except for the atlas, axis, and 1–3 neck vertebrae). In other words, most of a snake's skeleton is an extremely extended thorax. Ribs are found exclusively on the thoracic vertebrae. Neck, lumbar and pelvic vertebrae are very reduced in number (only 2–10 lumbar and pelvic vertebrae are present), while only a short tail remains of the caudal vertebrae. However, the tail is still long enough to serve important functions in many species, and is modified in some aquatic and tree-dwelling species.
66
+
67
+ Many modern snake groups originated during the Paleocene, alongside the adaptive radiation of mammals following the extinction of (non-avian) dinosaurs. The expansion of grasslands in North America also led to an explosive radiation among snakes.[20] Previously, snakes were a minor component of the North American fauna, but during the Miocene, the number of species and their prevalence increased dramatically with the first appearances of vipers and elapids in North America and the significant diversification of Colubridae (including the origin of many modern genera such as Nerodia, Lampropeltis, Pituophis, and Pantherophis).[20]
68
+
69
+ There is fossil evidence to suggest that snakes may have evolved from burrowing lizards, such as the varanids (or a similar group) during the Cretaceous Period.[21] An early fossil snake relative, Najash rionegrina, was a two-legged burrowing animal with a sacrum, and was fully terrestrial.[22] One extant analog of these putative ancestors is the earless monitor Lanthanotus of Borneo (though it also is semiaquatic).[23] Subterranean species evolved bodies streamlined for burrowing, and eventually lost their limbs.[23] According to this hypothesis, features such as the transparent, fused eyelids (brille) and loss of external ears evolved to cope with fossorial difficulties, such as scratched corneas and dirt in the ears.[21][23] Some primitive snakes are known to have possessed hindlimbs, but their pelvic bones lacked a direct connection to the vertebrae. These include fossil species like Haasiophis, Pachyrhachis and Eupodophis, which are slightly older than Najash.[19]
70
+
71
+ This hypothesis was strengthened in 2015 by the discovery of a 113-million-year-old fossil of a four-legged snake in Brazil, named Tetrapodophis amplectus. It has many snake-like features, is adapted for burrowing, and its stomach contents indicate that it preyed on other animals.[24] It is currently uncertain whether Tetrapodophis is a snake or another kind of squamate, as a snake-like body has independently evolved at least 26 times; Tetrapodophis does not have distinctive snake features in its spine and skull.[25][26]
72
+
73
+ An alternative hypothesis, based on morphology, suggests the ancestors of snakes were related to mosasaurs—extinct aquatic reptiles from the Cretaceous—which in turn are thought to have derived from varanid lizards.[18] According to this hypothesis, the fused, transparent eyelids of snakes are thought to have evolved to combat marine conditions (corneal water loss through osmosis), and the external ears were lost through disuse in an aquatic environment. This ultimately led to an animal similar to today's sea snakes. In the Late Cretaceous, snakes recolonized land, and continued to diversify into today's snakes. Fossilized snake remains are known from early Late Cretaceous marine sediments, which is consistent with this hypothesis; particularly so, as they are older than the terrestrial Najash rionegrina. Similar skull structure, reduced or absent limbs, and other anatomical features found in both mosasaurs and snakes lead to a positive cladistical correlation, although some of these features are shared with varanids.[citation needed]
74
+
75
+ Tetrapodophis
76
+
77
+ Pachyrhachis
78
+
79
+ Eupodophis descouensi
80
+
81
+ Eupodophis descouensi hind leg
82
+
83
+ Genetic studies in recent years have indicated snakes are not as closely related to monitor lizards as was once believed—and therefore not to mosasaurs, the proposed ancestor in the aquatic scenario of their evolution. However, more evidence links mosasaurs to snakes than to varanids. Fragmented remains found from the Jurassic and Early Cretaceous indicate deeper fossil records for these groups, which may potentially refute either hypothesis.[27][28]
84
+
85
+ In 2016 two studies reported that limb loss in snakes is associated with DNA mutations in the Zone of Polarizing Activity Regulatory Sequence (ZRS), a regulatory region of the sonic hedgehog gene which is critically required for limb development. More advanced snakes have no remnants of limbs, but basal snakes such as pythons and boas do have traces of highly reduced, vestigial hind limbs. Python embryos even have fully developed hind limb buds, but their later development is stopped by the DNA mutations in the ZRS.[29][30][31][32]
86
+
87
+ There are over 2,900 species of snakes ranging as far northward as the Arctic Circle in Scandinavia and southward through Australia.[18] Snakes can be found on every continent except Antarctica, in the sea, and as high as 16,000 feet (4,900 m) in the Himalayan Mountains of Asia.[18][33]:143 There are numerous islands from which snakes are absent, such as Ireland, Iceland, and New Zealand[4][33] (although New Zealand's waters are infrequently visited by the yellow-bellied sea snake and the banded sea krait).[34]
88
+
89
+ All modern snakes are grouped within the suborder Serpentes in Linnean taxonomy, part of the order Squamata, though their precise placement within squamates remains controversial.[5]
90
+
91
+ The two infraorders of Serpentes are: Alethinophidia and Scolecophidia.[5] This separation is based on morphological characteristics and mitochondrial DNA sequence similarity. Alethinophidia is sometimes split into Henophidia and Caenophidia, with the latter consisting of "colubroid" snakes (colubrids, vipers, elapids, hydrophiids, and atractaspids) and acrochordids, while the other alethinophidian families comprise Henophidia.[35] While not extant today, the Madtsoiidae, a family of giant, primitive, python-like snakes, was around until 50,000 years ago in Australia, represented by genera such as Wonambi.
92
+
93
+ There are numerous debates in the systematics within the group. For instance, many sources classify Boidae and Pythonidae as one family, while some keep the Elapidae and Hydrophiidae (sea snakes) separate for practical reasons despite their extremely close relation.
94
+
95
+ Recent molecular studies support the monophyly of the clades of modern snakes, scolecophidians, typhlopids + anomalepidids, alethinophidians, core alethinophidians, uropeltids (Cylindrophis, Anomochilus, uropeltines), macrostomatans, booids, boids, pythonids and caenophidians.[14]
96
+
97
+
98
+
99
+ While snakes are limbless reptiles, which evolved from (and are grouped with) lizards, there are many other species of lizards which have lost their limbs independently and superficially look similar to snakes. These include the slowworm and glass snake.
100
+
101
+ Other serpentine tetrapods unrelated to snakes include caecilians (amphibians), amphisbaenians (near-lizard squamates), and the extinct aistopods (amphibians).
102
+
103
+ The now extinct Titanoboa cerrejonensis snakes were 12.8 m (42 ft) in length.[9] By comparison, the largest extant snakes are the reticulated python, which measures about 6.95 m (22.8 ft) long,[8] and the green anaconda, which measures about 5.21 m (17.1 ft) long and is considered the heaviest snake on Earth at 97.5 kg (215 lb).[39]
104
+
105
+ At the other end of the scale, the smallest extant snake is Leptotyphlops carlae, with a length of about 10.4 cm (4.1 in).[7] Most snakes are fairly small animals, approximately 1 m (3.3 ft) in length.[40]
106
+
107
+ Pit vipers, pythons, and some boas have infrared-sensitive receptors in deep grooves on the snout, which allow them to "see" the radiated heat of warm-blooded prey. In pit vipers, the grooves are located between the nostril and the eye in a large "pit" on each side of the head. Other infrared-sensitive snakes have multiple, smaller labial pits lining the upper lip, just below the nostrils.[41]
108
+
109
+ Snakes use smell to track their prey. They smell by using their forked tongues to collect airborne particles, then passing them to the vomeronasal organ or Jacobson's organ in the mouth for examination.[41] The fork in the tongue gives snakes a sort of directional sense of smell and taste simultaneously.[41] They keep their tongues constantly in motion, sampling particles from the air, ground, and water, analyzing the chemicals found, and determining the presence of prey or predators in the local environment. In water-dwelling snakes, such as the anaconda, the tongue functions efficiently underwater.[41]
110
+
111
+ The underside is very sensitive to vibration, allowing snakes to sense approaching animals by detecting faint vibrations in the ground.[41]
112
+
113
+ Snake vision varies greatly, from only being able to distinguish light from dark to keen eyesight, but the main trend is that vision is adequate, although not sharp, and allows snakes to track movement.[42] Generally, vision is best in arboreal snakes and weakest in burrowing snakes. Some snakes, such as the Asian vine snake (genus Ahaetulla), have binocular vision, with both eyes capable of focusing on the same point. Most snakes focus by moving the lens back and forth in relation to the retina, while in the other amniote groups the lens is stretched. Many nocturnal snakes have slit pupils while diurnal snakes have round pupils. Most species possess three visual pigments and are probably able to see two primary colors in daylight. The last common ancestor of all snakes is thought to have had UV-sensitive vision, but most snakes that depend on eyesight to hunt in daylight have evolved lenses that act as sunglasses, filtering out UV light and probably also sharpening vision by improving contrast.[43]
114
+
115
+ The skin of a snake is covered in scales. Contrary to the popular notion of snakes being slimy because of possible confusion of snakes with worms, snakeskin has a smooth, dry texture. Most snakes use specialized belly scales to travel, gripping surfaces. The body scales may be smooth, keeled, or granular. The eyelids of a snake are transparent "spectacle" scales, which remain permanently closed, also known as brille.
116
+
117
+ The shedding of scales is called ecdysis (or in normal usage, molting or sloughing). In the case of snakes, the complete outer layer of skin is shed in one layer.[44] Snake scales are not discrete, but extensions of the epidermis—hence they are not shed separately but as a complete outer layer during each molt, akin to a sock being turned inside out.[45]
118
+
119
+ Snakes have a wide diversity of skin coloration patterns, which are often related to behavior, such as the tendency to flee from predators. Snakes that are plain or have longitudinal stripes often must escape from predators; the pattern (or lack thereof) gives predators no reference points, allowing the snake to escape without being noticed. Plain snakes usually adopt active hunting strategies, as their pattern conveys little information about motion to prey. Blotched snakes, on the other hand, usually use ambush-based strategies, likely because the pattern helps them blend into an environment with irregularly shaped objects, like sticks or rocks. Spotted patterning can similarly help snakes blend into their environment.[46]
120
+
121
+ The shape and number of scales on the head, back, and belly are often characteristic and used for taxonomic purposes. Scales are named mainly according to their positions on the body. In "advanced" (Caenophidian) snakes, the broad belly scales and rows of dorsal scales correspond to the vertebrae, allowing scientists to count the vertebrae without dissection.
122
+
123
+ Molting, or ecdysis, serves a number of functions. Firstly, the old and worn skin is replaced; secondly, it helps get rid of parasites such as mites and ticks. Renewal of the skin by molting is supposed to allow growth in some animals such as insects; however, this has been disputed in the case of snakes.[45][47]
124
+
125
+ Molting occurs periodically throughout the snake's life. Before a molt, the snake stops eating and often hides or moves to a safe place. Just before shedding, the skin becomes dull and dry looking and the eyes become cloudy or blue-colored. The inner surface of the old skin liquefies. This causes the old skin to separate from the new skin beneath it. After a few days, the eyes clear and the snake "crawls" out of its old skin. The old skin breaks near the mouth and the snake wriggles out, aided by rubbing against rough surfaces. In many cases, the cast skin peels backward over the body from head to tail in one piece, like pulling a sock off inside-out. A new, larger, brighter layer of skin has formed underneath.[45][48]
126
+
127
+ An older snake may shed its skin only once or twice a year. But a younger snake, still growing, may shed up to four times a year.[48] The discarded skin gives a perfect imprint of the scale pattern, and it is usually possible to identify the snake if the discarded skin is reasonably intact.[45] This periodic renewal has led to the snake being a symbol of healing and medicine, as pictured in the Rod of Asclepius.[49]
128
+
129
+ Scale counts can sometimes be used to tell the sex of a snake when the species is not distinctly sexually dimorphic. A probe is inserted into the cloaca until it can go no further, marked at the point where it stops, removed, and compared to the subcaudal depth by laying it alongside the scales.[50] The resulting scale count indicates whether the snake is male or female, as the hemipenes of a male will probe to a different depth (usually greater) than the cloaca of a female.[50]
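+
+ As an illustrative sketch of how such a comparison might be coded (the depth thresholds below are entirely hypothetical placeholders, not published values; real ranges are species-specific and must come from a herpetological reference):
+
+ def classify_sex_by_probe(depth_in_subcaudals, female_max=3, male_min=6):
+     # female_max and male_min are hypothetical illustration values,
+     # not published thresholds for any real species.
+     if depth_in_subcaudals >= male_min:
+         return "likely male (probe enters an inverted hemipenis)"
+     if depth_in_subcaudals <= female_max:
+         return "likely female (shallow cloacal depth)"
+     return "ambiguous: consult species-specific reference data"
+
+ print(classify_sex_by_probe(8))  # likely male
+ print(classify_sex_by_probe(2))  # likely female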
130
+
131
+ The skeleton of most snakes consists solely of the skull, hyoid, vertebral column, and ribs, though henophidian snakes retain vestiges of the pelvis and rear limbs.
132
+
133
+ The skull of the snake consists of a solid and complete neurocranium, to which many of the other bones are only loosely attached, particularly the highly mobile jaw bones, which facilitate manipulation and ingestion of large prey items. The left and right sides of the lower jaw are joined only by a flexible ligament at the anterior tips, allowing them to separate widely, while the posterior ends of the lower jaw bones each articulate with a quadrate bone, allowing further mobility. The mandible and quadrate bones can also pick up ground-borne vibrations.[51] Because the sides of the jaw can move independently of one another, snakes resting their jaws on a surface have sensitive stereo hearing which can detect the position of prey. The jaw-quadrate-stapes pathway is capable of detecting vibrations on the angstrom scale, despite the absence of an outer ear and the ossicle mechanism of impedance matching used in other vertebrates to receive vibrations from the air.[52][53]
134
+
135
+ The hyoid is a small bone located posterior and ventral to the skull, in the 'neck' region, which serves as an attachment for muscles of the snake's tongue, as it does in all other tetrapods.
136
+
137
+ The vertebral column consists of anywhere between 200 and 400 (or more) vertebrae. Tail vertebrae are comparatively few in number (often less than 20% of the total) and lack ribs, while body vertebrae each have two ribs articulating with them. The vertebrae have projections that allow for strong muscle attachment enabling locomotion without limbs.
138
+
139
+ Autotomy of the tail, a feature found in some lizards is absent in most snakes.[54] Caudal autotomy in snakes is rare and is intervertebral, unlike that in lizards, which is intravertebral—that is, the break happens along a predefined fracture plane present on a vertebra.[55][56]
140
+
141
+ In some snakes, most notably boas and pythons, there are vestiges of the hindlimbs in the form of a pair of pelvic spurs. These small, claw-like protrusions on each side of the cloaca are the external portion of the vestigial hindlimb skeleton, which includes the remains of an ilium and femur.
142
+
143
+ Snakes are polyphyodonts with teeth that are continuously replaced.[57]
144
+
145
+ Snakes and other reptiles have a three-chambered heart consisting of a left and right atrium and one ventricle.[58] Internally, the ventricle is divided into three interconnected cavities: the cavum arteriosum, the cavum pulmonale, and the cavum venosum.[59] The cavum venosum receives deoxygenated blood from the right atrium, while the cavum arteriosum receives oxygenated blood directly from the left atrium. Located beneath the cavum venosum is the cavum pulmonale, which pumps blood to the pulmonary trunk.[60]
146
+
147
+ The snake's heart is encased in a sac, called the pericardium, located at the bifurcation of the bronchi. The heart is able to move around, however, owing to the lack of a diaphragm. This adjustment protects the heart from potential damage when large ingested prey is passed through the esophagus. The spleen is attached to the gall bladder and pancreas and filters the blood. The thymus is located in fatty tissue above the heart and is responsible for the generation of immune cells in the blood. The cardiovascular system of snakes is also unique for the presence of a renal portal system in which the blood from the snake's tail passes through the kidneys before returning to the heart.[61]
148
+
149
+ The vestigial left lung is often small or sometimes even absent, as snakes' tubular bodies require all of their organs to be long and thin.[61] In the majority of species, only one lung is functional. This lung contains a vascularized anterior portion and a posterior portion that does not function in gas exchange.[61] This 'saccular lung' is used for hydrostatic purposes to adjust buoyancy in some aquatic snakes and its function remains unknown in terrestrial species.[61] Many organs that are paired, such as kidneys or reproductive organs, are staggered within the body, with one located ahead of the other.[61]
150
+
151
+ Snakes have no lymph nodes.[61]
152
+
153
+ Cobras, vipers, and closely related species use venom to immobilize, injure or kill their prey. The venom is modified saliva, delivered through fangs.[17]:243 The fangs of 'advanced' venomous snakes like viperids and elapids are hollow to inject venom more effectively, while the fangs of rear-fanged snakes such as the boomslang merely have a groove on the posterior edge to channel venom into the wound. Snake venoms are often prey specific—their role in self-defense is secondary.[17]:243
154
+
155
+ Venom, like all salivary secretions, is a predigestant that initiates the breakdown of food into soluble compounds, facilitating proper digestion. Even nonvenomous snake bites (like any animal bite) will cause tissue damage.[17]:209
156
+
157
+ Certain birds, mammals, and other snakes (such as kingsnakes) that prey on venomous snakes have developed resistance and even immunity to certain venoms.[17]:243 Venomous snakes include three families of snakes, and do not constitute a formal classification group used in taxonomy.
158
+
159
+ The colloquial term "poisonous snake" is generally an incorrect label for snakes. A poison is inhaled or ingested, whereas venom produced by snakes is injected into its victim via fangs.[62] There are, however, two exceptions: Rhabdophis sequesters toxins from the toads it eats, then secretes them from nuchal glands to ward off predators, and a small unusual population of garter snakes in the U.S. state of Oregon retains enough toxins in their livers from the newts they eat to be effectively poisonous to small local predators (such as crows and foxes).[63]
160
+
161
+ Snake venoms are complex mixtures of proteins, and are stored in venom glands at the back of the head.[63] In all venomous snakes, these glands open through ducts into grooved or hollow teeth in the upper jaw.[17]:243[62] These proteins can potentially be a mix of neurotoxins (which attack the nervous system), hemotoxins (which attack the circulatory system), cytotoxins, bungarotoxins and many other toxins that affect the body in different ways.[62] Almost all snake venom contains hyaluronidase, an enzyme that ensures rapid diffusion of the venom.[17]:243
162
+
163
+ Venomous snakes that use hemotoxins usually have fangs in the front of their mouths, making it easier for them to inject the venom into their victims.[62] Some snakes that use neurotoxins (such as the mangrove snake) have fangs in the back of their mouths, with the fangs curled backwards.[64] This makes it difficult both for the snake to use its venom and for scientists to milk them.[62] Elapids, however, such as cobras and kraits are proteroglyphous—they possess hollow fangs that cannot be erected toward the front of their mouths, and cannot "stab" like a viper. They must actually bite the victim.[17]:242
164
+
165
+ It has recently been suggested that all snakes may be venomous to a certain degree, with harmless snakes having weak venom and no fangs.[65] Most snakes currently labelled "nonvenomous" would still be considered harmless according to this theory, as they either lack a venom delivery method or are incapable of delivering enough to endanger a human. This theory postulates that snakes may have evolved from a common lizard ancestor that was venomous—and that venomous lizards like the gila monster, beaded lizard, monitor lizards, and the now-extinct mosasaurs may also have derived from it. They share this venom clade with various other saurian species.
166
+
167
+ Venomous snakes are classified in two taxonomic families:
168
+
169
+ There is a third family containing the opistoglyphous (rear-fanged) snakes (as well as the majority of other snake species):
170
+
171
+ Although a wide range of reproductive modes are used by snakes, all snakes employ internal fertilization. This is accomplished by means of paired, forked hemipenes, which are stored, inverted, in the male's tail.[66] The hemipenes are often grooved, hooked, or spined in order to grip the walls of the female's cloaca.[67][66]
172
+
173
+ Most species of snakes lay eggs which they abandon shortly after laying. However, a few species (such as the king cobra) actually construct nests and stay in the vicinity of the hatchlings after incubation.[66] Most pythons coil around their egg-clutches and remain with them until they hatch.[68] A female python will not leave the eggs, except to occasionally bask in the sun or drink water. She will even "shiver" to generate heat to incubate the eggs.[68]
174
+
175
+ Some species of snake are ovoviviparous and retain the eggs within their bodies until they are almost ready to hatch.[69][70] Several species, such as the boa constrictor and green anaconda, have recently been confirmed to be fully viviparous, nourishing their young through a placenta as well as a yolk sac; this is highly unusual among reptiles, and otherwise known only in requiem sharks and placental mammals.[69][70] Retention of eggs and live birth are most often associated with colder environments.[66][70]
176
+
177
+ Sexual selection in snakes is demonstrated by the 3,000 species that each use different tactics in acquiring mates.[71] Ritual combat between males for the females they want to mate with includes topping, a behavior exhibited by most viperids in which one male twists around the vertically elevated fore body of its opponent and forces it downward. It is common for neck biting to occur while the snakes are entwined.[72]
178
+
179
+ Parthenogenesis is a natural form of reproduction in which growth and development of embryos occur without fertilization. Agkistrodon contortrix (copperhead) and Agkistrodon piscivorus (cottonmouth) can reproduce by facultative parthenogenesis; that is, they are capable of switching from a sexual mode of reproduction to an asexual mode.[73] The type of parthenogenesis that likely occurs is automixis with terminal fusion, a process in which two terminal products from the same meiosis fuse to form a diploid zygote. This process leads to genome-wide homozygosity, expression of deleterious recessive alleles, and often to developmental abnormalities. Both captive-born and wild-born A. contortrix and A. piscivorus appear to be capable of this form of parthenogenesis.[73]
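+
+ The genome-wide homozygosity described above can be illustrated with a short Python sketch of terminal-fusion automixis. The loci and allele labels are invented for illustration, and crossover between each locus and its centromere is ignored:
+
+ import random
+
+ def terminal_fusion_offspring(mother):
+     """Fuse the egg with its sister meiotic product (the second polar body)."""
+     offspring = {}
+     for locus, (allele1, allele2) in mother.items():
+         # Meiosis I separates the homologous chromosomes; one allele is
+         # retained in the cell that completes meiosis II.
+         retained = random.choice((allele1, allele2))
+         # Meiosis II separates sister chromatids into the egg and the second
+         # polar body; with no crossover, both carry the retained allele, so
+         # terminal fusion yields a zygote homozygous at this locus.
+         offspring[locus] = (retained, retained)
+     return offspring
+
+ mother = {"locus1": ("A", "a"), "locus2": ("B", "b")}
+ print(terminal_fusion_offspring(mother))
+ # e.g. {'locus1': ('a', 'a'), 'locus2': ('B', 'B')} -- homozygous at every locus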
180
+
181
+ Reproduction in squamate reptiles is almost exclusively sexual. Males ordinarily have a ZZ pair of sex determining chromosomes, and females a ZW pair. However, the Colombian Rainbow boa (Epicrates maurus) can also reproduce by facultative parthenogenesis resulting in production of WW female progeny.[74] The WW females are likely produced by terminal automixis.
182
+
183
+ In regions where winters are colder than snakes can tolerate while remaining active, local species will brumate. Unlike hibernation, in which mammals are actually asleep, brumating reptiles are awake but inactive. Individual snakes may brumate in burrows, under rock piles, or inside fallen trees, or snakes may aggregate in large numbers at hibernacula.
184
+
185
+ All snakes are strictly carnivorous, eating small animals including lizards, frogs, other snakes, small mammals, birds, eggs, fish, snails, worms and insects.[17][3][18][75] Because snakes cannot bite or tear their food to pieces, they must swallow prey whole. The body size of a snake has a major influence on its eating habits: smaller snakes eat smaller prey. Juvenile pythons might start out feeding on lizards or mice and graduate to small deer or antelope as adults, for example.
186
+
187
+ The snake's jaw is a complex structure. Contrary to the popular belief that snakes can dislocate their jaws, snakes have a very flexible lower jaw, the two halves of which are not rigidly attached, and numerous other joints in their skull (see snake skull), allowing them to open their mouths wide enough to swallow their prey whole, even if it is larger in diameter than the snake itself.[75] For example, the African egg-eating snake has flexible jaws adapted for eating eggs much larger than the diameter of its head.[17]:81 This snake has no teeth, but does have bony protrusions on the inside edge of its spine, which it uses to break shells when it eats eggs.[17]:81
188
+
189
+ While the majority of snakes eat a variety of prey animals, there is some specialization by some species. King cobras and the Australian bandy-bandy consume other snakes. Snakes of the family Pareidae have more teeth on the right side of their mouths than on the left, as the shells of their prey usually spiral clockwise.[17]:184[76][77]
190
+
191
+ Some snakes have a venomous bite, which they use to kill their prey before eating it.[75][78] Other snakes kill their prey by constriction.[75] Still others swallow their prey whole and alive.[17]:81[75]
192
+
193
+ After eating, snakes become dormant while the process of digestion takes place.[50] Digestion is an intense activity, especially after consumption of large prey. In species that feed only sporadically, the entire intestine enters a reduced state between meals to conserve energy. The digestive system is then 'up-regulated' to full capacity within 48 hours of prey consumption. Because snakes are ectothermic ("cold-blooded"), the surrounding temperature plays a large role in their digestion. The ideal temperature for snakes to digest is 30 °C (86 °F). So much metabolic energy is involved in a snake's digestion that in the South American rattlesnake (Crotalus durissus), surface body temperature increases by as much as 1.2 °C (2.2 °F) during the digestive process.[79] Because of this, a snake disturbed after having eaten recently will often regurgitate its prey to be able to escape the perceived threat. When undisturbed, the digestive process is highly efficient, with the snake's digestive enzymes dissolving and absorbing everything but the prey's hair (or feathers) and claws, which are excreted along with waste.
194
+
195
+ The lack of limbs does not impede the movement of snakes. They have developed several different modes of locomotion to deal with particular environments. Unlike the gaits of limbed animals, which form a continuum, each mode of snake locomotion is discrete and distinct from the others; transitions between modes are abrupt.[80][81]
196
+
197
+ Lateral undulation is the sole mode of aquatic locomotion, and the most common mode of terrestrial locomotion.[81] In this mode, the body of the snake alternately flexes to the left and right, resulting in a series of rearward-moving "waves".[80] While this movement appears rapid, snakes have rarely been documented moving faster than two body-lengths per second, often much less.[82] This mode of movement has the same net cost of transport (calories burned per meter moved) as running in lizards of the same mass.[83]
198
+
199
+ Terrestrial lateral undulation is the most common mode of terrestrial locomotion for most snake species.[80] In this mode, the posteriorly moving waves push against contact points in the environment, such as rocks, twigs, irregularities in the soil, etc.[80] Each of these environmental objects, in turn, generates a reaction force directed forward and towards the midline of the snake, resulting in forward thrust while the lateral components cancel out.[84] The speed of this movement depends upon the density of push-points in the environment, with a medium density of about 8[clarification needed] along the snake's length being ideal.[82] The wave speed is precisely the same as the snake speed, and as a result, every point on the snake's body follows the path of the point ahead of it, allowing snakes to move through very dense vegetation and small openings.[84]
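+
+ As a minimal sketch of this force balance (an idealized model, not taken from the cited studies): if each push-point i exerts a reaction force of magnitude F_i at an angle \theta_i to the direction of travel, then
+
+ F_{forward} = \sum_i F_i \cos\theta_i   (forward components add up to the net thrust)
+ F_{lateral} = \sum_i F_i \sin\theta_i \approx 0   (left- and right-directed components alternate and cancel)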
200
+
201
+ When swimming, the waves become larger as they move down the snake's body, and the wave travels backwards faster than the snake moves forwards.[85] Thrust is generated as the snake pushes its body against the water, resulting in the observed slip. In spite of overall similarities, studies show that the pattern of muscle activation is different in aquatic versus terrestrial lateral undulation, which justifies calling them separate modes.[86] All snakes can laterally undulate forward (with backward-moving waves), but only sea snakes have been observed reversing the motion (moving backwards with forward-moving waves).[80]
202
+
203
+ Most often employed by colubroid snakes (colubrids, elapids, and vipers) when the snake must move in an environment that lacks irregularities to push against (rendering lateral undulation impossible), such as a slick mud flat, or a sand dune, sidewinding is a modified form of lateral undulation in which all of the body segments oriented in one direction remain in contact with the ground, while the other segments are lifted up, resulting in a peculiar "rolling" motion.[87][88] This mode of locomotion overcomes the slippery nature of sand or mud by pushing off with only static portions on the body, thereby minimizing slipping.[87] The static nature of the contact points can be shown from the tracks of a sidewinding snake, which show each belly scale imprint, without any smearing. This mode of locomotion has very low caloric cost, less than ⅓ of the cost for a lizard to move the same distance.[83] Contrary to popular belief, there is no evidence that sidewinding is associated with the sand being hot.[87]
204
+
205
+ When push-points are absent, but there is not enough space to use sidewinding because of lateral constraints, such as in tunnels, snakes rely on concertina locomotion.[80][88] In this mode, the snake braces the posterior portion of its body against the tunnel wall while the front of the snake extends and straightens.[87] The front portion then flexes and forms an anchor point, and the posterior is straightened and pulled forwards. This mode of locomotion is slow and very demanding, up to seven times the cost of laterally undulating over the same distance.[83] This high cost is due to the repeated stops and starts of portions of the body as well as the necessity of using active muscular effort to brace against the tunnel walls.
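+
+ Taking the relative cost figures quoted above at face value, a short Python sketch can line the locomotion modes up against a common baseline; the absolute baseline value is an arbitrary placeholder, not a measurement:
+
+ BASELINE_LIZARD_RUN = 1.0  # arbitrary energy units per metre; placeholder only
+
+ relative_cost = {
+     "lateral undulation": 1.0,  # same net cost of transport as lizard running
+     "sidewinding": 1.0 / 3.0,   # "less than 1/3 of the cost"; upper bound used here
+     "concertina": 7.0,          # "up to seven times the cost" of lateral undulation
+ }
+
+ for mode, factor in sorted(relative_cost.items(), key=lambda kv: kv[1]):
+     print(f"{mode}: ~{factor * BASELINE_LIZARD_RUN:.2f} units per metre")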
206
+
207
+ The movement of snakes in arboreal habitats has only recently been studied.[89] While on tree branches, snakes use several modes of locomotion depending on species and bark texture.[89] In general, snakes will use a modified form of concertina locomotion on smooth branches, but will laterally undulate if contact points are available.[89] Snakes move faster on small branches and when contact points are present, in contrast to limbed animals, which do better on large branches with little 'clutter'.[89]
208
+
209
+ Gliding snakes (Chrysopelea) of Southeast Asia launch themselves from branch tips, spreading their ribs and laterally undulating as they glide between trees.[87][90][91] These snakes can perform a controlled glide for hundreds of feet depending upon launch altitude and can even turn in midair.[87][90]
210
+
211
+ The slowest mode of snake locomotion is rectilinear locomotion, which is also the only one where the snake does not need to bend its body laterally, though it may do so when turning.[92] In this mode, the belly scales are lifted and pulled forward before being placed down and the body pulled over them. Waves of movement and stasis pass posteriorly, resulting in a series of ripples in the skin.[92] The ribs of the snake do not move in this mode of locomotion and this method is most often used by large pythons, boas, and vipers when stalking prey across open ground as the snake's movements are subtle and harder to detect by their prey in this manner.[87]
212
+
213
+ Snakes do not ordinarily prey on humans. Unless startled or injured, most snakes prefer to avoid contact and will not attack humans. With the exception of large constrictors, nonvenomous snakes are not a threat to humans. The bite of a nonvenomous snake is usually harmless; their teeth are not adapted for tearing or inflicting a deep puncture wound, but rather grabbing and holding. Although the possibility of infection and tissue damage is present in the bite of a nonvenomous snake, venomous snakes present a far greater hazard to humans.[17]:209 The World Health Organisation (WHO) lists snakebite under the "other neglected conditions" category.[95]
214
+
215
+ Documented deaths resulting from snake bites are uncommon. Nonfatal bites from venomous snakes may result in the need for amputation of a limb or part thereof. Of the roughly 725 species of venomous snakes worldwide, only 250 are able to kill a human with one bite. Australia averages only one fatal snake bite per year. In India, 250,000 snakebites are recorded in a single year, with as many as 50,000 recorded deaths.[96] The WHO estimates that on the order of 100,000 people die each year as a result of snake bites, and that snakebites cause around three times as many amputations and other permanent disabilities annually.[97]
216
+
217
+ The treatment for a snakebite is as variable as the bite itself. The most common and effective method is antivenom (or antivenin), a serum made from the venom of the snake. Some antivenom is species-specific (monovalent) while some is made for use with multiple species in mind (polyvalent). In the United States, for example, all species of venomous snakes are pit vipers, with the exception of the coral snake. To produce antivenom, a mixture of the venoms of different species of rattlesnakes, copperheads, and cottonmouths is injected into the body of a horse in ever-increasing dosages until the horse is immunized. Blood is then extracted from the immunized horse; the serum is separated, further purified, and freeze-dried. It is reconstituted with sterile water to become antivenom. For this reason, people who are allergic to horses are more likely to suffer an allergic reaction to antivenom.[98] Antivenom for the more dangerous species (such as mambas, taipans, and cobras) is made in a similar manner in India, South Africa, and Australia, although these antivenoms are species-specific.
218
+
219
+ In some parts of the world, especially in India, snake charming is a roadside show performed by a charmer. In such a show, the snake charmer carries a basket that contains a snake that he seemingly charms by playing tunes from his flutelike musical instrument, to which the snake responds.[99] Snakes lack external ears, though they do have internal ears, and respond to the movement of the flute, not the actual noise.[99]
220
+
221
+ The Wildlife Protection Act of 1972 in India technically proscribes snake charming on grounds of reducing animal cruelty. Other snake charmers also have a snake and mongoose show, where both the animals have a mock fight; however, this is not very common, as the snakes, as well as the mongooses, may be seriously injured or killed. Snake charming as a profession is dying out in India because of competition from modern forms of entertainment and environment laws proscribing the practice. Many Indians have never seen snake charming and it is becoming a folktale of the past.[99][100][101][102]
222
+
223
+ The Irulas tribe of Andhra Pradesh and Tamil Nadu in India have been hunter-gatherers in the hot, dry plains forests, and have practiced the art of snake catching for generations. They have a vast knowledge of snakes in the field. They generally catch the snakes with the help of a simple stick. Earlier, the Irulas caught thousands of snakes for the snake-skin industry. After the complete ban of the snake-skin industry in India and protection of all snakes under the Indian Wildlife (Protection) Act 1972, they formed the Irula Snake Catcher's Cooperative and switched to catching snakes for removal of venom, releasing them in the wild after four extractions. The venom so collected is used for producing life-saving antivenom, biomedical research and for other medicinal products.[103] The Irulas are also known to eat some of the snakes they catch and are very useful in rat extermination in the villages.
224
+
225
+ Despite the existence of snake charmers, there have also been professional snake catchers or wranglers. Modern-day snake trapping involves a herpetologist using a long stick with a V-shaped end. Some television show hosts, like Bill Haast, Austin Stevens, Steve Irwin, and Jeff Corwin, prefer to catch snakes using bare hands.
226
+
227
+ While not commonly thought of as food in most cultures, in others the consumption of snakes is acceptable, or even considered a delicacy. Snake soup of Cantonese cuisine is consumed by locals in autumn to warm up their bodies. Western cultures document the consumption of snakes only under extreme circumstances of hunger.[104] An exception is cooked rattlesnake meat, which is commonly consumed in Texas[105] and parts of the Midwestern United States. In Asian countries such as China, Taiwan, Thailand, Indonesia, Vietnam and Cambodia, drinking the blood of snakes—particularly the cobra—is believed to increase sexual virility.[106] The blood is drained while the cobra is still alive when possible, and is usually mixed with some form of liquor to improve the taste.[106]
228
+
229
+ In some Asian countries, the use of snakes in alcohol is also accepted. In such cases, the body of a snake or several snakes is left to steep in a jar or container of liquor. It is claimed that this makes the liquor stronger (as well as more expensive). One example of this is the Habu snake sometimes placed in the Okinawan liquor Habushu (ハブ酒), also known as "Habu Sake".[107]
230
+
231
+ Snake wine (蛇酒) is an alcoholic beverage produced by infusing whole snakes in rice wine or grain alcohol. The drink was first recorded as being consumed in China during the Western Zhou dynasty; in traditional Chinese medicine it was considered an important curative and believed to reinvigorate a person.[108]
232
+
233
+ In the Western world, some snakes (especially docile species such as the ball python and corn snake) are kept as pets. To meet this demand a captive breeding industry has developed. Snakes bred in captivity tend to make better pets and are considered preferable to wild-caught specimens.[109] Snakes can be very low-maintenance pets, especially compared to more traditional species. They require minimal space, as most common species do not exceed 5 feet (1.5 m) in length. Pet snakes can be fed relatively infrequently, usually once every 5 to 14 days. Certain snakes have a lifespan of more than 40 years if given proper care.
234
+
235
+ In ancient Mesopotamia, Nirah, the messenger god of Ištaran, was represented as a serpent on kudurrus, or boundary stones.[110] Representations of two intertwined serpents are common in Sumerian art and Neo-Sumerian artwork[110] and still appear sporadically on cylinder seals and amulets until as late as the thirteenth century BC.[110] The horned viper (Cerastes cerastes) appears in Kassite and Neo-Assyrian kudurrus[110] and is invoked in Assyrian texts as a magical protective entity.[110] A dragon-like creature with horns, the body and neck of a snake, the forelegs of a lion, and the hind-legs of a bird appears in Mesopotamian art from the Akkadian Period until the Hellenistic Period (323 BC–31 BC).[110] This creature, known in Akkadian as the mušḫuššu, meaning "furious serpent", was used as a symbol for particular deities and also as a general protective emblem.[110] It seems to have originally been the attendant of the Underworld god Ninazu,[110] but later became the attendant to the Hurrian storm-god Tishpak, as well as, later, Ninazu's son Ningishzida, the Babylonian national god Marduk, the scribal god Nabu, and the Assyrian national god Ashur.[110]
236
+
237
+ In Egyptian history, the snake occupies a primary role with the Nile cobra adorning the crown of the pharaoh in ancient times. It was worshipped as one of the gods and was also used for sinister purposes: murder of an adversary and ritual suicide (Cleopatra).[citation needed] The ouroboros was a well-known ancient Egyptian symbol of a serpent swallowing its own tail.[111] The precursor to the ouroboros was the "Many-Faced",[111] a serpent with five heads, who, according to the Amduat, the oldest surviving Book of the Afterlife, was said to coil around the corpse of the sun god Ra protectively.[111] The earliest surviving depiction of a "true" ouroboros comes from the gilded shrines in the tomb of Tutankhamun.[111] In the early centuries AD, the ouroboros was adopted as a symbol by Gnostic Christians[111] and chapter 136 of the Pistis Sophia, an early Gnostic text, describes "a great dragon whose tail is in its mouth".[111] In medieval alchemy, the ouroboros became a typical western dragon with wings, legs, and a tail.[111]
238
+
239
+ In the Bible, King Nahash of Ammon, whose name means "Snake", is depicted very negatively, as a particularly cruel and despicable enemy of the ancient Hebrews.
240
+
241
+ The ancient Greeks used the Gorgoneion, a depiction of a hideous face with serpents for hair, as an apotropaic symbol to ward off evil.[112] In a Greek myth described by Pseudo-Apollodorus in his Bibliotheca, Medusa was a Gorgon with serpents for hair whose gaze turned all those who looked at her to stone and was slain by the hero Perseus.[113][114][115] In the Roman poet Ovid's Metamorphoses, Medusa is said to have once been a beautiful priestess of Athena, whom Athena turned into a serpent-haired monster after she was raped by the god Poseidon in Athena's temple.[116] In another myth referenced by the Boeotian poet Hesiod and described in detail by Pseudo-Apollodorus, the hero Heracles is said to have slain the Lernaean Hydra,[117][118] a multiple-headed serpent which dwelt in the swamps of Lerna.[117][118]
242
+
243
+ The legendary account of the foundation of Thebes mentioned a monster snake guarding the spring from which the new settlement was to draw its water. In fighting and killing the snake, the companions of the founder Cadmus all perished – leading to the term "Cadmean victory" (i.e. a victory involving one's own ruin).[citation needed]
244
+
245
+ Three medical symbols involving snakes that are still used today are the Bowl of Hygieia, symbolizing pharmacy, and the Caduceus and Rod of Asclepius, which are symbols denoting medicine in general.[49]
246
+
247
+ One of the etymologies proposed for the common female first name Linda is that it might derive from Old German Lindi or Linda, meaning a serpent.
248
+
249
+ India is often called the land of snakes and is steeped in tradition regarding them.[119] Snakes are worshipped as gods even today, with many women pouring milk on snake pits (despite snakes' aversion to milk).[119] The cobra is seen on the neck of Shiva, and Vishnu is often depicted sleeping on a seven-headed snake or within the coils of a serpent.[120] There are also several temples in India solely for cobras, sometimes called Nagraj (King of Snakes), and it is believed that snakes are symbols of fertility. There is a Hindu festival called Nag Panchami each year on which day snakes are venerated and prayed to. See also Nāga.[citation needed]
250
+
251
+ In India there is another mythology about snakes, commonly known in Hindi as "Ichchhadhari" snakes. Such snakes can take the form of any living creature, but prefer human form. These mythical snakes possess a valuable gem called "Mani", which is more brilliant than diamond. There are many stories in India about greedy people trying to possess this gem and ending up getting killed.[citation needed]
252
+
253
+ The snake is one of the 12 celestial animals of Chinese zodiac, in the Chinese calendar.[121]
254
+
255
+ Many ancient Peruvian cultures worshipped nature.[122] They emphasized animals and often depicted snakes in their art.[123]
256
+
257
+ Snakes are a part of Hindu worship. A festival, Nag Panchami, in which participants worship either images of or live Nāgas (cobras), is celebrated every year. Most images of Lord Shiva depict a snake around his neck. The Puranas have various stories associated with snakes; in them, Shesha is said to hold all the planets of the Universe on his hoods and to constantly sing the glories of Vishnu from all his mouths. He is sometimes referred to as "Ananta-Shesha", which means "Endless Shesha". Other notable snakes in Hinduism are Ananta, Vasuki, Taxak, Karkotaka and Pingala. The term Nāga is used to refer to entities that take the form of large snakes in Hinduism and Buddhism.
258
+
259
+ Snakes have also been widely revered, such as in ancient Greece, where the serpent was seen as a healer. Asclepius carried a serpent wound around his wand, a symbol seen today on many ambulances.
260
+
261
+ In religious terms, the snake and jaguar are arguably the most important animals in ancient Mesoamerica. "In states of ecstasy, lords dance a serpent dance; great descending snakes adorn and support buildings from Chichen Itza to Tenochtitlan, and the Nahuatl word coatl meaning serpent or twin, forms part of primary deities such as Mixcoatl, Quetzalcoatl, and Coatlicue."[124] In both Maya and Aztec calendars, the fifth day of the week was known as Snake Day.
262
+
263
+ In Judaism, the snake of brass is also a symbol of healing, of one's life being saved from imminent death.[125]
264
+
265
+ In some parts of Christianity, Christ's redemptive work is compared to saving one's life through beholding the Nehushtan (serpent of brass).[126] Snake handlers use snakes as an integral part of church worship in order to exhibit their faith in divine protection. However, more commonly in Christianity, the serpent has been seen as a representative of evil and sly plotting, which can be seen in the description in Genesis chapter 3 of a snake in the Garden of Eden tempting Eve.[127] Saint Patrick is reputed to have expelled all snakes from Ireland while converting the country to Christianity in the 5th century, thus explaining the absence of snakes there.
266
+
267
+ In Christianity and Judaism, the snake makes its infamous appearance in the first book of the Bible, when a serpent appears before the first couple Adam and Eve and tempts them with the forbidden fruit from the Tree of Knowledge.[127] The snake returns in Exodus when Moses, as a sign of God's power, turns his staff into a snake, and again when Moses makes the Nehushtan, a bronze snake on a pole that, when looked at, cured the people of bites from the snakes that plagued them in the desert. The serpent makes its final appearance symbolizing Satan in the Book of Revelation: "And he laid hold on the dragon the old serpent, which is the devil and Satan, and bound him for a thousand years."[128]
268
+
269
+ In Neo-Paganism and Wicca, the snake is seen as a symbol of wisdom and knowledge.
270
+
271
+ Several compounds from snake venoms are being researched as potential treatments or preventatives for pain, cancers, arthritis, stroke, heart disease, hemophilia, and hypertension, and to control bleeding (e.g. during surgery).[130][131][132]
272
+
273
+ Caldwell MW, Nydam RL, Palci A, Apesteguía S (January 2015). "The oldest known snakes from the Middle Jurassic-Lower Cretaceous provide insights on snake evolution". Nature Communications. 6 (5996): 5996. Bibcode:2015NatCo...6.5996C. doi:10.1038/ncomms6996. PMID 25625704.